permit skipping .lafl_lock files #1220


Closed
Vincebye wants to merge 3 commits

Conversation

Vincebye
Contributor

  • Add a `lafl_lock` attribute to the `InMemoryOnDiskCorpus` struct
  • Add a `no_lafl_lock` function
  • Set the `lafl_lock` attribute to `true` by default in the `_new` function
  • `rename_testcase` and `save_testcase` now check the `lafl_lock` attribute
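The changes listed above can be sketched roughly as follows. This is a simplified, hypothetical stand-in: the real `InMemoryOnDiskCorpus` is generic and its `_new` constructor takes different parameters; only the `lafl_lock` flag and the constructor names come from the PR description.

```rust
use std::path::{Path, PathBuf};

// Simplified stand-in for LibAFL's InMemoryOnDiskCorpus; fields other
// than `lafl_lock` are illustrative only.
struct InMemoryOnDiskCorpus {
    dir_path: PathBuf,
    lafl_lock: bool, // new attribute: whether to write .lafl_lock files
}

impl InMemoryOnDiskCorpus {
    fn _new(dir_path: &Path, lafl_lock: bool) -> Self {
        Self { dir_path: dir_path.to_owned(), lafl_lock }
    }

    /// Default constructor: keep writing .lafl_lock files.
    fn new(dir_path: &Path) -> Self {
        Self::_new(dir_path, true)
    }

    /// Constructor added by this PR: skip .lafl_lock files.
    fn no_lafl_lock(dir_path: &Path) -> Self {
        Self::_new(dir_path, false)
    }
}

fn main() {
    let corpus = InMemoryOnDiskCorpus::no_lafl_lock(Path::new("corpus_dir"));
    println!("dir: {:?}, lafl_lock: {}", corpus.dir_path, corpus.lafl_lock);
}
```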

@Vincebye
Contributor Author

#1129

@@ -229,13 +230,27 @@ where
Self::_new(dir_path.as_ref(), None)
}

/// Create an [`InMemoryOnDiskCorpus`] that will not store `.lafl_lock` files
Member


The docs should clearly state that this will be racy if multiple cores work on the same corpus.

Maybe I'd even rename this function `multicore_unsafe` or similar. The fact that `.lafl_lock` files are produced is just an implementation detail imho, what do you think?

Contributor Author


I think the `.lafl_lock` file is needed when working on multiple cores. On a single core there is no contention, so in that case the user can be allowed to skip generating the `.lafl_lock` file. That's why I didn't consider race conditions for the case where no `.lafl_lock` file is generated.

*testcase.filename_mut() = Some(file_name);
} else {
// When lock_file == false, try to create a file for the new testcase
let new_file_name = loop {
Member


Ah, actually, if you still create a file (just not named `.lafl_lock`), you will not be racy; the only issue is that we may leave empty files behind if we quit. I initially thought you would not check whether the file exists at all.
Interesting.
What's your use case then?

Contributor Author


I don't quite understand why an empty file would be generated. I have experimented with the code locally, and it does not affect file generation.

use std::env;
use std::fs::OpenOptions;
use std::thread;

fn create() {
    let mut ctr = 1;
    let mut file_name = "file".to_owned();
    // Atomically create a new file; on collision, retry with a fresh name.
    let _new_file_name = loop {
        let file_path = env::current_dir().unwrap().join(&file_name);
        println!("Trying with file name: {file_path:?}");
        if OpenOptions::new()
            .write(true)
            .create_new(true)
            .open(&file_path)
            .is_ok()
        {
            println!("File created: {file_path:?}");
            break file_name;
        }
        println!("File already exists: {file_path:?}");
        file_name = format!("{file_name}-{ctr}");
        println!("Trying with new file name: {file_name}");
        ctr += 1;
    };
}




fn main() {
    let mut handles = vec![];

    // Spawn several threads that race to create files with the same name.
    for _ in 0..5 {
        handles.push(thread::spawn(move || {
            create();
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
}

@domenukk
Member

I think the main reason for the lockfile was that this way, you never end up using the same file twice, from two threads. Like, before moving the file, writing it, removing it, or something like that, you'd create the lock file.
But I'm not 100% sure if it's really needed/has benefits.
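The protocol described above (create the lock file before writing, moving, or removing the testcase) can be sketched as follows. This is an illustrative stand-alone sketch, not LibAFL's actual implementation; the `save_testcase` helper and the `.{name}.lafl_lock` naming scheme are assumptions.

```rust
use std::fs::{self, OpenOptions};
use std::io::ErrorKind;
use std::path::Path;

/// Hypothetical sketch: before touching testcase `name`, exclusively
/// create `.{name}.lafl_lock`. `create_new(true)` is atomic, so exactly
/// one process wins a race; losers skip the testcase. Returns whether
/// the testcase was written.
fn save_testcase(dir: &Path, name: &str, data: &[u8]) -> std::io::Result<bool> {
    let lock_path = dir.join(format!(".{name}.lafl_lock"));
    match OpenOptions::new().write(true).create_new(true).open(&lock_path) {
        Ok(_) => {}
        // Someone else holds the lock: skip this testcase.
        Err(e) if e.kind() == ErrorKind::AlreadyExists => return Ok(false),
        Err(e) => return Err(e),
    }
    fs::write(dir.join(name), data)?; // write the real testcase
    fs::remove_file(&lock_path)?; // release the lock
    Ok(true)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("lafl_lock_demo");
    let _ = fs::remove_dir_all(&dir); // start from a clean directory
    fs::create_dir_all(&dir)?;
    println!("saved: {}", save_testcase(&dir, "tc0", b"input")?);
    fs::remove_dir_all(&dir)?;
    Ok(())
}
```

Note that if a process is killed between creating and removing the lock, a stale (possibly empty) lock file is left behind, which matches the "empty files if we quit" concern raised earlier in the thread.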

@tokatoka tokatoka closed this May 10, 2023