php - Symfony 1.4 functional test - reduce memory usage


I have a CSV file that defines the routes to test, and the expected status code each route should return.

I am working on a functional test that iterates over the CSV file, makes a request to each route, and checks whether the proper status code is returned.
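The $routes array used below could be built from the CSV file with fgetcsv. A minimal sketch, assuming the file has two columns, path and code (the helper name and column layout are my assumptions, not from the original post):

```php
<?php
// Hypothetical helper: load routes from a CSV file with columns "path,code".
// The function name and column order are assumptions for illustration.
function loadRoutes($csvFile)
{
    $routes = array();
    $handle = fopen($csvFile, 'r');
    while (($row = fgetcsv($handle)) !== false) {
        $routes[] = array('path' => $row[0], 'code' => (int) $row[1]);
    }
    fclose($handle);

    return $routes;
}
```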

$browser = new sfTestFunctional(new sfBrowser());

foreach ($routes as $route) {
    $browser->
        get($route['path'])->
        with('response')->begin()->
            isStatusCode($route['code'])->
        end()
    ;
    print(memory_get_usage());
}

/***************  Output:  *************************

ok 1 - status code 200
97953280# /first_path
ok 2 - status code 200
109607536# /second_path
ok 3 - status code 403
119152936# /third_path
ok 4 - status code 200
130283760# /fourth_path
ok 5 - status code 200
140082888# /fifth_path
...

/***************************************************/

This continues until an "Allowed memory size exhausted" error occurs.

I have increased the amount of allowed memory, which temporarily solved the problem. This is not a permanent solution, though, since more routes will be added to the CSV file over time.
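For reference, the stopgap described above can be done per script rather than globally in php.ini; a sketch, with the 512M value chosen arbitrarily as an example:

```php
<?php
// Stopgap only: raise the PHP memory limit for this test run.
// The 512M value is an arbitrary example, not from the original post.
ini_set('memory_limit', '512M');
```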

Is there a way to reduce the amount of memory the test is using?

I faced the same out-of-memory problem. I needed to crawl a long list of URIs (around 30k) to generate an HTML cache. Like Marek, I tried forking processes. There is still a little leak, but it is insignificant.

As input, I had a text file with one line per URI. Of course, you can adapt the following script to CSV.

const NUMBER_OF_PROCESS = 4;
const SIZE_OF_GROUPS = 5;

require_once(dirname(__FILE__).'/../../config/ProjectConfiguration.class.php');
$configuration = ProjectConfiguration::getApplicationConfiguration('frontend', 'prod', false);
sfContext::createInstance($configuration);

$file = new SplFileObject(dirname(__FILE__).'/list-of-uri.txt');

while ($file->valid()) {
    $count = 0;
    $uris = array();
    while ($file->valid() && $count < NUMBER_OF_PROCESS * SIZE_OF_GROUPS) {
        $uris[] = trim($file->current());
        $file->next();
        $count++;
    }
    $urisGroups = array_chunk($uris, SIZE_OF_GROUPS);

    $childs = array();
    echo "---\t\t\t Forking ".sizeof($urisGroups)." processes \t\t\t ---\n";
    foreach ($urisGroups as $uriGroup) {
        $pid = pcntl_fork();
        if ($pid == -1)
            die('could not fork');
        if (!$pid) {
            $b = new sfBrowser();
            foreach ($uriGroup as $key => $uri) {
                $startTime = microtime(true);
                $b->get($uri);
                $time = microtime(true) - $startTime;
                echo 'Mem: '.memory_get_peak_usage().' - '.$time.'s - URI n°'.($key + 1).' PID '.getmypid().' - status: '.$b->getResponse()->getStatusCode().' - URI: '.$uri."\n";
            }
            exit();
        }
        if ($pid) {
            $childs[] = $pid;
        }
    }

    while (count($childs) > 0) {
        foreach ($childs as $key => $pid) {
            $res = pcntl_waitpid($pid, $status, WNOHANG);

            // if the process has exited
            if ($res == -1 || $res > 0)
                unset($childs[$key]);
        }
        sleep(1);
    }
}

The constant NUMBER_OF_PROCESS defines the number of parallel processes that run (thus, you save time if you have a multi-core processor).

The constant SIZE_OF_GROUPS defines the number of URIs crawled by each sfBrowser in each process. You can decrease it if you still have out-of-memory problems.
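As noted above, the script reads a plain text file but can be adapted to CSV input. A minimal sketch using SplFileObject's CSV mode, assuming the URI sits in the first column (the file name and column layout are examples, not from the original answer):

```php
<?php
// Sketch: read URIs from a CSV file instead of a plain text list.
// Assumes the URI is in the first column; adjust the index for your file.
function readUrisFromCsv($csvFile)
{
    $file = new SplFileObject($csvFile);
    $file->setFlags(SplFileObject::READ_CSV | SplFileObject::SKIP_EMPTY);
    $uris = array();
    foreach ($file as $row) {
        // Guard against the empty rows SplFileObject can yield in CSV mode.
        if (is_array($row) && isset($row[0]) && $row[0] !== null) {
            $uris[] = trim($row[0]);
        }
    }

    return $uris;
}
```

The resulting $uris array can then be chunked with array_chunk exactly as in the script above.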

