Pasteurized whole ewe's and cow's milk was used in the manufacture of Feta and Telemes cheeses, respectively, according to standard procedures. In both cases, the milk had been inoculated with Escherichia coli O157:H7 at a concentration of ca. 5.1 log CFU/ml and with thermophilic or mesophilic starter cultures at ca. 5.3 to 5.6 log CFU/ml. In the first 10 h of cheesemaking, the pathogen increased by 1.18 and 0.82 log CFU/g in Feta cheese and by 1.56 and 1.35 log CFU/g in Telemes cheese for the trials with thermophilic and mesophilic starters, respectively. After 24 h of fermentation, a decrease in E. coli O157:H7 was observed in all trials; by that time, the pH had fallen to 4.81 to 5.10. Fresh cheeses were salted and held at 16 degrees C for ripening until the pH was reduced to 4.60, then moved into storage at 4 degrees C to complete ripening. During ripening, the E. coli O157:H7 population decreased significantly (P ≤ 0.001) and was no longer detectable in Feta cheese after 44 and 36 days and in Telemes cheese after 40 and 30 days for the trials with thermophilic and mesophilic starters, respectively. The estimated times required for one decimal reduction of the E. coli O157:H7 population after the first day of processing were 9.71 and 9.26 days for Feta cheese and 9.09 and 7.69 days for Telemes cheese for the trials with thermophilic and mesophilic starters, respectively.
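The reported decimal reduction times follow from the standard assumption of log-linear (first-order) inactivation kinetics. A minimal sketch of that arithmetic, using illustrative log counts rather than the study's raw data:

```python
def decimal_reduction_time(initial_log, final_log, elapsed_days):
    """D-value: days required for a one-log (90%) reduction,
    assuming first-order (log-linear) inactivation kinetics."""
    return elapsed_days / (initial_log - final_log)

# Illustrative numbers only: a 4.43-log drop over 43 days yields a
# D-value close to the 9.71 days reported for the Feta/thermophilic trial.
d_value = decimal_reduction_time(6.28, 1.85, 43)
```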
Retrieving high-quality endogenous ancient DNA (aDNA) poses several challenges, including low molecular copy number, high rates of fragmentation, damage at read termini, and potential presence of exogenous contaminant DNA. All these factors complicate a reliable reconstruction of consensus aDNA sequences in reads from high-throughput sequencing platforms. Here, we report findings from a thorough evaluation of two alternative tools (ANGSD and schmutzi) aimed at overcoming these issues and constructing high-quality ancient mitogenomes. Raw genomic data (BAM/FASTQ) from a total of 17 previously published whole ancient human genomes ranging from the 14th to the 7th millennium BCE were retrieved and mitochondrial consensus sequences were reconstructed using different quality filters, with their accuracy measured and compared. Moreover, the influence of different sequence parameters (number of reads, sequenced bases, mean coverage, and rate of deamination and contamination) as predictors of derived sequence quality was evaluated. Complete mitogenomes were successfully reconstructed for all ancient samples, and for the majority of them, filtering substantially improved mtDNA consensus calling and haplogroup prediction. Overall, the schmutzi pipeline, which estimates and takes into consideration exogenous contamination, appeared to have the edge over the much faster and user-friendly alternative method (ANGSD) in moderate to high coverage samples (>1,000,000 reads). ANGSD, however, through its read termini trimming filter, showed better capabilities in calling the consensus sequence from low-quality samples. Among all the predictors of overall sample quality examined, the strongest correlation was found for the available number of sequence reads and bases. In the process, we report a previously unassigned haplogroup (U3b) for an Early Chalcolithic individual from Southern Anatolia/Northern Levant.
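The two filters the abstract highlights, base-quality thresholds and read-termini trimming, can be illustrated with a toy majority-rule consensus caller. This is a simplified sketch of the general idea, not the actual ANGSD or schmutzi algorithm; the read format and parameter values are assumptions for illustration:

```python
from collections import Counter

def call_consensus(reads, ref_len, min_base_qual=30, trim=5, min_depth=2):
    """Naive majority-rule consensus caller illustrating two aDNA filters:
    - min_base_qual: discard low-quality base calls
    - trim: ignore the first/last `trim` bases of each read, where
      post-mortem deamination damage (C->T / G->A) concentrates.
    Each read is a tuple (0-based start, bases, Phred quality scores)."""
    piles = [Counter() for _ in range(ref_len)]
    for start, seq, quals in reads:
        for i, (base, q) in enumerate(zip(seq, quals)):
            if i < trim or i >= len(seq) - trim:
                continue  # skip damage-prone read termini
            if q < min_base_qual:
                continue  # skip unreliable base calls
            pos = start + i
            if 0 <= pos < ref_len:
                piles[pos][base] += 1
    out = []
    for pile in piles:
        if sum(pile.values()) < min_depth:
            out.append("N")  # insufficient coverage at this position
        else:
            out.append(pile.most_common(1)[0][0])
    return "".join(out)
```

In real pipelines the same trade-off appears: aggressive trimming discards damage but also coverage, which is why it helps most on low-quality samples.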
Operational modal analysis (OMA) is an essential tool for understanding the structural dynamics of offshore wind turbines (OWTs). However, the classical OMA algorithms require the excitation of the structure to be stationary white noise, which is often not the case for operational OWTs due to the presence of periodic excitation caused by rotor rotation. To address this issue, several solutions have been proposed in the literature, including the Kalman filter-based stochastic subspace identification (KF-SSI) method which eliminates harmonics through estimation and orthogonal projection. In this paper, an enhanced version of the KF-SSI method is presented that involves a concatenation step, allowing multiple datasets with similar environmental conditions to be used in the identification process, resulting in higher precision. This enhanced framework is applied to an operational OWT and compared to other OMA methods, such as the modified least-squares complex exponential and PolyMAX. Using field data from a multi-megawatt operational OWT, it is shown that the enhanced framework is able to accurately distinguish the first three bending modes with more stable estimates and lower variance compared to the original KF-SSI algorithm and follows a similar trend compared to other approaches.
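The core idea of removing rotor harmonics before identification can be sketched with a least-squares projection: fit sinusoids at the known harmonic frequencies and subtract them, leaving the broadband structural response. This is a simplified stand-in for the KF-SSI harmonic-elimination step (which uses a Kalman filter), with all signal parameters chosen for illustration:

```python
import numpy as np

def remove_harmonics(y, fs, freqs):
    """Remove known periodic components from signal y (sample rate fs)
    by least-squares projection: fit a sin/cos pair at each harmonic
    frequency in `freqs` and subtract the fitted deterministic part."""
    t = np.arange(len(y)) / fs
    cols = []
    for f in freqs:
        cols.append(np.sin(2 * np.pi * f * t))
        cols.append(np.cos(2 * np.pi * f * t))
    H = np.column_stack(cols)              # regressor matrix of harmonics
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    return y - H @ coef                    # residual: broadband response
```

In practice the rotor speed (and hence the harmonic frequencies) varies in time, which is precisely why estimation-based approaches such as KF-SSI are preferred over a fixed-frequency projection like this one.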
International Journal of Information Security - Timely detection and effective treatment of cyber-attacks for protecting personal and sensitive data from unauthorized disclosure constitute a core...
Dielectric materials with higher energy storage and electromagnetic (EM) energy conversion are in high demand for advancing electronic devices and military stealth and for mitigating EM wave pollution. Existing dielectric materials for high-energy-storage electronics and for dielectric-loss EM wave absorbers are studied toward these goals, each aligned with current global grand challenges. Libraries of dielectric materials with desirable permittivity, dielectric loss, and/or dielectric breakdown strength that potentially meet device requirements are reviewed here. However, when translating these materials into energy storage devices, the commonly encountered shortcomings arise from one of two combinations: (a) low permittivity, high dielectric loss, and low breakdown strength; or (b) low permittivity, low dielectric loss, and process complexity. Against this backdrop, and with the overarching objectives of enabling high-efficiency energy storage and EM energy conversion, recent advances in by-design inorganic–organic hybrid materials are reviewed here, with a focus on design approaches, preparation methods, and characterization techniques. In light of their strengths and weaknesses, potential strategies to foster their commercial adoption are critically interrogated.
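The interplay of permittivity and breakdown strength mentioned above is captured, for linear dielectrics, by the standard energy-density relation U = ½ε₀εᵣE_b². A quick sanity-check calculation with illustrative values (not figures from any material in the review; nonlinear ferroelectrics require integrating the full P–E loop instead):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def energy_density_j_per_cm3(eps_r, e_breakdown_v_per_m):
    """Maximum recoverable energy density of a *linear* dielectric,
    U = 0.5 * eps0 * eps_r * E_b^2, converted from J/m^3 to J/cm^3."""
    u_j_per_m3 = 0.5 * EPS0 * eps_r * e_breakdown_v_per_m ** 2
    return u_j_per_m3 / 1e6  # 1 m^3 = 1e6 cm^3

# Illustrative: relative permittivity 10 at a breakdown field of 500 MV/m
u = energy_density_j_per_cm3(10, 500e6)
```

The quadratic dependence on E_b explains why breakdown strength, rather than permittivity alone, dominates the achievable energy density.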
Content generation that is both relevant and up to date with the current threats of the target audience is a critical element in the success of any cyber security exercise (CSE). Through this work, we explore the results of applying machine learning techniques to unstructured information sources to generate structured CSE content. The corpus of our work is a large dataset of publicly available cyber security articles that have been used to predict future threats and to form the skeleton for new exercise scenarios. Machine learning techniques, like named entity recognition and topic extraction, have been utilised to structure the information based on a novel ontology we developed, named Cyber Exercise Scenario Ontology (CESO). Moreover, we used clustering with outliers to classify the generated extracted data into objects of our ontology. Graph comparison methodologies were used to match generated scenario fragments to known threat actors’ tactics and help enrich the proposed scenario accordingly with the help of synthetic text generators. CESO has also been chosen as the prominent way to express both fragments and the final proposed scenario content by our AI-assisted Cyber Exercise Framework. Our methodology was assessed by providing a set of generated scenarios for evaluation to a group of experts to be used as part of a real-world awareness tabletop exercise.
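The "clustering with outliers" step for mapping extracted fragments onto ontology objects can be sketched as nearest-prototype assignment with a similarity threshold. This is a toy bag-of-words stand-in, not the paper's actual method; the class names, prototype texts, and threshold are all illustrative and are not drawn from CESO:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[k] * b.get(k, 0) for k in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify_with_outliers(fragments, prototypes, threshold=0.2):
    """Assign each extracted text fragment to the closest ontology class
    by cosine similarity to a class prototype; fragments scoring below
    `threshold` against every class are flagged as outliers."""
    labels = []
    for frag in fragments:
        vec = Counter(frag.lower().split())
        best, score = None, 0.0
        for cls, proto in prototypes.items():
            s = cosine(vec, Counter(proto.lower().split()))
            if s > score:
                best, score = cls, s
        labels.append(best if score >= threshold else "outlier")
    return labels
```

A production pipeline would use dense embeddings of NER/topic output rather than raw token counts, but the outlier mechanism, rejecting fragments that fit no ontology class well enough, is the same.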