Search results for: gradient boosting machine
2016 Influence of Magnetized Water on the Split Tensile Strength of Concrete
Authors: Justine Cyril E. Nunag, Nestor B. Sabado Jr., Jienne Chester M. Tolosa
Abstract:
Concrete has high compressive strength but low tensile strength. This low tensile strength is regarded as concrete's primary weakness, which is why it is typically reinforced with steel, a material resistant to tension. Even with steel, however, cracking can occur. In strengthening concrete, only a few researchers have modified the water used in the mix. This study compares the split tensile strength of normal structural concrete to that of concrete prepared with magnetized water and a quick-setting admixture. In this context, magnetized water is tap water that has undergone a magnetic process. To test the hypothesis that magnetized water leads to higher split tensile strength, twenty concrete specimens were made, in four groups of five samples each, differentiated by the number of magnetization cycles (0, 50, 100, and 150). The split tensile strength data from the Universal Testing Machine were then analyzed using statistical models and tests to determine the effect of magnetized water. The results showed a moderate (+0.579) but significant degree of correlation. The researchers also found that using magnetized water for 50 cycles did not produce a significant increase in the concrete's split tensile strength, which influenced the analysis of variance. These results suggest that a concrete mix containing magnetized water and a quick-setting admixture alters the typical split tensile strength of normal concrete. Magnetized water has a significant impact on concrete tensile strength; its hardness property influenced the split tensile strength, and a higher number of cycles produces stronger water magnetism. The laboratory test results show that a higher cycle count translates to a higher tensile strength.
Keywords: hardness property, magnetic water, quick-setting admixture, split tensile strength, universal testing machine
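As an aside for readers, the cycles-versus-strength correlation described above can be sketched as follows. The strength values below are illustrative placeholders, not the study's data (the study reports r = +0.579):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical group means: magnetization cycles vs. split tensile strength (MPa)
cycles = [0, 50, 100, 150]
strength = [2.8, 2.9, 3.2, 3.4]  # illustrative values only

print(round(pearson_r(cycles, strength), 3))
```

With real group means, the same function would reproduce the moderate positive correlation the authors report.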
Procedia PDF Downloads 144
2015 Ab Initio Studies of Structural and Thermal Properties of Aluminum Alloys
Authors: M. Saadi, S. E. H. Abaidia, M. Y. Mokeddem
Abstract:
We present the results of a systematic and comparative study of the bulk and structural properties and phonon calculations of aluminum alloys, using several exchange–correlation functionals within density functional theory (DFT) and different plane-wave pseudopotential techniques. DFT as implemented in the Vienna Ab initio Simulation Package (VASP) is applied to calculate the bulk and structural properties of several structures. The calculations were performed with several exchange–correlation functionals and pseudopotentials available in this code: the local density approximation (LDA) and the generalized gradient approximation (GGA), with the projector augmented wave (PAW) method. The lattice-dynamics code PHON, developed by Dario Alfè, was used to calculate thermodynamic properties and phonon dispersion relations of aluminum alloys from the VASP LDA-PAW and GGA-PAW results. The calculated bulk and structural properties were compared to various experimental and computational works.
Keywords: DFT, exchange-correlation functional, LDA, GGA, pseudopotential, PAW, VASP, PHON, phonon dispersion
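As background to phonon dispersion relations of the kind PHON computes, the textbook one-dimensional monatomic chain is a minimal illustration (this is standard lattice-dynamics theory, not the paper's VASP/PHON workflow): the dispersion is ω(k) = 2·√(K/m)·|sin(ka/2)| for spring constant K, mass m, and lattice spacing a.

```python
import math

def omega(k, K=1.0, m=1.0, a=1.0):
    """Phonon frequency of a 1D monatomic chain: w(k) = 2*sqrt(K/m)*|sin(k*a/2)|."""
    return 2.0 * math.sqrt(K / m) * abs(math.sin(k * a / 2.0))

# Sample the first Brillouin zone [-pi/a, pi/a]
ks = [-math.pi + i * 2 * math.pi / 10 for i in range(11)]
band = [omega(k) for k in ks]
print(max(band))  # zone-boundary maximum equals 2*sqrt(K/m)
```

Real three-dimensional alloys have several branches per wavevector, but the frequencies are obtained from force constants in exactly this spirit.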
Procedia PDF Downloads 484
2014 Temperature Distribution Inside Hybrid Photovoltaic-Thermoelectric Generator Systems and Their Dependency on Exposition Angles
Authors: Slawomir Wnuk
Abstract:
Due to the widespread implementation of renewable energy development programs, solar energy use is increasing constantly across the world. According to REN21, in 2020 the installed capacity of on-grid and off-grid solar photovoltaic systems reached 760 GWdc, an increase of 139 GWdc over the previous year. However, the photovoltaic solar cells used for primary conversion of solar energy into electrical energy exhibit significant drawbacks. The fundamental downside is unstable and low conversion efficiency, which is negatively affected by a range of factors. To neutralise or minimise the impact of the factors causing energy losses, researchers have proposed varied ideas. One promising technological solution is the PV-MTEG multilayer hybrid system, which combines the advantages of photovoltaic cells and thermoelectric generators. A series of experiments was performed at the Glasgow Caledonian University laboratory to investigate such a system in operation. In the experiments, a Sol3A series solar simulator was employed as a stable solar irradiation source, and multichannel voltage and temperature data loggers were used for measurements. A two-layer hybrid system simulation model was built and tested for its energy conversion capability under a variety of exposure angles to the solar irradiation, with concurrent examination of the temperature distribution inside the proposed PV-MTEG structure. The same series of laboratory tests was carried out for a range of loads, with the generated temperature and voltage measured and recorded for each combination of exposure angle and load. It was found that increasing the exposure angle of the PV-MTEG structure to the irradiation source decreases the temperature gradient ΔT between the system layers and reduces overall system heating. The reduction of the temperature gradient negatively influences the voltage generation process.
The experiments showed that for exposure angles in the range from 0° to 45°, the dependence of generated voltage on exposure angle is closely linear. It was also found that the voltage generated by MTEG structures working at the determined optimal load drops by approximately 0.82% per degree of increase in exposure angle. This voltage drop also occurs at higher loads, becoming steeper as the load increases beyond the optimal value, although the difference is not significant. Despite the linear character of the voltage–angle dependence, the temperature reduction between the system's structural layers and at the tested points on its surface was not linear. In conclusion, the exposure angle of the PV-MTEG appears to be an important parameter affecting the efficiency of energy generation by the thermoelectric generators incorporated inside these hybrid structures. The research revealed great potential in the proposed hybrid system. The experiments indicated interesting behaviour of the tested structures, and the results should provide a valuable contribution to the development and design of large energy conversion systems using similar structural solutions.
Keywords: photovoltaic solar systems, hybrid systems, thermo-electrical generators, renewable energy
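The reported linear fit (roughly 0.82% voltage loss per degree over 0°–45° at the optimal load) can be sketched as a simple model; the function name and the reference voltage `v0` are illustrative, not from the paper:

```python
def predicted_voltage(v0, angle_deg, drop_per_deg=0.0082):
    """Linear model reported for 0-45 deg exposure: ~0.82% voltage loss per degree.

    v0 is the voltage at normal incidence (0 deg) under the optimal load.
    """
    if not 0 <= angle_deg <= 45:
        raise ValueError("linear fit only reported for 0-45 degrees")
    return v0 * (1.0 - drop_per_deg * angle_deg)

# At 30 deg the model predicts roughly a quarter of the voltage is lost
print(round(predicted_voltage(1.0, 30), 4))
```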
Procedia PDF Downloads 88
2013 Influence of Different Rhizome Sizes and Operational Speed on the Field Capacity and Efficiency of a Three-Row Turmeric Rhizome Planter
Authors: Muogbo Chukwudi Peter, Gbabo Agidi
Abstract:
The influence of turmeric rhizome size and machine operational speed on the field capacity and efficiency of a developed prototype tractor-drawn turmeric planter was studied, with a view to ascertaining how field capacity and field efficiency are affected by rhizome length and tractor operational speed. The turmeric rhizome planter consists of a trapezoidal hopper, a grooved cylindrical metering device, a rectangular frame, mild-steel ground wheels, a furrow opener, a chain/sprocket drive system, a three-point-linkage seed delivery tube, and a press wheel. The experiment was randomized in a factorial design with three rhizome lengths (30, 45, and 60 mm) and three operational speeds (8, 10, and 12 km/h). About 3 kg of cleaned turmeric rhizomes was introduced into the hopper for each run and planted on a 30 m2 experimental plot. During the field evaluation, the effective field capacity, field efficiency, miss index, multiple index, and percentage rhizome bruise were evaluated. The maximum percentage bruise on the rhizomes was 30.08%. The mean effective field capacity ranged between 0.63 and 0.96 ha/h at operational speeds of 8 and 12 km/h, respectively, at 45 mm rhizome length. The mean field efficiency was 65.8%. The percentage rhizome bruise decreased with increasing operational speed. The highest percentage miss index, 35%, was recorded for the 30 mm rhizome length at speeds of 10 and 8 km/h. The practical implication of the results is the determination of optimal machine operating conditions for higher field capacity and a gross reduction in mechanical injury (bruising) of planted turmeric rhizomes.
Keywords: rhizome sizes, operational speed, field capacity, field efficiency, turmeric rhizome, planter
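The two reported metrics relate through the standard definitions: theoretical field capacity (ha/h) = speed (km/h) × working width (m) / 10, and field efficiency = effective capacity / theoretical capacity. A minimal sketch follows; the 1.2 m working width (three rows at an assumed 0.4 m spacing) is a hypothetical value chosen only for illustration, as the paper does not state it here:

```python
def theoretical_capacity(speed_kmh, width_m):
    """Theoretical field capacity in ha/h: speed (km/h) * width (m) / 10."""
    return speed_kmh * width_m / 10.0

def field_efficiency(effective_ha_h, speed_kmh, width_m):
    """Field efficiency (%) = effective capacity / theoretical capacity."""
    return 100.0 * effective_ha_h / theoretical_capacity(speed_kmh, width_m)

# Assumed working width for a three-row planter: 3 rows x 0.4 m spacing
width = 1.2
print(round(field_efficiency(0.63, 8, width), 1))
```

With this assumed width, the 0.63 ha/h effective capacity at 8 km/h yields an efficiency close to the reported 65.8% mean.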
Procedia PDF Downloads 59
2012 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics
Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur
Abstract:
Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance, and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, negating potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable, and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously obtain images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, testing them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics
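The final context-matching step can be sketched as follows: OCR text from a streaming-data region is cleaned of common character confusions and checked against the value range implied by the fixed meta-data region. The confusion table and range values are illustrative assumptions, not the paper's implementation:

```python
# Common OCR character confusions for seven-segment / LCD style displays
CONFUSIONS = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1", "S": "5"})

def parse_reading(ocr_text, lo, hi):
    """Clean an OCR string, parse it as a number, and validate it against
    the expected range implied by the HMI's fixed meta-data region.
    Returns None when the reading cannot be trusted."""
    cleaned = ocr_text.translate(CONFUSIONS).strip()
    try:
        value = float(cleaned)
    except ValueError:
        return None
    return value if lo <= value <= hi else None

print(parse_reading("12O.5", 0, 200))  # OCR confused '0' with 'O'
print(parse_reading("999", 0, 200))    # out of the expected range
```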
Procedia PDF Downloads 108
2011 Characterization of Nickel Based Metallic Superconducting Materials
Authors: Y. Benmalem, A. Abbad, W. Benstaali, T. Lantri
Abstract:
Density functional theory is used to investigate the structural, electronic, and magnetic properties of the cubic anti-perovskites InNNi3 and ZnNNi3. The anti-perovskite (inverse-perovskite) structure is identical to the perovskite structure of general formula ABX3, where A is a main-group (III–V) or metallic element, B is carbon or nitrogen, and X is a transition metal; it displays a wide range of interesting physical properties, such as giant magnetoresistance. Elastic and electronic properties were determined using the generalized gradient approximation (GGA) and local spin density approximation (LSDA) approaches, as implemented in the WIEN2k package. The results show that the two compounds are strongly ductile and satisfy the Born-Huang criteria, so they are mechanically stable at normal conditions. Electronic properties show that the two compounds are metallic and non-magnetic. The studies of these compounds confirm the effectiveness of the two approximations, and the ground-state properties are in good agreement with available experimental data and theoretical results.
Keywords: anti-perovskites, elastic anisotropy, electronic band structure, first-principles calculations
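For a cubic crystal, the Born mechanical-stability criteria mentioned above reduce to three inequalities on the elastic constants, and ductility is commonly judged by Pugh's B/G ratio. A minimal check follows; the elastic constants used are illustrative numbers, not the paper's computed values:

```python
def born_stable_cubic(c11, c12, c44):
    """Born mechanical-stability criteria for a cubic crystal:
    C11 - C12 > 0, C11 + 2*C12 > 0, C44 > 0."""
    return c11 - c12 > 0 and c11 + 2 * c12 > 0 and c44 > 0

def pugh_ductile(bulk_b, shear_g):
    """Pugh's criterion: a bulk/shear modulus ratio B/G > 1.75
    is conventionally taken to indicate ductile behaviour."""
    return bulk_b / shear_g > 1.75

# Illustrative (not the paper's) elastic constants and moduli in GPa
print(born_stable_cubic(c11=250.0, c12=120.0, c44=80.0))
print(pugh_ductile(bulk_b=160.0, shear_g=70.0))
```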
Procedia PDF Downloads 283
2010 Investigating Convective Boiling Heat Transfer Characteristics of R-1234ze and R-134a Refrigerants in a Microfin and Smooth Tube
Authors: Kaggwa Abdul, Chi-Chuan Wang
Abstract:
This research examines R-1234ze, which is considered a substitute for R-134a due to its low global warming potential, in a microfin tube with an outer diameter of 9.52 mm, 70 fins, and a fin height of 0.17 mm. For comparison, a smooth tube with similar geometry was used to study the pressure drop and heat transfer coefficients of the two fluids. The microfin tube was brazed inside a stainless steel tube and heated electrically, and T-type thermocouples were used to measure the temperature distribution during the phase-change process. The experimental saturation temperatures and refrigerant mass velocities varied from 10 to 20 °C and 50 to 300 kg/m2s, respectively; the vapor quality ranged from 0.1 to 0.9 and the heat flux from 5 to 11 kW/m2. The results showed that the heat transfer performance of R-134a in both the microfin and smooth tubes was better than that of R-1234ze, especially at mass velocities above G = 50 kg/m2s. However, at low mass velocities below G = 100 kg/m2s, R-1234ze yielded better heat transfer coefficients than R-134a. The pressure gradient of R-1234ze was markedly higher than that of R-134a at all mass flow rates.
Keywords: R-1234ze and R-134a, horizontal flow boiling, pressure drop, heat transfer coefficients, micro-fin and smooth tubes
Procedia PDF Downloads 280
2009 Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition
Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade
Abstract:
The field of automatic facial expression analysis has been an active research area for the last two decades. Its broad applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making these results inapplicable in real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradients (LBP-HOG), with occlusion detection in the face image, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK, and SFEW. Experimental results showed that our approach performs considerably well compared with state-of-the-art algorithms and gives insight into occlusion detection as a key step in handling expressions in the wild.
Keywords: automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection
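The basic Local Binary Pattern operator underlying LBP-HOG thresholds the 8 neighbours of each pixel at the centre value and reads the resulting bits as a byte. A minimal sketch of that single-pixel step (the bit ordering below is one common convention; the paper's exact variant may differ):

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours at the
    centre value and pack the bits clockwise from the top-left corner."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, p in enumerate(neighbours) if p >= c)

# Toy 3x3 grayscale patch; a full descriptor histograms these codes per cell
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))
```

A full LBP image descriptor applies this at every pixel and histograms the codes over cells, which is then combined with HOG features in the paper's method.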
Procedia PDF Downloads 167
2008 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, Precise Medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms derive from heuristics, their outputs have only contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatment, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise Medicine. Current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from these classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in Precise Medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine, both direct and indirect, as well as technical databases, Natural Language Processing algorithms, and strong class-optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually taken when Machine Learning algorithms process differential genomic or molecular data to find biomarkers.
Also, even though the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. Instead, the approach deciphers the biological meaning of the input data down to metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the "common denominator" rule. The rigor of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical proofs. The major impact of this architecture is the high accuracy of the diagnosis. Delivered as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise Medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures; it will also contribute to better-designed, faster clinical trials.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 69
2007 Mapping Context, Roles, and Relations for Adjudicating Robot Ethics
Authors: Adam J. Bowen
Abstract:
Should robots have rights or legal protections? Debates concerning whether robots and AI should be afforded rights often focus on conditions of personhood and the possibility of future advanced forms of AI satisfying particular intrinsic cognitive and moral attributes of rights-holding persons. Such discussions raise compelling questions about machine consciousness, autonomy, and value alignment with human interests. Although these are important theoretical concerns, especially from a future design perspective, they provide limited guidance for addressing the moral and legal standing of current and near-term AI that operates well below the cognitive and moral agency of human persons. Robots and AI are already being pressed into service in a wide range of roles, especially in healthcare and biomedical contexts. The design and large-scale implementation of robots in core societal institutions like healthcare systems continues to develop rapidly. For example, we bring them into our homes, hospitals, and other care facilities to assist in care for the sick, disabled, elderly, children, or otherwise vulnerable persons. We enlist surgical robotic systems for precision tasks, albeit still as human-in-the-loop technology controlled by surgeons. We also entrust them with social roles involving companionship and even assisting in intimate caregiving tasks (e.g., bathing, feeding, turning, medicine administration, monitoring, transporting). There have been advances enabling severely disabled persons to use robots to feed themselves or to pilot robot avatars to work in service industries. As the applications of near-term AI increase and the roles of robots in restructuring our biomedical practices expand, we face pressing questions about the normative implications of human-robot interactions and collaborations in our collective worldmaking, as well as about the moral and legal status of robots.
This paper argues that robots operating in public and private spaces should be afforded some protections as either moral patients or legal agents, to establish prohibitions on robot abuse, misuse, and mistreatment. We already implement robots and embed them in our practices and institutions, which generates a host of human-to-machine and machine-to-machine relationships. As we interact with machines, whether in service contexts, medical assistance, or home health companionship, these robots are first encountered in relationship to us and our respective roles in the encounter (e.g., surgeon, physical or occupational therapist, recipient of care, patient's family, healthcare professional, stakeholder). This proposal outlines a framework for establishing limiting factors and determining the extent of moral or legal protections for robots. In doing so, it advocates a relational approach that prioritizes mapping the complex, contextually sensitive roles played, and the relations in which humans and robots stand, to guide policy determinations by relevant institutions and authorities. The relational approach must also be technically informed by the intended uses of the biomedical technologies in question, Design History Files, extensive risk assessments and hazard analyses, and use-case social impact assessments.
Keywords: biomedical robots, robot ethics, robot laws, human-robot interaction
Procedia PDF Downloads 118
2006 Effects of Two Cross-Focused Intense Laser Beams on THz Generation in Rippled Plasma
Authors: Sandeep Kumar, Naveen Gupta
Abstract:
Terahertz (THz) generation has been investigated by beating two cosh-Gaussian laser beams of the same amplitude but different wavenumbers and frequencies in a rippled collisionless plasma. A ponderomotive force is induced by the intensity gradient of the laser beams over the cross-section of the wavefront. Electrons evacuate towards the low-intensity regime, which modifies the dielectric function of the medium and results in cross focusing of the cosh-Gaussian beams. The evolution of the spot size of the laser beams has been studied by solving the nonlinear Schrodinger wave equation (NLSE) with a variational technique. The laser beams impart oscillations to the electrons, which are enhanced by the ripple density. The nonlinear oscillatory motion of the electrons gives rise to a nonlinear current density that drives THz radiation. It has been observed that the periodicity of the density ripple helps to enhance the THz radiation.
Keywords: rippled collisionless plasma, cosh-Gaussian laser beam, ponderomotive force, variational technique, nonlinear current density
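The beating of two laser lines radiates at their difference frequency, which is what places the output in the THz band. A quick check of that arithmetic follows; the two pump wavelengths are hypothetical values chosen to land at a few THz, not the paper's parameters:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def beat_frequency_thz(lambda1_nm, lambda2_nm):
    """Difference frequency of two beating laser lines, in THz."""
    f1 = C / (lambda1_nm * 1e-9)
    f2 = C / (lambda2_nm * 1e-9)
    return abs(f1 - f2) / 1e12

# Two near-infrared lines ~11 nm apart beat at a few THz
print(round(beat_frequency_thz(1064.0, 1075.0), 2))
```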
Procedia PDF Downloads 199
2005 Mapping Consumer Role: A Systematic Review of Circular Economy Strategies
Authors: Kiana Keshavarz, Carmen Jaca, María J. Álvarez
Abstract:
The shift to a circular economy necessitates a substantial change in consumer behavior; the consumer is a complex and unpredictable actor that proves challenging to guide toward sustainability. This systematic literature review addresses the pivotal role consumers play in propelling a circular economy, emphasizing the critical gap between positive attitudes and responsible actions. We queried two prominent databases, Scopus and Web of Science, during July and August 2023. A comprehensive screening process considered 467 articles, ultimately including 115 in the study for detailed analysis. The study examines three key phases of consumer interaction with products (the pre-purchase decision, careful usage, and post-use management) and identifies consumer-centric strategies that boost sustainability in each phase. Contrary to the prevailing emphasis on post-use management strategies, the synthesis highlights the profound impact of strategies enacted during the pre-purchase decision phase. In investigating the persistent attitude-behavior gap, factors influencing this gap and impeding consumers from engaging in sustainable actions are identified on the basis of behavioral theories. Strategies aimed at diminishing barriers and boosting motivators, as outlined in the literature, are then presented. Recognizing the transformative potential of consumer behavior, the study underscores the pivotal roles of policymakers, businesses, and governments in fostering a more sustainable future. Finally, there is a call for further research to deepen the analysis, for example by narrowing the scope to a specific industry or by applying a specific behavioral theory.
Keywords: circular economy, consumer behavior, sustainability, attitude-behavior gap, systematic literature review
Procedia PDF Downloads 78
2004 A Comprehensive Study and Evaluation on Image Fashion Feature Extraction
Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen
Abstract:
Clothing fashion represents a human's aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in social status, the humanities, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related question: what image feature best describes clothing fashion? To address this issue, we designed and evaluated various image features, ranging from traditional low-level hand-crafted features, to mid-level style-awareness features, to various currently popular deep neural network-based features that have shown state-of-the-art performance in vision tasks. In total, we tested the following nine feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple-layer combination (CNNs&MLC), and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning based feature representations far outperform traditional hand-crafted ones. Additionally, among all deep learning based methods, CNNs with explicit feature clustering perform best, showing that feature clustering is essential for a discriminative fashion feature representation.
Keywords: convolutional neural network, feature representation, image processing, machine modelling
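The retrieval evaluation described above boils down to ranking gallery images by the similarity of their feature vectors to a query's. A minimal sketch with cosine similarity follows; the 3-dimensional "features" and item names are toy placeholders, where real CNN descriptors have hundreds of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_feat, gallery):
    """Rank (name, feature) gallery items by similarity to the query."""
    return sorted(gallery, key=lambda item: cosine(query_feat, item[1]), reverse=True)

gallery = [("dress_a", [0.9, 0.1, 0.0]),
           ("dress_b", [0.1, 0.9, 0.1]),
           ("coat_c",  [0.8, 0.2, 0.1])]
print([name for name, _ in retrieve([1.0, 0.0, 0.0], gallery)])
```

Swapping in features from any of the nine representations, and scoring how often same-fashion items rank first, is essentially the paper's quantitative comparison.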
Procedia PDF Downloads 138
2003 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration
Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen
Abstract:
In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing's test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between man and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. We suggest that caseloads in public administration could be divided between a manual and an automated decision track. The automated decision track would be an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the caseload would be dealt with by both a human caseworker and the automated recommender system; in those cases, an experienced human caseworker would have the role of an evaluator, choosing between the two decisions. This model ensures that the algorithmic recommender system does not compromise the quality of legal decision-making in the institution. It also enhances the legitimacy of algorithmic decision support, because its use is justified when experienced caseworkers prefer the algorithmic recommendations over the human decisions. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issues that legal decision-making is subject to legislative and judicial change and that legal interpretation is context sensitive. Both of these issues require continuous supervision of, and adjustment to, algorithmic recommender systems used for legal decision-making purposes.
Keywords: administrative law, algorithmic decision-making, decision support, public law
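The caseload split described above can be sketched as a simple routing step: a fraction of cases goes to a dual-review overlap (decided by both tracks and compared by an evaluator), and the remainder is divided between the manual and automated tracks. The 20% overlap fraction and the even split are illustrative assumptions, as the paper only says "part of the case load":

```python
import random

def route_cases(case_ids, overlap_fraction=0.2, seed=42):
    """Split a caseload into a manual track, an automated track, and a
    dual-review overlap where a human evaluator compares both decisions."""
    rng = random.Random(seed)  # fixed seed keeps the example reproducible
    shuffled = case_ids[:]
    rng.shuffle(shuffled)
    n_overlap = int(len(shuffled) * overlap_fraction)
    overlap, rest = shuffled[:n_overlap], shuffled[n_overlap:]
    half = len(rest) // 2
    return {"overlap": overlap, "manual": rest[:half], "automated": rest[half:]}

tracks = route_cases(list(range(100)))
print(len(tracks["overlap"]), len(tracks["manual"]), len(tracks["automated"]))
```

Random assignment of the overlap set is one way to keep the evaluator's comparison unbiased with respect to case type.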
Procedia PDF Downloads 215
2002 Characterizing Nanoparticles Generated from Different Working Types and the Stack Flue during the 3D Printing Process
Authors: Kai-Jui Kou, Tzu-Ling Shen, Ying-Fang Wang
Abstract:
The objectives of the present study are to characterize the nanoparticles generated by different work activities in a 3D printing room and in the stack flue during the 3D printing process. The studied laboratory (10.5 m × 7.2 m × 3.2 m), with a ventilation rate of 500 m³/h, houses a metal 3D printing machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used for static sampling of nanoparticle number concentrations and particle size distributions. The SMPS recorded particle number concentrations every 3 minutes and covered diameters from 11 to 372 nm, with the aerosol and sheath flow rates set at 0.6 and 6 L/min, respectively. Measurements were performed in the laboratory for the background, the printing process, the clearing operation, and the screening operation. Nanoparticles were also measured in the 3D printing machine's stack flue to understand its emission characteristics. Results show that the nanoparticles emitted during the different operations had the same uni-modal distribution, with a number median diameter (NMD) of approximately 28.3 to 29.6 nm. The number concentrations were 2.55×10³ count/cm³ for the laboratory background, 2.19×10³ count/cm³ during printing, 2.29×10³ count/cm³ during clearing, 3.05×10³ count/cm³ during screening, 2.69×10³ count/cm³ for the laboratory background after printing, and 6.75×10³ count/cm³ outside the laboratory. We found no nanoparticle emission into the room during the printing process. However, the number concentration of stack-flue nanoparticles during printing was 1.13×10⁶ count/cm³, versus 1.63×10⁴ count/cm³ when not printing, with NMDs of 458 nm and 29.4 nm, respectively.
Particles of the measured size can, in theory, easily penetrate the filter during the printing process, even though the 3D printer has a high-efficiency filtration device. It is therefore recommended that the stack flue of the 3D printer be equipped with an appropriate dust collection device to prevent operators from being exposed to these hazardous particles.
Keywords: nanoparticle, particle emission, 3D printing, number concentration
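The number median diameter (NMD) reported above is the diameter below which half of the cumulative particle count lies. A minimal calculation from binned SMPS-style data follows; the size bins and counts are an illustrative uni-modal distribution, not the study's measurements:

```python
def number_median_diameter(diameters_nm, counts):
    """Diameter below which half of the cumulative particle count lies,
    using linear interpolation between adjacent size bins."""
    total = sum(counts)
    half = total / 2.0
    cum = 0.0
    for i, (d, c) in enumerate(zip(diameters_nm, counts)):
        if cum + c >= half:
            prev_d = diameters_nm[i - 1] if i else d
            frac = (half - cum) / c
            return prev_d + frac * (d - prev_d)
        cum += c
    return diameters_nm[-1]

# Illustrative uni-modal distribution peaking near 30 nm
bins = [11, 20, 30, 50, 100, 200, 372]
counts = [100, 400, 600, 400, 200, 80, 20]
print(round(number_median_diameter(bins, counts), 1))
```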
Procedia PDF Downloads 181
2001 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning
Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin
Abstract:
This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. 
The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing
Procedia PDF Downloads 20
2000 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions
Authors: John Q. Todd
Abstract:
Given that modern equipment can provide comprehensive health, status, and error-condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation will show what these data payloads might look like and how they can be filtered, visualized, calculated into metrics, used for machine learning, and turned into alerts for further action.Keywords: condition based maintenance, equipment data, metrics, alerts
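As a hedged illustration of the pipeline sketched in this abstract — filter the payloads, reduce them to a metric, and raise alerts — a minimal Python sketch follows. The field names (`ts`, `temp_c`, `error_code`), the rolling-mean metric, and the threshold are illustrative assumptions, not details from the presentation.

```python
# Illustrative sketch only: field names and thresholds are assumptions.

def rolling_mean(values, window):
    """Rolling mean over the last `window` readings at each position."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def alerts_from_telemetry(payloads, limit, window=3):
    """Filter error-free payloads, smooth the (assumed) temperature
    channel, and flag readings whose smoothed value exceeds `limit`.
    Returns (timestamp, metric) pairs for further action."""
    healthy = [p for p in payloads if p.get("error_code", 0) == 0]
    smoothed = rolling_mean([p["temp_c"] for p in healthy], window)
    return [(p["ts"], m) for p, m in zip(healthy, smoothed) if m > limit]
```

Running this over a handful of sample payloads flags only the readings whose smoothed metric crosses the limit, while payloads carrying an error code are excluded before the metric is computed.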
Procedia PDF Downloads 185
1999 Neural Network Approach for Solving Integral Equations
Authors: Bhavini Pandya
Abstract:
This paper considers Hη: T² → T², the Perturbed Cerbelli-Giona map. This is a family of two-dimensional nonlinear area-preserving transformations on the torus T² = [0,1]×[0,1] = ℝ²/ℤ². A single parameter η varies between 0 and 1, taking the transformation from a hyperbolic toral automorphism to the “Cerbelli-Giona” map, a system known to exhibit multifractal properties. Here we study the multifractal properties of this family of maps. We apply a box-counting method, defining a grid of boxes Bi(δ), where i is the index and δ is the size of the boxes, to quantify the distribution of the stable and unstable manifolds of the map. When the parameter is in the ranges 0.51 < η < 0.58 and 0.68 < η < 1, the map is ergodic; i.e., the unstable and stable manifolds eventually cover the whole torus, although not in a uniform distribution. Accurate numerical results require correspondingly accurate construction of the stable and unstable manifolds. Here we exploit the piecewise linearity of the map to achieve this, computing the endpoints of the line segments that define the global stable and unstable manifolds. This allows the generalized fractal dimension Dq and the spectrum of dimensions f(α) to be computed accurately. Finally, the intersection of the unstable and stable manifolds of the map is investigated and compared with the distribution of periodic points of the system.Keywords: feed forward, gradient descent, neural network, integral equation
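The box-counting step can be sketched generically as follows: cover the point set with boxes of size δ, count the occupied boxes N(δ), and estimate the dimension D0 as the slope of log N(δ) against log(1/δ). This is an illustrative implementation of the standard method, not the authors' code, and the grid convention is an assumption.

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the box-counting dimension D0 of a point set in the
    unit square: count occupied boxes B_i(delta) at each size delta,
    then fit log N(delta) against log(1/delta) by least squares."""
    logs = []
    for delta in sizes:
        n = len({(int(x / delta), int(y / delta)) for x, y in points})
        logs.append((math.log(1.0 / delta), math.log(n)))
    mx = sum(x for x, _ in logs) / len(logs)
    my = sum(y for _, y in logs) / len(logs)
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den  # least-squares slope = dimension estimate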
Procedia PDF Downloads 187
1998 Selecting the Contractor Using Multi-Criteria Decision Making in the National Gas Company of Lorestan Province of Iran
Authors: Fatemeh Jaferi, Moslem Parsa, Heshmatolah Shams Khorramabadi
Abstract:
In this modern, fluctuating world, organizations need to outsource some of their activities (projects) to providers in order to respond quickly to changing requirements. In fact, a number of companies and institutes have contractors execute their projects and apply specific criteria in contractor selection. Therefore, a set of scientific tools is needed to select the best contractors to execute a project according to appropriate criteria. Multi-criteria decision making (MCDM) has been employed in the present study as a powerful tool for ranking and selecting the appropriate contractor. In this study, the devolution of a second-source (civil) project to contractors in the National Gas Company of Lorestan Province (Iran) was considered, and five civil companies were evaluated. Evaluation criteria include executive experience, qualification of technical staff, track record and company rating, technical interview, affordability, and equipment and machinery. Criteria weights were obtained from expert opinion using AHP, and the contractors were ranked using TOPSIS and AHP. The order in which contractors are ranked by the MCDM methods differs when the formula used in the study is changed. In the next phase, a sensitivity analysis was performed, using AHP, on the number of criteria and their weights. Adding each criterion changed the contractors' ranking; similarly, changing the weights resulted in a change in ranking. Adopting the stated strategy means not only that an appropriate scientific method is available to select the most qualified contractors to execute gas projects, but also that careful attention is paid to choosing the criteria needed for selecting contractors. 
Consequently, such projects are executed by the most qualified contractors, resulting in optimal use of limited resources, faster project implementation, increased quality, and, finally, greater organizational efficiency.Keywords: multi-criteria decision making, project, management, contractor selection, gas company
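For readers unfamiliar with the ranking step, a minimal TOPSIS sketch follows: vector-normalize the decision matrix, weight it, locate the ideal and anti-ideal alternatives, and score each contractor by relative closeness to the ideal. The contractor scores, weights, and benefit/cost flags below are hypothetical; the abstract's AHP-derived weights would simply be passed in as `weights`.

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Rank alternatives (rows) with TOPSIS. `benefit[j]` is True for
    benefit criteria (larger is better), False for cost criteria.
    Returns alternative indices, best first."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    weighted = [[w * v / n for v, w, n in zip(row, weights, norms)]
                for row in matrix]
    wcols = list(zip(*weighted))
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    scores = []
    for row in weighted:
        d_pos = math.dist(row, ideal)  # distance to ideal point
        d_neg = math.dist(row, worst)  # distance to anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))
    return sorted(range(len(matrix)), key=lambda i: -scores[i])
```

With two criteria — experience (benefit) and cost (to minimize) — the contractor with the strongest weighted profile relative to the ideal point is ranked first; changing the weights can reorder the ranking, which mirrors the sensitivity behaviour reported in the abstract.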
Procedia PDF Downloads 402
1997 Analysis and Modeling of Stresses and Creep Resulting from Soil Mechanics in the Southern Plains of Kerman Province
Authors: Kourosh Nazarian
Abstract:
Many engineering materials, such as metals, exhibit at least a certain range of linear behavior: if the stresses are doubled, the deformations are also doubled. In fact, these materials have linear elastic properties. Soils do not follow this law; for example, when compressed, soils gradually become stiffer. At the ground surface, sand can easily be deformed with a finger, but under high compressive stresses it gains considerable hardness and strength, mainly due to the increase in the forces between the individual particles. Creep also deforms soils under a constant load over time. Clay and peat soils exhibit creep behavior; as a result of this phenomenon, structures constructed on such soils continue to settle over time. In this paper, the researchers analyzed and modeled the stresses and creep in the southern plains of Kerman province in Iran through library-documentary, quantitative, and software techniques, and a field survey. The results of the modeling showed that these plains experienced severe stresses, with subsidence of about 26 cm over the last 15 years; creep evidence was also discovered in an area with a gradient of 3-6 degrees.Keywords: stress, creep, Faryab, surface runoff
Procedia PDF Downloads 178
1996 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region in which they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The outcome of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected across the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. 
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the smaller number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work also aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
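As a loose illustration of how a semi-empirical NOx term can combine a physical driver (burned-zone temperature) with an empirical correlation (O2 dependence and calibration constants), consider the Arrhenius-style sketch below, in the spirit of the thermal NOx mechanism. The functional form and the constants `k` and `e_a` are assumptions for illustration, not the model of the paper.

```python
import math

def nox_index(burned_temp_k, o2_frac, k=1.0, e_a=38000.0):
    """Arrhenius-style NOx formation index: rises steeply with
    burned-zone temperature (K) and with available O2 fraction.
    `k` and `e_a` stand in for engine-specific calibration constants."""
    return k * math.sqrt(max(o2_frac, 0.0)) * math.exp(-e_a / burned_temp_k)
```

The index is monotonic in both inputs, which captures the qualitative dependence stated in the abstract: hotter burned gas and more in-cylinder oxygen both drive NOx formation up.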
Procedia PDF Downloads 112
1995 Understanding Responses of the Bee Community to an Urbanizing Landscape in Bengaluru, South India
Authors: Chethana V. Casiker, Jagadishakumara B., Sunil G. M., Chaithra K., M. Soubadra Devy
Abstract:
A majority of the world’s food crops depend on insects for pollination, among which bees are the most dominant taxon. Bees pollinate vegetables, fruits, and oilseeds, which are rich in essential micronutrients. Besides being a prerequisite for a nutritionally secure diet, agrarian economies such as India depend heavily on pollination for good yield and product quality. As cities all over the world expand rapidly, large tracts of green space are being built up. This, along with high usage of agricultural chemicals, has reduced floral diversity and shrunk bee habitats. Indeed, pollinator decline is being reported from various parts of the world. Further, the FAO has reported a large increase in the area of land under cultivation of pollinator-dependent crops. In light of the increasing demand for pollination and disappearing natural habitats, it is critical to understand whether and how urban spaces can support pollinators. To this end, this study investigates the influence of landscape and local habitat quality on bee community dynamics. To capture the dynamics of expanding cityscapes, the study employs a space-for-time substitution, wherein a transect along the gradient of urbanization substitutes for a timeframe of increasing urbanization. This will help in understanding how pollinators would respond to changes induced by increasing intensity of urbanization in the future. Bengaluru, one of the fastest growing cities of southern India, is an excellent site to study impacts associated with urbanization. With sites moving away from Bengaluru’s centre and towards its peripheries, this study captures the changes in bee species diversity and richness along a gradient of urbanization. Bees were sampled under different land use types as well as in different types of vegetation, including plantations, croplands, fallow land, parks, lake embankments, and private gardens. 
The relationship between bee community metrics and key drivers such as the percentage of built-up area, land use practices, and floral resources was examined. Additionally, data collected through questionnaire interviews were used to understand people’s perceptions of, and level of dependence on, pollinators. Our results showed that urban areas are capable of supporting bees. In fact, a greater diversity of bees was recorded in urban sites compared to adjoining rural areas. This suggests that bees are able to seek out patchy resources and survive in small fragments of habitat. Bee abundance and species richness correlated positively with floral abundance and richness, indicating the role of vegetation in providing the forage and nesting sites crucial to their survival. Bee numbers decreased with increasing built-up area, demonstrating that impervious surfaces can act as deterrents. Findings from this study challenge the popular notion of cities as biodiversity-bare spaces. There is indeed scope for conserving bees in urban landscapes, provided there is city-scale planning and local initiative. Bee conservation can go hand in hand with efforts such as urban gardening and terrace farming that could help cities urbanize sustainably.Keywords: bee, landscape ecology, urbanization, urban pollination
Procedia PDF Downloads 166
1994 Design and Optimization of a Small Hydraulic Propeller Turbine
Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink
Abstract:
A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low head. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps onward, to the correct conformation of the meridional channel and the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as a starting point for the hydrodynamic optimization procedure, carried out using CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process uses a commercial solver of the Reynolds-averaged Navier-Stokes (RANS) equations that exploits the axial-symmetric geometry of the machine. The geometries generated within the database are calculated to determine the corresponding overall performance. To speed up the optimization calculation, an artificial neural network (ANN) based on an objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, specific to applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically reach the targeted net head and the maximum total-to-total internal efficiency.Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design
Procedia PDF Downloads 148
1993 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability limitations. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. 
It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability, does not require human observation, and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
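The leave-one-out cross-validation protocol used in the study can be sketched generically as follows; for brevity the sketch pairs it with a 1-nearest-neighbour rule rather than the random forest of the study, and the toy features are invented.

```python
def one_nn_predict(train_x, train_y, x):
    """Predict the label of x with a 1-nearest-neighbour rule
    (squared Euclidean distance)."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_x]
    return train_y[dists.index(min(dists))]

def loo_accuracy(features, labels):
    """Leave-one-out cross-validation: hold out each sample in turn,
    train on the rest, and report the fraction predicted correctly."""
    hits = 0
    for i in range(len(features)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        hits += one_nn_predict(train_x, train_y, features[i]) == labels[i]
    return hits / len(features)
```

Leave-one-out is a natural fit for small-N studies like this one (four students, 59 sessions): every sample is used for testing exactly once, and the classifier — here a stand-in for random forest — never sees the held-out sample during training.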
Procedia PDF Downloads 92
1992 Adaptive Kalman Filter for Fault Diagnosis of Linear Parameter-Varying Systems
Authors: Rajamani Doraiswami, Lahouari Cheded
Abstract:
Fault diagnosis of Linear Parameter-Varying (LPV) systems using an adaptive Kalman filter is proposed. The LPV model comprises scheduling parameters and emulator parameters. The scheduling parameters are chosen such that they are capable of tracking variations in the system model resulting from changes in the operating regimes. The emulator parameters, on the other hand, simulate variations in the subsystems during the identification phase and have a negligible effect during the operational phase. The nominal model and the influence vectors, which are the gradients of the feature vector with respect to the emulator parameters, are identified off-line from a number of emulator-parameter-perturbed experiments. A Kalman filter is designed using the identified nominal model. As the system varies, the Kalman filter model is adapted using the scheduling variables. The residual is employed for fault diagnosis. The proposed scheme is successfully evaluated on a simulated system as well as on a physical process control system.Keywords: identification, linear parameter-varying systems, least-squares estimation, fault diagnosis, Kalman filter, emulators
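The residual-generation idea at the heart of the scheme can be shown with a minimal scalar Kalman filter — not the adaptive LPV filter of the paper; the model and noise parameters below are illustrative. The innovation (residual) stays small while the plant matches the filter model and jumps when the plant deviates, which is the fault signature used for diagnosis.

```python
def kalman_residuals(measurements, a=1.0, c=1.0, q=0.01, r=0.1):
    """Scalar Kalman filter over a measurement sequence. Returns the
    innovation (residual) sequence y_k - c * x_pred: small for a
    healthy system, large when the plant deviates from the model."""
    x, p = 0.0, 1.0  # initial state estimate and covariance
    residuals = []
    for y in measurements:
        # predict
        x_pred = a * x
        p_pred = a * p * a + q
        # innovation (the fault-diagnosis residual)
        res = y - c * x_pred
        residuals.append(res)
        # update
        gain = p_pred * c / (c * p_pred * c + r)
        x = x_pred + gain * res
        p = (1 - gain * c) * p_pred
    return residuals
```

Feeding a constant measurement drives the residual toward zero; a sudden jump in the measurement (standing in for a fault) produces a large residual that a diagnosis layer can threshold.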
Procedia PDF Downloads 498
1991 HPLC-UV Screening of Legal (Caffeine and Yohimbine) and Illegal (Ephedrine and Sibutramine) Substances from Weight Loss Dietary Supplements for Athletes
Authors: Amelia Tero-Vescan, Camil-Eugen Vari, Laura Ciulea, Cristina Filip, Silvia Imre
Abstract:
An HPLC-UV method for the identification of ephedrine (EPH), sibutramine (SB), yohimbine (Y), and caffeine (CF) was developed. Separation was performed on a Kromasil 100-RP8, 150 mm × 4.6 mm, 5 µm column equipped with a Kromasil RP 8 precolumn. The mobile phase was a gradient of 80-35% sodium dihydrogen phosphate (pH = 5, adjusted with NH4OH) and acetonitrile over a 15-minute analysis time. Based on the responses of 113 athletes about dietary supplements (DS) consumed for "fat burning" and weight loss, which have legal status in Romania, 28 supplements were selected and investigated for their content of CF and Y (legal substances) and of SB and EPH (substances prohibited in DS). The method allows quantitative determination of the four substances in a short analysis time and at minimum cost. The presence of SB and EPH in the analyzed DS was not detected, while the content of CF and Y, considering the dosage recommended by the manufacturer, does not affect the health of the consumers. DS labeling (plant extracts with CF and Y content) allows manufacturers to avoid declaring correct and exact amounts per pharmaceutical form (of pure CF, or its equivalent, and of Y, respectively).Keywords: dietary supplements, sibutramine, ephedrine, yohimbine, caffeine, HPLC
Procedia PDF Downloads 440
1990 Simultaneous Determination of p-Phenylenediamine, N-Acetyl-p-phenylenediamine and N,N-Diacetyl-p-phenylenediamine in Human Urine by LC-MS/MS
Authors: Khaled M. Mohamed
Abstract:
Background: P-Phenylenediamine (PPD) is used in the manufacture of hair dyes and skin decoration. In some developing countries, suicidal, homicidal, and accidental cases involving PPD have been recorded. In this work, a sensitive LC-MS/MS method for the determination of PPD and its metabolites N-acetyl-p-phenylenediamine (MAPPD) and N,N-diacetyl-p-phenylenediamine (DAPPD) in human urine has been developed and validated. Methods: PPD, MAPPD and DAPPD were extracted from urine with methylene chloride at alkaline pH. Acetanilide was used as internal standard (IS). The analytes and IS were separated on an Eclipse XDB-C18 column (150 × 4.6 mm, 5 µm) using a mobile phase of acetonitrile-1% formic acid in gradient elution. Detection was performed by LC-MS/MS using electrospray positive ionization under multiple reaction-monitoring mode. The transition ions m/z 109 → 92, m/z 151 → 92, m/z 193 → 92, and m/z 136 → 77 were selected for the quantification of PPD, MAPPD, DAPPD, and IS, respectively. Results: Calibration curves were linear in the range 10–2000 ng/mL for all analytes. The mean recoveries for PPD, MAPPD and DAPPD were 57.62, 74.19 and 50.99%, respectively. Intra-assay and inter-assay imprecision values were within 1.58–9.52% and 5.43–9.45%, respectively, for PPD, MAPPD and DAPPD. Inter-assay accuracies were within -7.43% and 7.36% for all compounds. PPD, MAPPD and DAPPD were stable in urine at –20 °C for 24 hours. Conclusions: The method was successfully applied to the analysis of PPD, MAPPD and DAPPD in urine samples collected from suicide cases.Keywords: p-Phenylenediamine, metabolites, urine, LC-MS/MS, validation
Procedia PDF Downloads 354
1989 Boosting the Agrophysiological Performance of Chickpea Crop (Cicer Arietinum L.) Under Low-P Soil Conditions with the Co-application of Bacterial Consortium (Phosphate Solubilizing Bacteria and Rhizobium) and P-Fertilizers (RP and TSP Forms)
Authors: Rym Saidi, Pape Alioune Ndiaye, Ibnyasser Ammar, Zineb Rchiad, Khalid Daoui, Issam Kadmiri Meftahi, Adnane Bargaz
Abstract:
Chickpea (Cicer arietinum L.) is an important leguminous crop grown worldwide and plays a significant role in human dietary consumption. Alongside nitrogen (N), low phosphorus (P) availability within agricultural soils is one of the major factors limiting chickpea growth and productivity. The combined application of beneficial bacterial inoculants and rock P-fertilizer could boost chickpea performance and productivity, increasing P-utilization efficiency and minimizing nutrient losses under P-deficiency conditions. A greenhouse experiment was conducted to evaluate the response of chickpea to two P-fertilizer forms (RP and TSP) under inoculation with an N2-fixing and P-solubilizing consortium, with the aim of improving biological N fixation and P nutrition under P-deficient conditions. Under inoculation, chickpea chlorophyll content and chlorophyll fluorescence (RP+I and TSP+I) were increased compared with the uninoculated treatments. The RP+I treatment increased shoot and root dry weights by 48.80% and 72.68%, respectively, compared with the uninoculated RP-fertilized control. Indeed, the bacterial consortium contributed to enhanced root morphological traits (e.g., root volume, surface area, and diameter) in all inoculated treatments versus the uninoculated treatments. Furthermore, soil available P and root inorganic P were significantly improved in RP+I, by 162.84% and 73.24%, respectively, compared with the uninoculated RP control. Our research outcomes suggest that the co-inoculation of chickpea with N2-fixing and P-solubilizing bacteria improves biomass yield and nutrient uptake, eventually enhancing chickpea agrophysiological performance, especially under restricted P-availability conditions.Keywords: chickpea, consortium, beneficial bacterial inoculants, phosphorus deficiency, rock p-fertilizer, nutrient uptake
Procedia PDF Downloads 64
1988 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen
Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev
Abstract:
The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the abilities of Rapid-E to detect smaller bioaerosols, such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) the fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images, which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. Firstly, pure species of microbes, e.g., Bacillus subtilis (a species of bacteria) and Penicillium chrysogenum (a species of fungal spores), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested microbes at different concentrations. We used several steps of data analysis to classify and identify microbes. All single particles were analysed via the parameters of light scattering and fluorescence in the following steps: (1) they were treated with a smart filter block to remove non-microbes; (2) a classification algorithm verified that the filtered particles were microbes, based on the calibration data; (3) a probability-threshold step (with the threshold defined by the user) provides the probability of being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ identified microbes simultaneously based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). Using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. The further classification suggests precisions of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively. 
The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real time. The bacteria and fungi were aerosolized again in the chamber at different concentrations, and Rapid-E+ was able to classify the different types of microbes and then quantify them in real time. Rapid-E+ can also identify pollen down to the species level with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument that classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, the user can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms
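The user-defined probability-threshold step of the analysis pipeline can be sketched as follows; the class names and probabilities are invented for illustration, and the actual classifier behind the probabilities is the instrument's machine learning platform.

```python
def classify_particles(probas, threshold=0.9):
    """Apply a user-defined probability threshold: keep particles whose
    top-class probability reaches the threshold; label the rest unknown.
    Each element of `probas` maps class name -> probability."""
    labels = []
    for p in probas:
        best = max(p, key=p.get)  # most probable class for this particle
        labels.append(best if p[best] >= threshold else "unknown")
    return labels
```

Raising the threshold trades recall for precision: fewer particles are assigned a class, but the assignments that survive are more reliable, which is the role this step plays between classification and quantification.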
Procedia PDF Downloads 87
1987 Soil Sensibility Characterization of Granular Soils Due to Suffusion
Authors: Abdul Rochim, Didier Marot, Luc Sibille
Abstract:
This paper studies the characterization of soil sensibility to the suffusion process by carrying out a series of one-dimensional downward seepage-flow tests with an erodimeter. Tests were performed under controlled hydraulic gradient on sandy-gravel soils. We propose an analysis based on the energy induced by the seepage flow to characterize the hydraulic loading, and on the cumulative eroded dry mass to characterize the soil response. With this approach, the effects of hydraulic loading history and initial fines content on soil sensibility are presented. It is found that, for given soils, erosion coefficients differ if tests are performed under different hydraulic loading histories. For given initial fines contents, the sensibility may fall within the same classification. Soils with lower fines content tend to require larger flow energy for the onset of erosion. These results demonstrate that this approach is effective for characterizing suffusion sensibility in granular soils.Keywords: erodimeter, sandy gravel, suffusion, water seepage energy
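The energy-based characterization of the hydraulic loading can be illustrated as the time-integral of flow power, i.e. flow rate times pressure drop accumulated over the test; the discrete form below and its units are an illustrative reading of the approach, not the authors' exact formulation.

```python
def seepage_energy(flow_rates, pressure_drops, dt):
    """Cumulative energy expended by the seepage flow, approximated as
    E = sum(Q_k * dP_k * dt): flow rate (m^3/s) times pressure drop (Pa)
    summed over time steps of length dt (s), giving joules."""
    return sum(q * dp * dt for q, dp in zip(flow_rates, pressure_drops))
```

Plotting the cumulative eroded dry mass against this cumulative energy is what lets tests under different hydraulic loading histories be compared on a common axis.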
Procedia PDF Downloads 446