Search results for: quantification accuracy
842 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing
Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti
Abstract:
Next-generation sequencing, especially whole genome shotgun sequencing, is becoming a common approach to gain insight into microbiomes in a culture-independent way, even in clinical practice. It not only gives us information about the species composition of an environmental sample but opens the possibility of detecting antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting the microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomics samples, with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast, short-read aligner programs (i.e., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle; each read is assigned to a taxon covering the most significantly hit taxa. This approach helps in balancing between sensitivity and running time. The program was tested both on experimental and synthetic data. The results indicate that our method performs as well as the state-of-the-art BLAST-based ones; furthermore, in some cases, it even proves to be better, while running two orders of magnitude faster. It is sensitive and capable of identifying taxa present only in small abundance. Moreover, it needs two orders of magnitude fewer reads to complete the identification than MetaPhlAn2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104). The majority of the reads (96.50%) were classified as Bacillus anthracis; a small portion, 1.2%, was classified as other species from the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.
Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis
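Since the abstract describes LCA-based taxonomic binning only in words, a minimal sketch may help; the parent map, taxon names, and function names below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of lowest-common-ancestor (LCA) taxonomic binning.
# Assumes a taxonomy given as a child -> parent dict; names are illustrative.

def ancestors(taxon, parent):
    """Return the path from a taxon up to the root, inclusive."""
    path = [taxon]
    while taxon in parent:
        taxon = parent[taxon]
        path.append(taxon)
    return path

def lca_assign(hit_taxa, parent):
    """Assign a read to the lowest taxon shared by all its significant hits."""
    paths = [ancestors(t, parent) for t in hit_taxa]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # The first common node on any root-ward path is the lowest common ancestor.
    return next(t for t in paths[0] if t in common)

# Toy taxonomy: species -> genus -> family -> root
parent = {
    "B. anthracis": "Bacillus", "B. cereus": "Bacillus",
    "Bacillus": "Bacillaceae", "Bacillaceae": "Bacteria",
}
print(lca_assign(["B. anthracis"], parent))               # -> B. anthracis
print(lca_assign(["B. anthracis", "B. cereus"], parent))  # -> Bacillus
```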
841 Seismic Vulnerability Analysis of Arch Dam Based on Response Surface Method
Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong
Abstract:
Earthquake is one of the main loads threatening dam safety. Once a dam is damaged, it brings huge losses of life and property to the country and its people. Therefore, it is very important to research the seismic safety of dams. Due to the complex foundation conditions, high fortification intensity, and high scientific and technological content, it is necessary to adopt reasonable methods to evaluate the seismic safety performance of concrete arch dams built and under construction in strong earthquake areas. Structural seismic vulnerability analysis can predict the probability of structural failure at all levels under earthquakes of different intensities, which can provide a scientific basis for reasonable seismic safety evaluation and decision-making. In this paper, the response surface method (RSM) is applied to the seismic vulnerability analysis of arch dams, which improves the efficiency of vulnerability analysis. Based on the central composite test design method, the material-seismic intensity samples are established. The response surface model with arch crown displacement as the performance index is obtained by finite element (FE) calculation of the samples, and then the accuracy of the response surface model is verified. To obtain the seismic vulnerability curves, the seismic intensity measure Sa(T1) is chosen to be 0.1-1.2g, with an interval of 0.1g and a total of 12 intensity levels. For each seismic intensity level, the arch crown displacement corresponding to 100 sets of different material samples can be calculated by algebraic operation of the response surface model, which avoids 1200 nonlinear dynamic calculations of the arch dam; thus, the efficiency of vulnerability analysis is greatly improved.
Keywords: high concrete arch dam, performance index, response surface method, seismic vulnerability analysis, vector-valued intensity measure
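As a hedged illustration of the workflow described (fit a response surface to FE samples, then evaluate it algebraically across material samples and intensity levels to estimate exceedance probabilities), the sketch below uses synthetic stand-in data; the variable names, quadratic model order, and damage threshold are assumptions, not the study's values.

```python
# Sketch: response-surface-based fragility curve (synthetic stand-in for FE data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Pretend FE results from a central composite design: inputs are
# (elastic modulus ratio, damping ratio, Sa(T1)); output is crown displacement.
X_design = rng.uniform([0.8, 0.03, 0.1], [1.2, 0.07, 1.2], size=(50, 3))
disp = 40 * X_design[:, 2] / X_design[:, 0] + rng.normal(0, 0.5, 50)  # toy response

rsm = LinearRegression().fit(PolynomialFeatures(2).fit_transform(X_design), disp)

# Fragility: P(displacement > limit) over 100 material samples per intensity level.
limit = 35.0                                   # assumed damage threshold (mm)
intensities = np.arange(0.1, 1.21, 0.1)        # 12 levels, as in the abstract
materials = rng.uniform([0.8, 0.03], [1.2, 0.07], size=(100, 2))
for sa in intensities:
    X = np.column_stack([materials, np.full(100, sa)])
    d = rsm.predict(PolynomialFeatures(2).fit_transform(X))  # algebraic, no FE rerun
    print(f"Sa(T1)={sa:.1f}g  P(exceed)={np.mean(d > limit):.2f}")
```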
840 Optimal Seismic Design of Reinforced Concrete Shear Wall-Frame Structure
Authors: H. Nikzad, S. Yoshitomi
Abstract:
In this paper, the optimal seismic design of reinforced concrete shear wall-frame building structures was carried out using structural optimization. The optimal section sizes were generated through structural optimization based on linear static analysis conforming to the American Concrete Institute building design code (ACI 318-14). An analytical procedure was followed to validate the accuracy of the proposed method by comparing stresses on structural members through the output files of MATLAB and ETABS. In order to account for the difference in stresses in structural elements between ETABS and MATLAB, and to avoid over-stressed members in ETABS, a stress constraint ratio of MATLAB to ETABS was modified and introduced for the most critical load combinations and structural members. Moreover, the seismic design of the structure was done following the International Building Code (IBC 2012), American Concrete Institute Building Code (ACI 318-14), and American Society of Civil Engineers (ASCE 7-10) standards. Typical reinforcement requirements for the structural wall, beam, and column were discussed and presented using the ETABS structural analysis software. The placement and detailing of reinforcement of structural members were also explained and discussed. The outcomes of this study show that the modification of section sizes plays a vital role in finding an optimal combination of practical section sizes. In contrast, the optimization problem with size constraints has a higher cost than that without size constraints. Moreover, the comparison of the optimization results with those of the ETABS program was shown to be satisfactory and governed by ACI 318-14 building design code criteria.
Keywords: structural optimization, seismic design, linear static analysis, ETABS, MATLAB, RC shear wall-frame structures
839 Enhancing Wire Electric Discharge Machining Efficiency through ANOVA-Based Process Optimization
Authors: Rahul R. Gurpude, Pallvita Yadav, Amrut Mulay
Abstract:
In recent years, there has been a growing focus on advanced manufacturing processes, and one such emerging process is wire electric discharge machining (WEDM). WEDM is a precision machining process specifically designed for cutting electrically conductive materials with exceptional accuracy. It achieves material removal from the workpiece metal through spark erosion facilitated by electricity. Initially developed as a method for precision machining of hard materials, WEDM has witnessed significant advancements in recent times, with numerous studies and techniques based on electrical discharge phenomena being proposed. These research efforts and methods in the field of EDM encompass a wide range of applications, including mirror-like finish machining, surface modification of mold dies, machining of insulating materials, and manufacturing of micro products. WEDM has particularly found extensive usage in the high-precision machining of complex workpieces that possess varying hardness and intricate shapes. During the cutting process, a wire with a diameter of 0.18 mm is employed. The evaluation of EDM performance typically revolves around two critical factors: material removal rate (MRR) and surface roughness (SR). To comprehensively assess the impact of machining parameters on the quality characteristics of EDM, an analysis of variance (ANOVA) was conducted. This statistical analysis aimed to determine the significance of the various machining parameters and their relative contributions in controlling the response of the EDM process. By undertaking this analysis, optimal levels of machining parameters were identified to achieve desirable material removal rates and surface roughness.
Keywords: WEDM, MRR, optimization, surface roughness
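A minimal sketch of the ANOVA step the abstract describes, on synthetic data; the factor names (pulse-on time, current) and the two-level replicated design are assumptions, not the study's actual parameters.

```python
# Sketch: ANOVA on a WEDM response with statsmodels (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
# Assumed two machining factors at two levels, 5 replicates each.
df = pd.DataFrame([(ton, amp) for ton in (2, 8) for amp in (1, 3) for _ in range(5)],
                  columns=["pulse_on", "current"])
df["mrr"] = 0.5 * df.pulse_on + 1.2 * df.current + rng.normal(0, 0.4, len(df))

model = ols("mrr ~ C(pulse_on) * C(current)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)       # significance of each factor
table["contrib_%"] = 100 * table.sum_sq / table.sum_sq.sum()  # relative contribution
print(table)
```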
838 Research on the Renewal and Utilization of Space under the Bridge in Chongqing Based on Spatial Potential Evaluation
Authors: Xuelian Qin
Abstract:
Urban "organic renewal" based on the development of existing resources in high-density urban areas has become the mainstream of urban development in the new era. As an important stock resource of public space in high-density urban areas, promoting its value remodeling is an effective way to alleviate the shortage of public space resources. However, due to the lack of evaluation links in the process of underpass space renewal, a large number of underpass space resources have been left idle, facing the problems of low space conversion efficiency, lack of accuracy in development decision-making, and low adaptability of functional positioning to citizens' needs. Therefore, it is of great practical significance to construct the evaluation system of under-bridge space renewal potential and explore the renewal mode. In this paper, some of the under-bridge spaces in the main urban area of Chongqing are selected as the research object. Through the questionnaire interviews with the users of the built excellent space under the bridge, three types of six levels and twenty-two potential evaluation indexes of "objective demand factor, construction feasibility factor and construction suitability factor" are selected, including six levels of land resources, infrastructure, accessibility, safety, space quality and ecological environment. The analytical hierarchy process and expert scoring method are used to determine the index weight, construct the potential evaluation system of the space under the bridge in high-density urban areas of Chongqing, and explore the direction of renewal and utilization of its suitability. To provide feasible theoretical basis and scientific decision support for the use of under bridge space in the future.Keywords: high density urban area, potential evaluation, space under bridge, updated using
837 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO), utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently would not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
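The two distinctive steps the abstract names, DBSCAN outlier removal and greedy layer-wise pretraining, can be sketched as follows; the data, network size, and training schedule are illustrative assumptions, not the study's configuration.

```python
# Sketch: DBSCAN outlier removal, then greedy layer-wise pretraining of a regressor.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4)).astype(np.float32)          # stand-in process inputs
w = np.array([1.0, -0.5, 0.3, 0.8], dtype=np.float32)
y = (X @ w)[:, None] + rng.normal(0, 0.01, (300, 1)).astype(np.float32)

keep = DBSCAN(eps=1.5, min_samples=5).fit(X).labels_ != -1  # label -1 marks outliers
X_t, y_t = torch.from_numpy(X[keep]), torch.from_numpy(y[keep])

layers, in_dim, hidden = [], X.shape[1], 16
for depth in range(3):                        # grow the network one layer at a time
    layers.append(nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()))
    in_dim = hidden
    head = nn.Linear(hidden, 1)               # temporary regression head
    model = nn.Sequential(*layers, head)
    for p in model.parameters():              # freeze everything ...
        p.requires_grad = False
    for p in [*layers[-1].parameters(), *head.parameters()]:
        p.requires_grad = True                # ... except the newest layer and head
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X_t), y_t)
        loss.backward()
        opt.step()
    print(f"pretrained layer {depth + 1}, mse = {loss.item():.4f}")
# A final fine-tuning pass with all parameters unfrozen would follow in practice.
```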
836 An Investigation into Slow ESL Reading Speed in Pakistani Students
Authors: Hina Javed
Abstract:
This study investigated the different strategies used by Pakistani students learning English as a second language at secondary school level. The basic premise of the study is that ESL students face tremendous difficulty while reading a text in English. It also seeks to dig into the different causes of their slow reading, which may range from word reading accuracy, mental translation, lexical density, cultural gaps, and complex syntactic constructions to back skipping. Sixty Grade 7 students from two secondary mainstream schools in Lahore were selected for the study, thirty being boys and thirty girls. They were administered reading-related and reading speed pre- and post-tests. The purpose of the tests was to gauge their performance on different reading tasks so as to be able to see how they used strategies, if any, and also to ascertain the causes hampering their performance on those tests. In the pre-tests, they were given simple texts with considerable lexical density and a moderately complex sentential layout. In the post-tests, the reading tasks contained comic strips, texts with visuals, texts with controlled vocabulary, and an evenly distributed, varied range of simple, compound, and complex sentences. Both tests were timed. The results gleaned from the data corroborated the researchers' basic hunch that students performed significantly better on the post-tests than on the pre-tests. The findings suggest that the morphological structure of words and lexical density are the main sources of reading comprehension difficulties in poor ESL readers. It is also confirmed that if texts are accompanied by pictorial visuals, students' reading speed and comprehension are greatly facilitated. There is no substantial evidence that ESL readers adopt any specific strategy while reading in English.
Keywords: slow ESL reading speed, mental translation, complex syntactic constructions, back skipping
835 Biomechanics of Ceramic on Ceramic vs. Ceramic on XLPE Total Hip Arthroplasties During Gait
Authors: Athanasios Triantafyllou, Georgios Papagiannis, Vassilios Nikolaou, Panayiotis J. Papagelopoulos, George C. Babis
Abstract:
In vitro measurements are widely used to predict THA wear rates by implementing gait kinematic and kinetic parameters. Clinical tests of materials and designs are crucial to prove the accuracy and validity of such measurements. The purpose of this study is to examine the effect of THA gait kinematics and kinetics on wear during gait, the essential functional activity of humans, by comparing in vivo gait data to in vitro results. Our study hypothesis is that both implants will present the same hip joint kinematics and kinetics during gait. 127 unilateral primary cementless total hip arthroplasties were included in the research. Independent t-tests were used to identify statistically significant differences in kinetic and kinematic data extracted from 3D gait analysis. No statistically significant differences were observed in mean peak abduction, flexion, and extension moments between the two groups (P(abduction) = 0.125, P(flexion) = 0.218, P(extension) = 0.082). The kinematic measurements show no statistically significant differences either (P(ROM flexion-extension) = 0.687, P(ROM abduction-adduction) = 0.679). THA kinematics and kinetics during gait are important biomechanical parameters directly associated with implant wear. In vitro studies report less wear in CoC than CoXLPE when tested with the same gait cycle kinematic protocol. Our findings confirm that both implants behave identically in terms of kinematics in the clinical environment, thus strengthening the in vitro results of CoC's advantage. Correlated with all other significant factors that affect THA wear, these findings could address the wear of CoC and CoXLPE through a more complete prism.
Keywords: total hip arthroplasty biomechanics, THA gait analysis, ceramic on ceramic kinematics, ceramic on XLPE kinetics, total hip replacement wear
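A hedged sketch of the statistical comparison described (independent t-tests on peak gait moments between two implant groups); the arrays are synthetic stand-ins for the 3D gait analysis output, and the group sizes are assumptions.

```python
# Sketch: independent t-test on peak abduction moments for two implant groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
coc = rng.normal(1.05, 0.15, 64)     # synthetic peak moments, CoC group (Nm/kg)
coxlpe = rng.normal(1.02, 0.15, 63)  # synthetic peak moments, CoXLPE group

t, p = stats.ttest_ind(coc, coxlpe)  # two-sided; assumes equal variances by default
print(f"t={t:.2f}, p={p:.3f}")       # p > 0.05 -> no significant group difference
```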
834 Design of a Fuzzy Expert System for the Impact of Diabetes Mellitus on Cardiac and Renal Impediments
Authors: E. Rama Devi Jothilingam
Abstract:
Diabetes mellitus is now one of the most common non-communicable diseases globally. India leads the world with the largest number of diabetic subjects, earning the title "diabetes capital of the world". In order to reduce the mortality rate, a fuzzy expert system is designed to predict the severity of the cardiac and renal problems of diabetic patients using fuzzy logic. Since uncertainty is inherent in medicine, fuzzy logic is used in this research work to remove the inherent fuzziness of linguistic concepts and the uncertain status in diabetes mellitus, which is the prime cause of cardiac arrest and renal failure. In this work, the controllable risk factors "blood sugar, insulin, ketones, lipids, obesity, blood pressure, and protein/creatinine ratio" are considered as input parameters, and the "stages of cardiac" (SOC) and "stages of renal" (SORD) are considered as the output parameters. Triangular membership functions are used to model the input and output parameters. The rule base is constructed for the proposed expert system based on knowledge from medical experts. A Mamdani inference engine is used to infer the information based on the rule base to take major decisions in diagnosis. Mean of maximum is used to get a non-fuzzy control action that best represents the possibility distribution of an inferred fuzzy control action. The proposed system also classifies the patients into high risk and low risk using fuzzy c-means clustering techniques so that patients at high risk are treated immediately. The system is validated with MATLAB and is used as a tracking system with accuracy and robustness.
Keywords: diabetes mellitus, fuzzy expert system, Mamdani, MATLAB
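A minimal, hedged sketch of Mamdani inference with mean-of-maximum defuzzification, the combination the abstract describes; the membership functions, rules, and patient values are invented for illustration and are not the system's actual rule base.

```python
# Sketch: one-patient Mamdani inference with mean-of-maximum defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

severity = np.linspace(0, 10, 1001)            # output universe: cardiac stage score

# Illustrative rules: IF sugar high AND bp high THEN severe; IF both normal THEN mild.
sugar, bp = 210.0, 150.0                       # one patient's crisp inputs (assumed)
fire_severe = min(tri(sugar, 150, 250, 350), tri(bp, 130, 170, 210))
fire_mild = min(tri(sugar, 70, 120, 180), tri(bp, 90, 115, 140))

# Mamdani: clip each consequent at its firing strength, aggregate by max.
agg = np.maximum(np.minimum(tri(severity, 5, 8, 10), fire_severe),
                 np.minimum(tri(severity, 0, 2, 5), fire_mild))

mom = severity[agg == agg.max()].mean()        # mean of maximum defuzzification
print(f"defuzzified severity = {mom:.2f}")
```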
833 Organizational Commitment and Job Satisfaction of Job Order Personnel in the Overseas Workers Welfare Administration Regional Welfare Office Caraga
Authors: Anne Jane M. Hallasgo
Abstract:
This study assessed the level of job satisfaction and organizational commitment among job order personnel at the Overseas Workers Welfare Administration (OWWA) Regional Welfare Office Caraga. The primary objective of the study was to determine the correlation between the employees' level of organizational commitment, job satisfaction, and their work performance. A carefully selected sample of twenty-five job order personnel from the OWWA Regional Welfare Office Caraga participated in the study. These individuals were chosen to represent the organization's job order workforce. For accuracy and dependability, various statistical methods and instruments were employed, including advanced statistical tests like the independent sample t-test, one-way analysis of variance (ANOVA), and Spearman's rank correlation coefficient, as well as descriptive statistics like mean, frequency, and percentage. The study found an acceptable level of job satisfaction regarding work performance. It revealed a significant relationship between affective commitment and job satisfaction concerning leadership and coworkers. A correlation was observed between normative commitment and work performance. The findings suggest that organizations emphasizing positive leadership, fostering supportive coworker relationships, aligning with employee values, and promoting a culture of commitment are likely to enhance both affective and normative commitment, thereby improving overall employee satisfaction. The study recommends designing and implementing a holistic employee well-being program that addresses physical, mental, and emotional health, contributing to increased job satisfaction and organizational commitment and creating a healthier, more engaged workforce. This research contributes to the understanding of the dynamics of organizational commitment and job satisfaction among job order employees in the public sector.
Keywords: affective commitment, continuous commitment, normative commitment, job satisfaction
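For readers unfamiliar with the Spearman step named above, a small sketch on synthetic survey scores follows; only the sample size of 25 comes from the abstract, and the Likert scales are assumptions.

```python
# Sketch: Spearman's rank correlation between commitment and satisfaction scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
commitment = rng.integers(1, 6, 25)                        # 5-point Likert, n = 25
satisfaction = np.clip(commitment + rng.integers(-1, 2, 25), 1, 5)

rho, p = stats.spearmanr(commitment, satisfaction)
print(f"rho={rho:.2f}, p={p:.4f}")   # monotone association between the two scales
```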
832 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm
Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi
Abstract:
To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are more common, economic, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method of structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from static data. Damage in a structure changes its stiffness, so this method determines damage based on changes in the structural stiffness parameter. Changes in the static response caused by structural damage are used to produce a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, optimization is used to minimize an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios are defined for damage detection (a single scenario and multiple scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; if the best result is obtained, it means that the method is reliable. This strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that damage detection in multiple-damage scenarios also gives efficient answers. Even the existence of noise in the measurements does not reduce the accuracy of the damage detection method in these structures.
Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm
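A hedged sketch of the approach on a toy structure: a GA searches for stiffness scale factors that reproduce measured static displacements. The spring-chain model, GA settings, and damage pattern are assumptions standing in for the paper's plane truss.

```python
# Sketch: GA identifies stiffness reductions in a 1D spring chain from static data.
import numpy as np

rng = np.random.default_rng(5)
n, k0 = 5, 1000.0
force = np.array([0, 0, 0, 0, 50.0])               # static tip load, fixed-free chain

def displacements(alphas):
    """Static displacements of a spring chain; alphas scale each spring stiffness."""
    k = k0 * np.asarray(alphas)
    K = np.zeros((n, n))
    for i in range(n):                              # assemble chain stiffness matrix
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i + 1] -= k[i + 1]
            K[i + 1, i] -= k[i + 1]
            K[i + 1, i + 1] += k[i + 1]
    return np.linalg.solve(K, force)

true_alpha = np.array([1.0, 1.0, 0.6, 1.0, 1.0])   # assumed 40% damage in spring 3
measured = displacements(true_alpha)                # stands in for measured data

pop = rng.uniform(0.3, 1.0, (60, n))                # initial population of patterns
for _ in range(300):
    err = np.array([np.sum((displacements(a) - measured) ** 2) for a in pop])
    elite = pop[np.argsort(err)[:20]]               # selection: keep the fittest third
    p1 = elite[rng.integers(0, 20, 40)]
    p2 = elite[rng.integers(0, 20, 40)]
    children = np.where(rng.random((40, n)) < 0.5, p1, p2)   # uniform crossover
    children += rng.normal(0, 0.02, (40, n))                 # mutation
    pop = np.vstack([elite, np.clip(children, 0.1, 1.0)])

err = np.array([np.sum((displacements(a) - measured) ** 2) for a in pop])
print("identified pattern:", pop[np.argmin(err)].round(2))   # ~ [1, 1, 0.6, 1, 1]
```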
831 Content Analysis of Video Translations: Examining the Linguistic and Thematic Approach by Translator Abdullah Khrief on the X Platform
Authors: Easa Almustanyir
Abstract:
This study investigates the linguistic and thematic approach of translator Abdullah Khrief in the context of video translations on the X platform. The sample comprises 15 videos from Khrief's account, covering diverse content categories like science, religion, social issues, personal experiences, lifestyle, and culture. The analysis focuses on two aspects: language usage and thematic representation. Regarding language, the study examines the prevalence of English while considering the inclusion of French and German content, highlighting Khrief's multilingual versatility and ability to navigate cultural nuances. Thematically, the study explores the diverse range of topics covered, encompassing scientific, religious, social, and personal narratives, underscoring Khrief's broad subject matter expertise and commitment to knowledge dissemination. The study employs a mixed-methods approach, combining quantitative data analysis with qualitative content analysis. Statistical data on video languages, presenter genders, and content categories are analyzed, and a thorough content analysis assesses translation accuracy, cultural appropriateness, and overall quality. Preliminary findings indicate a high level of professionalism and expertise in Khrief's translations. The absence of errors across the diverse range of videos establishes his credibility and trustworthiness. Furthermore, the accurate representation of cultural nuances and sensitive topics highlights Khrief's cultural sensitivity and commitment to preserving intended meanings and emotional resonance.
Keywords: audiovisual translation, linguistic versatility, thematic diversity, cultural sensitivity, content analysis, mixed-methods approach
830 Ethanol Chlorobenzene Dosimeter Usage for Measuring Dose of the Intraoperative Linear Electron Accelerator System
Authors: Mojtaba Barzegar, Alireza Shirazi, Saied Rabi Mahdavi
Abstract:
Intraoperative radiation therapy (IORT) is an innovative treatment modality involving the delivery of a large single dose of radiation to the tumor bed during surgery. Radiotherapy success depends on the absorbed dose delivered to the tumor. Achieving better accuracy in patient treatment depends upon the dose measured by a standard dosimeter such as an ionization chamber, but because of the high density of electric charge per pulse produced by the accelerator in the ionization chamber volume, the standard correction factor for ion recombination, Ksat, calculated with the classic two-voltage method is overestimated, so the use of dose-per-pulse-independent dosimeters such as the chemical Fricke and ethanol chlorobenzene (ECB) dosimeters has been suggested. Dose measurement is usually calculated and calibrated at Zmax. Ksat was calculated by comparison of the ion chamber response and the ECB dosimeter at each applicator degree, size, and dose. The relative output factors (OFs) for IORT applicators have been calculated and compared with experimentally determined values and the results simulated by Monte Carlo software. The absorbed doses have been calculated and measured with statistical uncertainties less than 0.7% and 2.5%, respectively. The relative differences between calculated and measured OFs were up to 2.5%; for major OFs, the agreement was better. Under these conditions, together with the relative absorbed dose calculations, the OFs could be considered an indication that the IORT electron beams have been well simulated. These investigations demonstrate that the full Monte Carlo simulation of the accelerator head, together with the ECB dosimeter, allows us to obtain detailed information on clinical IORT beams.
Keywords: intraoperative radiotherapy, ethanol chlorobenzene, Ksat, output factor, Monte Carlo simulation
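For context on the two-voltage method the abstract says overestimates Ksat in high dose-per-pulse beams, here is a hedged sketch of that classic calculation in its continuous-beam form; the readings and voltages are illustrative numbers, not the study's data.

```python
# Sketch: classic two-voltage ion recombination correction for continuous beams.
def ksat_two_voltage(m1, m2, v1, v2):
    """Ksat from chamber readings m1, m2 at polarizing voltages v1 > v2."""
    r = (v1 / v2) ** 2
    return (r - 1.0) / (r - m1 / m2)

m1, m2 = 1.000, 0.992        # illustrative readings at 300 V and 100 V
print(f"Ksat = {ksat_two_voltage(m1, m2, 300.0, 100.0):.4f}")
```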
829 Nation Branding as Reframing: From the Perspective of Translation Studies
Authors: Ye Tian
Abstract:
Soft power has replaced hard power and become one of the most attractive ways nations pursue to expand their international influence. One of the ways to improve a nation's soft power is to commercialise the country and brand or rebrand it to the international audience, and thus attract interest or foreign investment. In this process, translation has often been regarded as merely a tool, and research on it focuses either on translating literature as culture export or on how the (in)accuracy of translation influences the branding campaign. This paper proposes to analyse nation branding campaigns with framing theory, and thus gives translation studies an entry to come to centre stage in today's soft power research. To frame information or elements of a text, an event, or, as in this paper, a nation is to put them in a mental structure. This structure can be built by outsiders, by those who create the text or the event, or by citizens of the nation. Framing information like this can be regarded as a process of translation, as what translation does in its traditional meaning of 'translating a text' is to put a framework on the text to, deliberately or not, highlight some of the elements while hiding the others. In the discourse of nations, then, people unavoidably simplify a national image and put the nation into their imaginary framework. In this way, problems like stereotypes and prejudice come into being. Meanwhile, if nations seek ways to frame or reframe themselves, they make efforts to control what and who they are in the eyes of international audiences, and thus profit from it, economically or politically. The paper takes African nations, which are usually perceived as a whole, and the United Kingdom as examples to illustrate passive and active framing processes, and assesses both the positive and negative influences framing has on nations. In conclusion, translation as framing causes problems like prejudice, and the image of a nation is not always in the hands of nation branders, but reframing the nation in a positive way has the potential to turn the tide.
Keywords: framing, nation branding, stereotype, translation
828 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid
Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan
Abstract:
In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturing industry in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of the volume of acid to the quantity of sludge (B), and extraction time (C) on the efficiency of copper extraction were investigated with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process. The results were analyzed statistically using analysis of variance to identify the process variables that significantly affected the copper extraction efficiency. Results showed that all linear terms and the interaction term between the volume-of-acid-to-quantity-of-sludge ratio and extraction time (BC) had a statistically significant influence on the efficiency of copper extraction under the tested conditions, in which the most significant effect was ascribed to the volume-of-acid-to-quantity-of-sludge ratio (B), followed by sulfuric acid concentration (A), extraction time (C), and the interaction term BC, respectively. The remaining two-way interaction terms (AB, AC) and the three-way interaction term (ABC) were not statistically significant at the significance level of 0.05. The model equation was derived for the copper extraction process, and the optimization of the process was performed using a multiple response method called the desirability (D) function to optimize the extraction parameters by targeting maximum removal. The optimum extraction conditions, removing 99% of the copper, were found to be a sulfuric acid concentration of 0.9 M and a ratio of the volume of acid (mL) to the quantity of sludge (g) of 100:1, with an extraction time of 80 min. Experiments under the optimized conditions were carried out to validate the accuracy of the model.
Keywords: acid treatment, chemical extraction, sludge, waste management
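A sketch of the factorial-model-plus-desirability workflow described; the coded 2³ design, coefficients, and desirability bounds below are synthetic illustrations, not the study's data.

```python
# Sketch: 2^3 factorial model of copper removal plus a simple desirability search.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(6)
# Assumed coded factors: A = acid concentration, B = acid/sludge ratio, C = time.
levels = [-1, 1]
runs = pd.DataFrame([(a, b, c) for a in levels for b in levels for c in levels
                     for _ in range(3)], columns=list("ABC"))
runs["removal"] = (70 + 5 * runs.A + 12 * runs.B + 4 * runs.C
                   + 3 * runs.B * runs.C + rng.normal(0, 1, len(runs)))

model = ols("removal ~ A * B * C", data=runs).fit()
print(model.params.round(2))                        # B and BC dominate, as designed

# Desirability for a 'maximize' goal: d = (y - low) / (target - low), clipped to [0, 1].
grid = pd.DataFrame([(a, b, c) for a in np.linspace(-1, 1, 5)
                     for b in np.linspace(-1, 1, 5) for c in np.linspace(-1, 1, 5)],
                    columns=list("ABC"))
d = np.clip((model.predict(grid) - 60) / (99 - 60), 0, 1)
print("best coded settings:\n", grid.iloc[int(np.argmax(d))])
```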
827 Close-Range Remote Sensing Techniques for Analyzing Rock Discontinuity Properties
Authors: Sina Fatolahzadeh, Sergio A. Sepúlveda
Abstract:
This paper presents advanced developments in close-range, terrestrial remote sensing techniques to enhance the characterization of rock masses. The study integrates two state-of-the-art laser-scanning technologies, the HandySCAN and GeoSLAM laser scanners, to extract high-resolution geospatial data for rock mass analysis. These instruments offer high accuracy, precision, low acquisition time, and high efficiency in capturing intricate geological features in small to medium-sized outcrops and slope cuts. Using the HandySCAN and GeoSLAM laser scanners facilitates real-time, three-dimensional mapping of rock surfaces, enabling comprehensive assessments of rock mass characteristics. The collected data provide valuable insights into structural complexities, surface roughness, and discontinuity patterns, which are essential for geological and geotechnical analyses. The synergy of these advanced remote sensing technologies contributes to a more precise and straightforward understanding of rock mass behavior. In this case, the main parameters of RQD, joint spacing, persistence, aperture, roughness, infill, weathering, water condition, and joint orientation in a slope cut along the Sea-to-Sky Highway, BC, were remotely analyzed to calculate and evaluate the Rock Mass Rating (RMR) and Geological Strength Index (GSI) classification systems. Automatic and manual analyses of the acquired data are then compared with field measurements. The results show the usefulness of the proposed remote sensing methods and their appropriate conformity with the actual field data.
Keywords: remote sensing, rock mechanics, rock engineering, slope stability, discontinuity properties
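The RMR value mentioned is, in essence, a sum of ratings for the listed parameters; as a hedged illustration, the sketch below totals assumed ratings for the five basic RMR89 parameters plus a joint orientation adjustment. The ratings are placeholders, not the study's measurements.

```python
# Sketch: Rock Mass Rating (RMR89) as a sum of parameter ratings (placeholder values).
ratings = {
    "uniaxial_strength": 7,     # rating range 0-15
    "RQD": 13,                  # rating range 3-20
    "joint_spacing": 10,        # rating range 5-20
    "joint_condition": 20,      # roughness, aperture, infill, weathering: 0-30
    "groundwater": 10,          # rating range 0-15
}
orientation_adjustment = -5     # unfavourable joint orientation for a slope cut

rmr = sum(ratings.values()) + orientation_adjustment
print(f"RMR = {rmr}")           # 55 falls in the 'fair rock' class (41-60)
```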
826 Probiotic Antibacterial Test of Pediococcus pentosaceus Isolated from Dadih in Inhibiting Periodontitis Bacteria: In Vitro Study on Bacteria Aggregatibacter actinomycetemcomitans
Authors: Nurlaili Syafar Wulan, Almurdi, Suprianto Kosno
Abstract:
Introduction: Periodontitis is defined as an inflammatory disease of the tooth-supporting tissue, with irritation by specific pathogens as the main aetiology. Periodontitis can be cured by medical intervention accompanied by the administration of an antibiotic, but antibiotic use has a side effect in that it can cause bacterial resistance. This side effect can be avoided with probiotics, which carry antibiotic-like substances but do not have a bacterial-resistance effect; this makes probiotics a promising future periodontitis medication. The people of West Sumatra have their own typical traditional food product made from fermented buffalo's milk, called dadih, and it contains probiotics. Objectives: The aim of this study was to determine the ability of probiotic Pediococcus pentosaceus isolated from dadih to inhibit the growth of the bacterium Aggregatibacter actinomycetemcomitans. Material and Method: This was a true experimental study with a post-test and control group design. The study was conducted on 36 samples in 2 treatment groups: the test group with probiotic Pediococcus pentosaceus isolated from dadih and the negative control group with sterile aquadest. The antibacterial effect was tested using the Kirby-Bauer disk diffusion method and calculated by measuring the zone of inhibition on MHA around the paper disk using a sliding caliper with 0.5 mm accuracy. Result: The result of bivariate analysis using the independent t-test was p = 0.00, where p < 0.05 means that there is a significant difference between the test group and the negative control group. Conclusion: Probiotic Pediococcus pentosaceus isolated from dadih is able to inhibit the growth of Aggregatibacter actinomycetemcomitans.
Keywords: Aggregatibacter actinomycetemcomitans, antibacterial activities, periodontitis, probiotic Pediococcus pentosaceus
825 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model
Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu
Abstract:
The wide range of industrial applications involving boiling flows underlines the necessity of establishing fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, are introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, the fractal model, a force balance approach, and a mechanistic frequency model are used to predict the nucleation site density, bubble departure diameter, and bubble departure frequency. The presented wall heat flux partitioning closures were modified to consider the influence of bubbles sliding along the wall before lift-off, which usually happens in flow boiling. The simulation was performed based on the two-fluid model, where the standard k-ω SST model was selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and interfacial area concentration (IAC) are in good agreement with the experimental data. However, the predicted bubble velocity and Sauter mean diameter (SMD) are over-predicted. This over-prediction may be caused by the consideration of only dispersed, spherical bubbles in the simulations. In future work, the important physical mechanisms of bubbles, such as merging and shrinking while sliding on the heated wall, will be incorporated into this mechanistic model to enhance its capability for a wider range of flow predictions.
Keywords: subcooled boiling flow, computational fluid dynamics (CFD), mechanistic approach, two-fluid model
824 The Omani Learner of English Corpus: Source and Tools
Authors: Anood Al-Shibli
Abstract:
Designing a learner corpus is not an easy task to accomplish because dealing with learners' language involves many variables that might affect the results of any study based on learners' language production (spoken and written). It is also essential to design a learner corpus systematically, especially when it is intended as a reference for language research. Therefore, the design of the Omani Learner Corpus (OLEC) has undergone many explicit and systematic considerations. These criteria can be regarded as the foundation for designing any learner corpus to be exploited effectively in language use and language learning studies. Added to that, OLEC is a manually error-annotated corpus. Error annotation in learner corpora is essential; however, it is time-consuming and prone to errors. Consequently, a navigating tool was designed to help the annotators insert error codes in order to make the error-annotation process more efficient and consistent. To assure accuracy, an error-annotation procedure is followed to annotate OLEC, and some preliminary findings are noted. One of the main results of this procedure is the creation of an error-annotation system based on Omani learners' English language production. Because OLEC is still in its first stages, the primary findings are related to only one level of proficiency and one error type, namely verb-related errors. It is found that Omani learners in OLEC have a tendency toward more errors in forming the verb, followed by problems in verb agreement. Comparing the results to other error-based studies indicates that Omani learners tend to make basic verb errors, which can be found at lower levels of proficiency. To this end, it is essential to admit that examining learners' errors can give insights into language acquisition and language learning, and most errors do not happen randomly but occur systematically among language learners.
Keywords: error-annotation system, error-annotation manual, learner corpora, verbs related errors
823 Analysis of Grid Connected High Concentrated Photovoltaic Systems for Peak Load Shaving in Kuwait
Authors: Adel A. Ghoneim
Abstract:
Air conditioning devices are substantially utilized in the summer months; as a result, maximum loads in Kuwait take place in these intervals. Peak energy consumption is usually more expensive to satisfy compared to other standard power sources. The primary objective of the current work is to enhance the performance of high concentrated photovoltaic (HCPV) systems in an attempt to minimize peak power usage in Kuwait using HCPV modules. High concentrated PV multi-junction solar cells provide a promising route towards accomplishing the lowest price per kilowatt-hour. Nevertheless, these cells have various features that should be resolved for them to be feasible for extensive power production. A single-diode equivalent circuit model is formulated to analyze multi-junction solar cell efficiency in Kuwait weather circumstances, taking into account the effects of both the temperature and the concentration ratio. The diode shunt resistance, which is commonly ignored in established models, is considered in the present numerical model. The current model results are successfully validated against measurements from published data to within 1.8% accuracy. Present calculations reveal that the single-diode model considering the shunt resistance provides accurate and dependable results. The electrical efficiency (η) is observed to increase with concentration up to a specific concentration level, after which it decreases. Employing grid-connected HCPV systems results in significant peak load reduction.
Keywords: grid connected, high concentrated photovoltaic systems, peak load, solar cells
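A hedged sketch of a single-diode model that retains the shunt resistance term the abstract highlights, solved point-by-point for the I-V curve; all parameter values are illustrative, not the paper's fitted values.

```python
# Sketch: single-diode PV cell model including shunt resistance, solved per voltage.
import numpy as np
from scipy.optimize import brentq

# Assumed illustrative parameters for one junction at 25 C (not the paper's values).
IL, I0 = 3.0, 1e-10        # photocurrent (A), diode saturation current (A)
Rs, Rsh, n = 0.02, 200.0, 1.3
Vt = 0.025693              # thermal voltage kT/q at 298 K (V)

def current(v):
    """Solve IL - I0*(exp((v+i*Rs)/(n*Vt)) - 1) - (v+i*Rs)/Rsh - i = 0 for i."""
    f = lambda i: IL - I0 * np.expm1((v + i * Rs) / (n * Vt)) - (v + i * Rs) / Rsh - i
    return brentq(f, -IL, IL + 1.0)

voltages = np.linspace(0.0, 0.85, 200)
power = np.array([v * current(v) for v in voltages])
k = int(np.argmax(power))
print(f"Pmax = {power[k]:.3f} W at V = {voltages[k]:.3f} V")
```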
822 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high-grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves state-of-the-art segmentation fidelity with high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
821 Efficacy of Conservation Strategies for Endangered Garcinia gummi gutta under Climate Change in Western Ghats
Authors: Malay K. Pramanik
Abstract:
Climate change is continuously affecting ecosystems, species distributions, and global biodiversity. The assessment of a species' potential distribution and the spatial changes under various climate change scenarios is a significant step towards conservation and the mitigation of habitat shifts, species loss, and vulnerability. In this context, the present study aimed to predict the influence of current and future climate on an ecologically vulnerable medicinal species, Garcinia gummi-gutta, of the southern Western Ghats using maximum entropy (MaxEnt) modeling. The future projections were made for the periods 2050 and 2070 under RCP (Representative Concentration Pathways) scenarios 4.5 and 8.5, using 84 species occurrence records and climatic variables from three different models of the Intergovernmental Panel on Climate Change (IPCC) fifth assessment. Climatic variable contributions were assessed using the jackknife test, and an AUC value of 0.888 indicates that the model performs with high accuracy. The major influencing variables will be annual precipitation, precipitation of the coldest quarter, precipitation seasonality, and precipitation of the driest quarter. The model result shows that the current high-potential distribution of the species covers around 1.90% of the study area, 7.78% is good potential, and about 90.32% is moderate to very low potential for species suitability. Finally, the results of all models show that there will be a drastic decline in the suitable habitat distribution by 2050 and 2070 under all the RCP scenarios. The study signifies that the MaxEnt model can be an efficient tool for ecosystem management, biodiversity protection, and species rehabitation planning under climate change.
Keywords: Garcinia gummi gutta, maximum entropy modeling, medicinal plants, climate change, Western Ghats, MaxEnt
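As a small illustration of the AUC figure quoted, the sketch below scores synthetic presence and background points; only the 84 presence records come from the abstract, and the score distributions are invented.

```python
# Sketch: AUC check of a presence/background suitability model (synthetic scores).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
# 84 presence points vs. background points, with assumed suitability scores.
y = np.concatenate([np.ones(84), np.zeros(500)])
scores = np.concatenate([rng.beta(5, 2, 84), rng.beta(2, 4, 500)])

print(f"AUC = {roc_auc_score(y, scores):.3f}")  # values near 0.9 -> high discrimination
```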
820 Comparison of FNTD and OSLD Detectors' Responses to Light Ion Beams Using Monte Carlo Simulations and Experimental Data
Authors: M. R. Akbari, H. Yousefnia, A. Ghasemi
Abstract:
Al₂O₃:C,Mg fluorescent nuclear track detectors (FNTD) and Al₂O₃:C optically stimulated luminescence detectors (OSLD) are becoming two of the most applied detectors in ion dosimetry. Therefore, the response of these detectors to hadron beams is of high interest in radiation therapy (RT) using ion beams. In this study, these detectors' responses to proton and helium-4 ion beams were compared using Monte Carlo simulations. The calculated data for proton beams were compared with Markus ionization chamber (IC) measurements (in a water phantom) from the M.D. Anderson proton therapy center. Monte Carlo simulations were performed via the FLUKA code (version 2011.2-17). The detectors were modeled in cylindrical shape at various depths of the water phantom, without shading each other, to obtain the relative depth dose in the phantom. Mono-energetic parallel ion beams at different incident energies (100 MeV/n to 250 MeV/n) impinged perpendicularly on the phantom surface. For proton beams, the results showed that the simulated detectors over-respond relative to the IC measurements in the water phantom. In all cases, there was good agreement between the simulated ion ranges in water and the calculated and experimental results reported in the literature. For protons, the maximum peak-to-entrance dose ratio in the simulated water phantom was 4.3, compared with about 3 obtained from the IC measurements. For He-4 ion beams, the maximum peak-to-entrance ratio calculated by both detectors was less than 3.6 at all energies. Generally, it can be said that FLUKA is a good tool to calculate Al₂O₃:C,Mg FNTD and Al₂O₃:C OSLD detector responses to therapeutic proton and He-4 ion beams. It can also calculate proton and He-4 ion ranges with reasonable accuracy.
Keywords: comparison, FNTD and OSLD detectors response, light ion beams, Monte Carlo simulations
819 Fuzzy-Genetic Algorithm Multi-Objective Optimization Methodology for Cylindrical Stiffened Tanks Conceptual Design
Authors: H. Naseh, M. Mirshams, M. Mirdamadian, H. R. Fazeley
Abstract:
This paper presents an extension of a fuzzy-genetic algorithm multi-objective optimization methodology that can effectively be used to find the overall satisfaction of objective functions (selecting the design variables) in the early stages of the design process. The coupling of objective functions through design variables in an engineering design process results in difficulties in design optimization problems. In many cases, decision making on design variables conflicts with more than one discipline in system design. In space launch system conceptual design, decision making on some design variables (e.g., oxidizer-to-fuel mass flow rate, O/F) in the early stages of the design process is related to the objectives of the liquid propellant engine (specific impulse) and the tanks (structural weight). Thus, the primary application of this methodology is the design of a liquid propellant engine with the maximum specific impulse and a cylindrical stiffened tank with the minimum weight. To this end, the design problem is established as a fuzzy rule set based on the designer's expert knowledge with a holistic approach. The independent design variables in this model are the oxidizer-to-fuel mass flow rate, the thickness of the stringers, the thickness of the rings, and the shell thickness. To handle the mentioned problems, a fuzzy-genetic algorithm multi-objective optimization methodology is developed based on the Pareto optimal set. Consequently, this methodology is applied to one stage of a space launch system to illustrate its accuracy and efficiency.
Keywords: cylindrical stiffened tanks, multi-objective, genetic algorithm, fuzzy approach
818 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification
Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine
Abstract:
Agriculture is essential to the continuous existence of human life, as humans depend directly on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by the use of computer vision technology through image processing to determine the soil nutrient composition and the right amount, right time, and right place of application of farm input resources like fertilizers, herbicides, and water, as well as weed detection, early detection of pests and diseases, etc. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in the areas of image processing and data processing, which had been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetation need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied in these processes, from the use of neural networks, support vector machines, and fuzzy logic approaches to, most recently, the most effective approach, generating excellent results using the deep learning approach of convolutional neural networks for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results of the developed model yielded an average accuracy of 99.58%.
Keywords: convolution, feature extraction, image analysis, validation, precision agriculture
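A hedged sketch of what a small convolutional classifier for soil/vegetation image patches might look like; the architecture, 64x64 patch size, and the four nutrient classes are illustrative assumptions, not the paper's network.

```python
# Sketch: a small convolutional classifier for soil/vegetation image patches.
import torch
import torch.nn as nn

class SoilNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, n_classes))

    def forward(self, x):            # x: (batch, 3, 64, 64) RGB patches
        return self.head(self.features(x))

model = SoilNet()
logits = model(torch.randn(8, 3, 64, 64))    # untrained forward pass
print(logits.shape)                          # torch.Size([8, 4])
```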
816 Disparities Versus Similarities; WHO Good Practices for Pharmaceutical Quality Control Laboratories and ISO/IEC 17025:2017: International Standards for Quality Management Systems in Pharmaceutical Laboratories
Authors: Mercy Okezue, Kari Clase, Stephen Byrn, Paddy Shivanand
Abstract:
Medicines regulatory authorities expect pharmaceutical companies and contract research organizations to seek ways to certify that their laboratory control measurements are reliable. Establishing and maintaining laboratory quality standards are essential in ensuring the accuracy of test results. 'ISO/IEC 17025:2017' and the 'WHO Good Practices for Pharmaceutical Quality Control Laboratories (GPPQCL)' are two quality standards commonly employed in developing laboratory quality systems. A review was conducted on the two standards to elaborate on areas of convergence and divergence. The goal was to understand how differences in each standard's requirements may influence laboratories' choices as to which document is easier to adopt for quality systems. A qualitative review method compared similar items in the two standards while mapping out areas where there were specific differences in the requirements of the two documents. The review also provided a detailed description of the clauses and parts covering management and technical requirements in these laboratory standards. The review showed that both documents share requirements for over ten critical areas covering objectives, infrastructure, management systems, and laboratory processes. There were, however, differences in expectations: GPPQCL emphasizes system procedures for planning and future budgets that will ensure continuity, whereas ISO 17025 is more focused on a risk management approach to establishing laboratory quality systems. Elements of the two documents form common standard requirements to assure the validity of laboratory test results and promote mutual recognition. The ISO standard currently has more global patronage than GPPQCL.
Keywords: ISO/IEC 17025:2017, laboratory standards, quality control, WHO GPPQCL
815 Improving Pneumatic Artificial Muscle Performance Using Surrogate Model: Roles of Operating Pressure and Tube Diameter
Authors: Van-Thanh Ho, Jaiyoung Ryu
Abstract:
In soft robotics, the optimization of fluid dynamics through pneumatic methods plays a pivotal role in enhancing operational efficiency and reducing energy loss. This is particularly crucial when replacing conventional techniques such as cable-driven electromechanical systems. The pneumatic model employed in this study represents a sophisticated framework designed to efficiently channel pressure from a high-pressure reservoir to various muscle locations on the robot's body. This intricate network involves a branching system of tubes. The study introduces a comprehensive pneumatic model encompassing the components of a reservoir, tubes, and pneumatically actuated muscles (PAM). The development of this model is rooted in the principles of shock tube theory. Notably, the study leverages experimental data to enhance the understanding of the interplay between the PAM structure and the surrounding fluid. This improved interactive approach involves the use of morphing motion guided by a contraction function. The study's findings demonstrate a high degree of accuracy in predicting pressure distribution within the PAM. The model's predictive capabilities ensure that the error in comparison to experimental data remains below a threshold of 10%. Additionally, the research employs a machine learning model, specifically a surrogate model based on the Kriging method, to assess and quantify uncertainty factors related to the initial reservoir pressure and tube diameter. This comprehensive approach enhances our understanding of pneumatic soft robotics and its potential for improved operational efficiency.
Keywords: pneumatic artificial muscles, pressure drop, morphing motion, branched network, surrogate model
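A hedged sketch of a Kriging (Gaussian process) surrogate over the two inputs the abstract names, initial reservoir pressure and tube diameter; the response function, ranges, and kernel choice are synthetic stand-ins for the pneumatic model.

```python
# Sketch: Kriging (Gaussian process) surrogate over reservoir pressure and tube
# diameter; the response is a synthetic stand-in for the pneumatic model output.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(8)
X = rng.uniform([2.0, 2.0], [8.0, 10.0], size=(40, 2))    # [pressure bar, diameter mm]
y = X[:, 0] / np.sqrt(X[:, 1]) + rng.normal(0, 0.02, 40)  # toy pressure-drop response

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True).fit(X, y)

x_new = np.array([[5.0, 6.0]])
mean, std = gp.predict(x_new, return_std=True)
print(f"prediction = {mean[0]:.3f} +/- {std[0]:.3f}")     # std quantifies uncertainty
```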
814 Precipitation Intensity: Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley
Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara
Abstract:
The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can be used as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data and the corresponding durations derived from daily rainfall data available from the Tropical Rainfall Measuring Mission (TRMM) were used as the prime source of rainfall data. Landslide event records from the Border Road Organization (BRO) and some ancillary landslide inventory data for 2013 and 2014 were used to determine the intensity-duration (ID) rainfall threshold. The derived governing threshold equation, I = 4.738D^(-0.025), was validated with an accuracy of 70% for landslides during August and September 2014 and has been considered for further prediction of landslides in the study region. From the obtained results and validation, it can be inferred that this equation can be used for landslide initiation in the study area as part of an early warning system. Results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study has demonstrated a very low-cost method to get first-hand information on the possibility of an impending landslide in any region, thereby providing alerts and better preparedness for landslide disaster mitigation.
Keywords: landslide, intensity-duration, rainfall threshold, TRMM, slope, inventory, early warning system
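A short sketch of how such a Caine-type power-law threshold can be fitted and applied; the event values below are synthetic, and only the functional form I = aD^b comes from the text.

```python
# Sketch: fitting an intensity-duration threshold I = a * D**b in log-log space
# from triggering rainfall events (synthetic values below).
import numpy as np

duration_h = np.array([3, 6, 12, 24, 48, 72], dtype=float)   # event durations (h)
intensity = np.array([4.6, 4.5, 4.4, 4.3, 4.3, 4.2])         # mm/h at failure

b, log_a = np.polyfit(np.log(duration_h), np.log(intensity), 1)
a = np.exp(log_a)
print(f"I = {a:.3f} * D^({b:.3f})")          # shallow negative exponent, as in the text

def may_trigger(i_mmh, d_h):
    """Flag rainfall at or above the fitted threshold line."""
    return i_mmh >= a * d_h ** b

print(may_trigger(5.0, 12.0))   # True: above threshold, warning condition
```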
813 Optimisation of Metrological Inspection of a Developmental Aeroengine Disc
Authors: Suneel Kumar, Nanda Kumar J. Sreelal Sreedhar, Suchibrata Sen, V. Muralidharan
Abstract:
Fan technology is critical and crucial for any aero engine, and the fan disc forms a critical part of the fan module. It is an airworthiness requirement to have a metrologically qualified disc. The current study uses tactile probing and scanning on an articulated measuring machine (AMM), a bridge-type coordinate measuring machine (CMM), and metrology software for intermediate and final dimensional and geometrical verification during the prototype development of the disc, manufactured through forging and machining processes. The circumferential dovetails manufactured through the milling process are evaluated based on the analysed metrological process. To perform metrological optimisation, a change of philosophy is needed, making quality measurements available as fast as possible to improve process knowledge and accelerate the process, but with accurate, precise, and traceable measurements. The offline CMM programming for inspection and the optimisation of the CMM inspection plan are crucial portions of the study and are discussed. A dimensional measurement plan as per the ASME B89.7.2 standard, leading to an optimised CMM measurement plan and strategy, is an important requirement. The effects of the probing strategy, stylus configuration, and approximation strategy on the measurements of the circumferential dovetails of the developmental prototype disc are discussed. The results are discussed in the form of enhancement of the R&R (repeatability and reproducibility) values, with uncertainty levels within the desired limits. The findings from the measurement strategy adopted for disc dovetail evaluation and inspection time optimisation are discussed with the help of various analyses and graphical outputs obtained from the verification process.
Keywords: coordinate measuring machine, CMM, aero engine, articulated measuring machine, fan disc