Search results for: preposition error detection
513 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves when it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (Water GEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimization of these parameters to obtain the closest results in comparison with actual measured data in a real DWDS would result in both cost reduction and reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e., temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were necessarily considered as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature, and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 °C, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network would be minimized to 0.189, and the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 °C, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology to predict mono-chloramine residual has great potential to help water treatment plant operators accurately estimate the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
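As an illustration of the RSM optimization step described in this abstract, the following Python sketch fits a quadratic response surface to RMSE observations and minimizes it within explicit low/high bounds. The design points, RMSE values, and bounds are hypothetical placeholders, not the study's data, and this is not the authors' code.

```python
# Illustrative sketch (not the authors' code): fit a quadratic response surface to
# RMSE observations and locate the parameter set that minimizes RMSE.
# RMSE = sqrt(mean((predicted residual - measured residual)**2)) at each setting.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Design points: [pH, temperature (deg C), initial NH2Cl (mg/L)] and the network
# model's RMSE obtained at each setting (hypothetical values).
X = np.array([
    [7.4, 15.0, 3.0], [7.4, 35.0, 3.0], [8.0, 15.0, 3.0], [8.0, 35.0, 3.0],
    [7.4, 15.0, 5.0], [7.4, 35.0, 5.0], [8.0, 15.0, 5.0], [8.0, 35.0, 5.0],
    [7.7, 25.0, 4.0],
])
rmse = np.array([0.35, 0.28, 0.33, 0.24, 0.31, 0.26, 0.30, 0.22, 0.21])

# Full quadratic model (linear, interaction and squared terms), as in RSM.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), rmse)

# Minimize the fitted surface inside the low/high bounds to avoid extrapolation.
bounds = [(7.4, 8.0), (15.0, 35.0), (3.0, 5.0)]
res = minimize(lambda x: model.predict(poly.transform(x.reshape(1, -1)))[0],
               x0=np.array([7.7, 25.0, 4.0]), bounds=bounds)
print("optimum [pH, T, NH2Cl]:", res.x, "predicted RMSE:", res.fun)
```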
Procedia PDF Downloads 229
512 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by the biological sample. A biosensor carries out biological detection via a linked transducer and transmits the biological response as an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. GO solvent was coated onto a LightScribe DVD. The laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological microstructures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second model was a developed graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters to be assessed include the layer thickness and the continuous environment. The results presented show high accuracy and repeatability while achieving low-cost production.
Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 127
511 Efficient Degradation of Perfluorooctanoic Acid, an Emerging Contaminant, by a Hybrid Process of Membrane Distillation and Electro-Fenton
Authors: Afrouz Yousefi, Mohtada Sadrzadeh
Abstract:
The widespread presence of poly- and perfluoroalkyl substances (PFAS) poses a significant concern due to their ability to accumulate in living organisms and their persistence in the environment, owing to their robust carbon-fluorine (C-F) bonds, which require substantial energy to break (485 kJ/mol). The prevalence of toxic PFAS compounds can be highly detrimental to ecosystems, wildlife, and human health. Ongoing efforts are dedicated to investigating methods for fully breaking down and eliminating PFAS from the environment. Among the various techniques employed, advanced oxidation processes have shown promise in completely breaking down emerging contaminants in wastewater. However, the drawback lies in the relatively slow reaction rates of these processes and the substantial energy input required, which currently impedes their widespread commercial adoption. We developed a hybrid process, comprising electro-Fenton (EF) as an advanced oxidation process and membrane distillation (MD), to simultaneously degrade organic PFAS pollutants and extract pure water from the mixture. In this study, environmentally persistent perfluorooctanoic acid (PFOA), an emerging contaminant, was used to study the effectiveness of the electro-Fenton/membrane distillation hybrid system. The PFOA degradation studies were conducted in two modes: electro-Fenton, and electro-Fenton coupled with membrane distillation. High-performance liquid chromatography with ultraviolet detection (HPLC-UV), ion chromatography (measuring fluoride ion concentration), total organic carbon (TOC) decay, mineralization current efficiency (MCE), and specific energy consumption (SEC) were evaluated for the single EF and hybrid EF-MD processes. In contrast to the single EF reaction, TOC decay improved significantly in the EF-MD process. Overall, the MCE of the hybrid processes surpassed 100%, while it remained under 50% for the single EF reaction. Calculations of specific energy consumption (SEC) demonstrated a substantial decrease of nearly one-third in energy usage when integrating the EF reaction with the MD process.
Keywords: water treatment, PFAS, membrane distillation, electro-Fenton, advanced oxidation
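For readers unfamiliar with the MCE and SEC figures quoted above, the sketch below shows one way these quantities are commonly defined in the electro-Fenton literature; the abstract does not state the exact expressions used, so the formulas, the electron count, and all numerical inputs here are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical values): computing mineralization current
# efficiency (MCE) and specific energy consumption (SEC) from TOC decay, using
# definitions commonly adopted in the electro-Fenton literature (not necessarily
# those of this study).
F = 96487.0  # Faraday constant, C/mol

def mce_percent(n_electrons, volume_L, delta_toc_mg_L, n_carbons, current_A, time_h):
    """MCE (%) = n*F*V*dTOC / (4.32e7 * m * I * t) * 100,
    where 4.32e7 = 3600 s/h * 12000 mg C per mol C."""
    return 100.0 * n_electrons * F * volume_L * delta_toc_mg_L / (
        4.32e7 * n_carbons * current_A * time_h)

def sec_kwh_per_g_toc(cell_voltage_V, current_A, time_h, volume_L, delta_toc_mg_L):
    """Energy per gram of TOC removed: Ecell*I*t / (V*dTOC); V*A*h/mg == kWh/g."""
    return cell_voltage_V * current_A * time_h / (volume_L * delta_toc_mg_L)

# Hypothetical PFOA run: 8 carbons per molecule; the electron number is an assumption.
print(mce_percent(n_electrons=34, volume_L=0.25, delta_toc_mg_L=12.0,
                  n_carbons=8, current_A=0.3, time_h=6.0))
print(sec_kwh_per_g_toc(cell_voltage_V=4.0, current_A=0.3, time_h=6.0,
                        volume_L=0.25, delta_toc_mg_L=12.0))
```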
Procedia PDF Downloads 72
510 A Preliminary Analysis of the Effect After Cochlear Implantation in Unilateral Hearing Loss
Authors: Haiqiao Du, Qian Wang, Shuwei Wang, Jianan Li
Abstract:
Purpose: The aim is to evaluate the effect of cochlear implantation (CI) in patients with unilateral hearing loss, with a view to providing data support for the selection of therapeutic interventions for patients with single-sided deafness (SSD)/asymmetric hearing loss (AHL) and the broadening of the indications for CI. Methods: The study subjects were patients with unilateral hearing loss who underwent cochlear implantation surgery in our hospital in August 2022 and who were willing to cooperate with testing; they were divided into two groups: an SSD group and an AHL group. The enrolled patients were followed up for hearing level, tinnitus changes, speech recognition ability, sound source localization ability, and quality of life at five time points: preoperatively, and 1, 3, 6, and 12 months after postoperative start-up. Results: As of June 30, 2024, a total of nine patients had completed follow-up, including four in the SSD group and five in the AHL group. The mean postoperative aided hearing thresholds on the CI side were 31.56 dB HL and 34.75 dB HL in the two groups, respectively. Of the four patients with preoperative tinnitus symptoms (three patients in the SSD group and one patient in the AHL group), all showed a degree of reduction in Tinnitus Handicap Inventory (THI) scores, except for one patient who showed no change. In the SSD and AHL groups, the sound source localization results (expressed as RMS error values, with smaller values indicating better ability) were 66.87° and 77.41° preoperatively and 29.34° and 54.60° at 12 months after postoperative start-up, respectively, which showed that the ability to localize the sound source improved significantly with longer implantation time. The level of speech recognition was assessed by three test methods: the speech recognition rate for monosyllabic words in a quiet environment, and the speech recognition rates for different sound source directions at 0° and 90° (implantation side) in a noisy environment. The results of the three tests were 99.0%, 72.0%, and 36.0% in the preoperative SSD group and 96.0%, 83.6%, and 73.8% in the AHL group, respectively; they fluctuated in the postoperative period up to 3 months after start-up and stabilized at 12 months after start-up at 99.0%, 100.0%, and 100.0% in the SSD group and 99.5%, 96.0%, and 99.0% in the AHL group. Quality of life was subjectively evaluated by three instruments: the Speech, Spatial and Qualities of Hearing Scale (SSQ-12), the Quality-of-Life Bilateral Listening Questionnaire (QLBHE), and the Nijmegen Cochlear Implant Questionnaire (NCIQ). The results of the SSQ-12 (scored out of 10) showed that the preoperative and 12-month post-start-up scores were 6.35 and 6.46 in the SSD group, while they were 5.61 and 9.83 in the AHL group. The QLBHE scores (out of 100) were 61.0 and 76.0 in the SSD group and 53.4 and 63.7 in the AHL group for the preoperative versus 12-month post-start-up assessments. Conclusion: Patients with unilateral hearing loss can benefit from cochlear implantation: CI is effective in compensating for the hearing on the affected side and reduces the accompanying tinnitus symptoms; there is a significant improvement in sound source localization and in speech recognition in the presence of noise; and quality of life is improved.
Keywords: single-sided deafness, asymmetric hearing loss, cochlear implant, unilateral hearing loss
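The RMS error metric used above for sound source localization is simply the root-mean-square deviation between target and response azimuths. The short sketch below illustrates the calculation with hypothetical loudspeaker positions and listener judgments; it is not the study's data or code.

```python
# Illustrative sketch: RMS localization error from hypothetical target vs.
# response azimuths (degrees). Smaller values indicate better localization.
import numpy as np

def rms_error(target_deg, response_deg):
    """Root-mean-square localization error: sqrt(mean((response - target)^2))."""
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return float(np.sqrt(np.mean((response - target) ** 2)))

targets   = [-60, -30, 0, 30, 60]    # loudspeaker azimuths (hypothetical)
responses = [-45, -40, 10, 20, 75]   # a listener's judgments (hypothetical)
print(f"RMS error: {rms_error(targets, responses):.2f} degrees")
```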
Procedia PDF Downloads 21
509 Industrial Waste Multi-Metal Ion Exchange
Authors: Thomas S. Abia II
Abstract:
Intel Chandler Site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Spanning a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (baseline average ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was consequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were translated into system operating modifications following multiple trial-and-error experiments. Although the DMW treatment system failed to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ at pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (83% baseline reduction). This milestone was achieved even though the average influent manganese to DMW increased from 1.0 ± 13.7 mg·L⁻¹ at pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (300% baseline uptick). Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the largest single point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away from this work is an analysis of the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese
Procedia PDF Downloads 146
508 Molecular Approach for the Detection of Lactic Acid Bacteria in the Kenyan Spontaneously Fermented Milk, Mursik
Authors: John Masani Nduko, Joseph Wafula Matofari
Abstract:
Many spontaneously fermented milk products are produced in Kenya, where they are integral to the human diet and play a central role in enhancing food security and income generation via small-scale enterprises. Fermentation enhances product properties such as taste, aroma, shelf-life, safety, texture, and nutritional value. Some of these products have demonstrated therapeutic and probiotic effects, although recent reports have linked some to death, biotoxin infections, and esophageal cancer. These products are mostly processed from poor-quality raw materials under unhygienic conditions, resulting in inconsistent product quality and limited shelf-lives. Though very popular, research on their processing technologies is limited, and none of the products has been produced under controlled conditions using starter cultures. To modernize the processing technologies for these products, our study aims at describing the microbiology, biochemistry, and chemical composition of a representative Kenyan spontaneously fermented milk product, Mursik, using modern biotechnology (DNA sequencing). Moreover, co-creation processes reflecting stakeholders’ experiences with traditional fermented milk production technologies and utilization, ideals, and senses of value, which will allow the generation of products based on common ground for rapid progress, will be discussed. Knowledge of the value of clean starting raw material will be emphasized, the need for the definition of fermentation parameters highlighted, and the employment of standard equipment to attain controlled fermentation discussed. This presentation will review the available information regarding the traditional fermented milk Mursik and highlight our current research work on the application of molecular approaches (metagenomics) for the valorization of the Mursik production process through starter culture/probiotic strain isolation and identification, and on quality and safety aspects of the product. The importance of the research and future research areas on the same subject will also be highlighted.
Keywords: lactic acid bacteria, high throughput biotechnology, spontaneous fermentation, Mursik
Procedia PDF Downloads 297
507 Biodegradation of Endoxifen in Wastewater: Isolation and Identification of Bacterial Degraders, Kinetics, and By-Products
Authors: Marina Arino Martin, John McEvoy, Eakalak Khan
Abstract:
Endoxifen is an active metabolite responsible for the effectiveness of tamoxifen, a chemotherapeutic drug widely used for endocrine-responsive breast cancer and chemo-preventive long-term treatment. Tamoxifen and endoxifen are not completely metabolized in the human body and are actively excreted. As a result, they are released to the water environment via wastewater treatment plants (WWTPs). The presence of tamoxifen in the environment produces negative effects on aquatic life due to its antiestrogenic activity. Because endoxifen is 30-100 times more potent than tamoxifen itself and also presents antiestrogenic activity, its presence in the water environment could result in even more toxic effects on aquatic life compared to tamoxifen. Data on actual concentrations of endoxifen in the environment are limited due to the recent discovery of endoxifen's pharmaceutical activity. However, endoxifen has been detected in hospital and municipal wastewater effluents. The detection of endoxifen in wastewater effluents questions the treatment efficiency of WWTPs. Studies reporting information about endoxifen removal in WWTPs are also scarce. One study used chlorination to eliminate endoxifen in wastewater; however, inefficient degradation of endoxifen by chlorination and the production of hazardous disinfection by-products were observed. Therefore, there is a need to remove endoxifen from wastewater prior to chlorination in order to reduce the potential release of endoxifen into the environment and its possible effects. The aim of this research is to isolate and identify bacterial strain(s) capable of degrading endoxifen into less hazardous compound(s). For this purpose, bacterial strains from WWTPs were exposed to endoxifen as a sole carbon and nitrogen source for 40 days. Bacteria showing positive growth were isolated and tested for endoxifen biodegradation. Endoxifen concentration and by-product formation were monitored. The Monod kinetic model was used to determine the endoxifen biodegradation rate. Preliminary results of the study suggest that the bacteria isolated from WWTPs are able to grow in the presence of endoxifen as a sole carbon and nitrogen source. Ongoing work includes identification of these bacterial strains and of the by-product(s) of endoxifen biodegradation.
Keywords: biodegradation, bacterial degraders, endoxifen, wastewater
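The Monod model mentioned above relates the specific degradation rate to substrate concentration as mu(S) = mu_max * S / (Ks + S). The sketch below fits those two parameters to hypothetical rate-versus-concentration data by non-linear least squares; it is illustrative only and not the study's dataset.

```python
# Illustrative sketch (hypothetical data): fitting the Monod model
# mu(S) = mu_max * S / (Ks + S) to specific degradation rates observed at
# different endoxifen concentrations, using non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    return mu_max * S / (Ks + S)

S = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])       # substrate, mg/L (hypothetical)
mu = np.array([0.05, 0.09, 0.14, 0.20, 0.24, 0.26])  # observed rate, 1/h (hypothetical)

(mu_max, Ks), _ = curve_fit(monod, S, mu, p0=[0.3, 2.0])
print(f"mu_max = {mu_max:.3f} 1/h, Ks = {Ks:.2f} mg/L")
```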
Procedia PDF Downloads 217
506 Applications and Development of a Plug Load Management System That Automatically Identifies the Type and Location of Connected Devices
Authors: Amy Lebar, Kim L. Trenbath, Bennett Doherty, William Livingood
Abstract:
Plug and process loads (PPLs) account for 47% of U.S. commercial building energy use. There is huge potential to reduce whole-building consumption by targeting PPLs for energy savings measures or implementing some form of plug load management (PLM). Despite this potential, there has yet to be a widely adopted commercial PLM technology. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the data storage capability, and these capabilities were validated using an office building scenario. The PoC can identify when a device is plugged into an outlet and the location of the device in the building. When a device is moved, the PoC’s dashboard and database are automatically updated with the new location. The PoC implements controls on devices from the system dashboard so that devices maintain correct schedules regardless of where they are plugged in within a building. ATLIS’s primary technology application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. A system like ATLIS could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout. Such a framework is an opportunity to make PLM more widespread and reduce the amount of energy consumed by PPLs in current and future commercial buildings.
Keywords: commercial buildings, grid-interactive efficient buildings (GEB), miscellaneous electric loads (MELs), plug loads, plug load management (PLM)
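To make the "schedule follows the device" behavior concrete, the following minimal Python sketch models a central registry that updates a device's outlet location when it is re-plugged and applies its schedule wherever it sits. This is not the actual ATLIS implementation; all class, field, and identifier names are illustrative assumptions.

```python
# Minimal sketch (not the actual ATLIS implementation): a central registry that
# tracks each device's type and outlet location and applies its schedule
# wherever it is plugged in. All names and values are illustrative.
from dataclasses import dataclass
from datetime import time

@dataclass
class DeviceRecord:
    device_id: str
    device_type: str      # e.g., "monitor", "space heater"
    outlet_id: str        # current outlet (updated automatically on re-plug)
    on_schedule: tuple    # (start, end) times during which power is allowed

class PlugLoadRegistry:
    def __init__(self):
        self.devices = {}

    def register_plug_event(self, device_id, device_type, outlet_id, schedule):
        """Called when load detection identifies a device at an outlet; updates location."""
        rec = self.devices.get(device_id)
        if rec is None:
            rec = DeviceRecord(device_id, device_type, outlet_id, schedule)
            self.devices[device_id] = rec
        else:
            rec.outlet_id = outlet_id  # device moved; its schedule travels with it
        return rec

    def allow_power(self, device_id, now: time) -> bool:
        """Schedule follows the device, regardless of which outlet it is in."""
        rec = self.devices[device_id]
        start, end = rec.on_schedule
        return start <= now <= end

registry = PlugLoadRegistry()
registry.register_plug_event("mon-42", "monitor", "outlet-3F-17", (time(7), time(19)))
registry.register_plug_event("mon-42", "monitor", "outlet-2F-04", (time(7), time(19)))  # moved
print(registry.allow_power("mon-42", time(12, 30)))  # True: within schedule
```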
Procedia PDF Downloads 136
505 Food Composition Tables Used as an Instrument to Estimate the Nutrient Intake in Ecuador
Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria
Abstract:
There are several tools to assess the nutritional status of the population. A main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCTs, there are many error sources and variability factors that can be present in building those tables and can lead to an under- or overestimation of the nutrient intake of a population. This work identified different food composition tables used as an instrument to estimate nutrient intake in Ecuador. The collection of data for choosing FCTs was made through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire with general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.) was applied to the identified FCTs. Those variables were defined based on an extensive literature review. A descriptive analysis of content was performed. Ten printed tables and three databases were reported, all of which were treated indistinctly as food composition tables. We managed to get information from 69% of the references. Several informants referred to printed documents that were not accessible. In addition, searching the internet was not successful. Of the 9 final tables, n = 8 are from Latin America, and n = 5 of these were constructed by the indirect method (collection of already published data), having as a main source of information a database from the United States Department of Agriculture (USDA). One FCT was constructed using the direct method (bromatological analysis) and has its origin in Ecuador. 100% of the tables made a clear distinction between the food and its method of cooking, 88% of the FCTs expressed values of nutrients per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in a detailed way. The most complete FCTs were: INCAP (Central America) and Composition of Foods (Mexico). The most frequently referred-to table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most tables in this study. However, this method has the disadvantage that it generates less reliable food composition tables because foods show variations in composition. Therefore, a database cannot accurately predict the composition of any isolated sample of a food product. In conclusion, analyzing the pros and cons, and despite it being an FCT elaborated using an indirect method, it is considered appropriate to work with the FCT of INCAP Central America, given the proximity to our country and a food item list that is very similar to ours. Also, it is imperative to have as a reference the table of composition for Ecuadorian food, which, although not updated, was constructed using the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire with the purpose of assessing the food consumption of the Ecuadorian population. In case of disparate values, we will proceed by taking just the INCAP values because this is an updated table.
Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, nutrient intake of Ecuadorians, Latin America food composition tables
Procedia PDF Downloads 436
504 Magnetophotonics 3D MEMS/NEMS System for Quantitative Mitochondrial DNA Defect Profiling
Authors: Dar-Bin Shieh, Gwo-Bin Lee, Chen-Ming Chang, Chen Sheng Yeh, Chih-Chia Huang, Tsung-Ju Li
Abstract:
Mitochondrial defects have a significant impact on many human diseases and aging-associated phenotypes. Pathogenic mitochondrial DNA (mtDNA) mutations are diverse and usually present as heteroplasmic. The mtDNA 4977 bp deletion is one of the common mtDNA defects, and the ratio of mutated versus normal copies is significantly associated with clinical symptoms; thus, its quantitative detection has become an important unmet need for advanced disease diagnosis and therapeutic guidelines. This study presents a micro-electro-mechanical system (MEMS)-enabled automatic microfluidic chip that requires only a minimal sample. The system integrates multiple laboratory operation steps into a lab-on-a-chip for highly sensitive and prompt measurement. The entire process includes magnetic nanoparticle-based mtDNA extraction on chip, mutation-selective photonic DNA cleavage, and nanoparticle-accelerated photonic quantitative polymerase chain reaction (qPCR). All subsystems were packed inside a miniature three-dimensional microstructured system and operated in an automatic manner. Integration of magnetic beads with microfluidic transportation could promptly extract and enrich the specific mtDNA. The near-infrared-responsive magnetic nanoparticles enabled the micro-PCR to be operated by pulse-width-modulation-controlled laser pulsing to amplify the desired mtDNA, which was quantified by fluorescence intensity captured with a complementary metal-oxide-semiconductor (CMOS) array detector. The proportions of pathogenic mtDNA in total DNA were thus obtained. A microcapillary electrophoresis module was used to analyze the amplicon products. In conclusion, this study demonstrated a new magnetophotonics-based qPCR MEMS system that successfully detects and quantifies specific disease-related DNA mutations and thus provides a promising future for the rapid diagnosis of mitochondrial diseases.
Keywords: mitochondrial DNA, micro-electro-mechanical-system, magnetophotonics, PCR
Procedia PDF Downloads 226
503 Challenges of Outreach Team Leaders in Managing Ward Based Primary Health Care Outreach Teams in National Health Insurance Pilot Districts in KwaZulu-Natal
Authors: E. M. Mhlongo, E. Lutge
Abstract:
In 2010, South Africa’s National Department of Health (NDoH) launched a national primary health care (PHC) initiative to strengthen health promotion, disease prevention, and early disease detection. The strategy, called Re-engineering Primary Health Care (rPHC), aims to support a preventive and health-promoting community-based PHC model by using community-based outreach teams (known in South Africa as Ward-based Primary Health Care Outreach Teams, or WBPHCOTs). These teams provide health education, promote healthy behaviors, assess community health needs, manage minor health problems, and support linkages to health services and health facilities. Ward-based primary health care outreach teams are supervised by a professional nurse, who is the outreach team leader. In South Africa, the WBPHCOTs have been established, registered, and are reporting their activities in the District Health Information System (DHIS). This study explored and described the challenges faced by outreach team leaders in supporting and supervising the WBPHCOTs. Qualitative data were obtained through interviews conducted with the outreach team leaders at the sub-district level. Thematic analysis of the data was done. Findings revealed some challenges faced by team leaders in the day-to-day execution of their duties. Issues such as staff shortages, inadequate resources to carry out health promotion activities, and lack of co-operation from team members may undermine the capacity of team leaders to support and supervise the WBPHCOTs. Many community members are under the impression that the outreach team is responsible for bringing the clinic to the community, while the outreach teams do not carry any medication or treatment with them when doing home visits. The study further highlights issues around the challenges of WBPHCOTs at the household level. In conclusion, the WBPHCOTs are an important component of National Health Insurance (NHI), and in order for NHI to be optimally implemented, the issues raised in this research should be addressed with some urgency.
Keywords: community health worker, national health insurance, primary health care, ward-based primary health care outreach teams
Procedia PDF Downloads 142
502 Research on the Conservation Strategy of Territorial Landscape Based on Characteristics: The Case of Fujian, China
Authors: Tingting Huang, Sha Li, Geoffrey Griffiths, Martin Lukac, Jianning Zhu
Abstract:
Territorial landscapes have experienced a gradual loss of their typical characteristics during long-term human activities. In order to protect the integrity of regional landscapes, it is necessary to characterize, evaluate, and protect them in a graded manner. The study takes Fujian, China, as an example and classifies the landscape characters of the site at the regional, middle, and detailed scales. A multi-scale approach combining parametric and holistic approaches is used to classify and partition the landscape character types (LCTs) and landscape character areas (LCAs) at different scales, and a multi-element landscape assessment approach is adopted to explore conservation strategies for the landscape character. Firstly, multiple fields and multiple elements of geography, nature, and the humanities were selected as the basis of assessment according to the scales. Secondly, the study takes a parametric approach to the classification and partitioning of landscape character, applying Principal Component Analysis and two-stage cluster analysis (K-means and GMM) in MATLAB to obtain LCTs, combining these with the Canny edge detection algorithm to obtain landscape character contours, and correcting the LCTs and LCAs by field survey and manual identification. Finally, the study adopts the Landscape Sensitivity Assessment method to perform landscape character conservation analysis and formulates five strategies for different LCAs: conservation, enhancement, restoration, creation, and combination. This multi-scale identification approach can efficiently integrate multiple types of landscape character elements, reduce the difficulty of broad-scale operations in the process of landscape character conservation, and provide a basis for landscape character conservation strategies. Based on the natural background and the restoration of regional characteristics, the results of the landscape character assessment are scientific and objective and can provide a strong reference in regional- and national-scale territorial spatial planning.
Keywords: parameterization, multi-scale, landscape character identification, landscape character assessment
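The parametric workflow described above (PCA followed by two-stage K-means/GMM clustering) can be sketched as follows. The study reports using MATLAB; this Python version is for illustration only, and the input array, number of components, and cluster counts are assumptions.

```python
# Illustrative Python sketch of the parametric workflow (the study reports MATLAB):
# PCA on stacked landscape attributes, then two-stage clustering (K-means followed
# by a Gaussian mixture model) to derive landscape character types (LCTs).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))   # 5000 grid cells x 12 geographic/natural/cultural attributes (placeholder)

Z = StandardScaler().fit_transform(X)
scores = PCA(n_components=5).fit_transform(Z)   # keep the leading components

# Stage 1: K-means provides initial centres; Stage 2: GMM refines the assignments.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(scores)
gmm = GaussianMixture(n_components=8, means_init=km.cluster_centers_,
                      random_state=0).fit(scores)
lct_labels = gmm.predict(scores)                # landscape character type per cell
print(np.bincount(lct_labels))
```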
Procedia PDF Downloads 104
501 Study of Relation between P53 and Mir-146a Rs2910164 Polymorphism in Cervical Lesion
Authors: Hossein Rassi, Marjan Moradi Fard, Masoud Houshmand
Abstract:
Background: Cervical cancer is a multistep disease that is thought to result from an interaction between genetic background and environmental factors. Human papillomavirus (HPV) infection is the leading risk factor for cervical intraepithelial neoplasia (CIN) and cervical cancer. On the other hand, some p53 and miRNA polymorphisms may play an important role in carcinogenesis. This study attempts to clarify the relation between p53 genotypes and the miR-146a rs2910164 polymorphism in cervical lesions. Method: Forty-two archival samples with cervical lesions retrieved from Khatam Hospital and 40 samples from healthy persons used as a control group were studied. A simple and rapid method was used to detect the simultaneous amplification of the HPV consensus L1 region and HPV-16, -18, -11, -31, -33, and -35, along with the β-globin gene as an internal control. We used multiplex PCR for the detection of p53 and miR-146a rs2910164 genotypes in our lab. Finally, data analysis was performed using version 7 of the Epi Info(TM) 2012 software and the chi-square (χ²) test for trend. Results: Cervical lesions were collected from 42 patients with squamous metaplasia, cervical intraepithelial neoplasia, and cervical carcinoma. Successful DNA extraction was assessed by PCR amplification of the β-actin gene (99 bp). According to the results, the p53 GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of cervical lesions in the study population. In this study, we detected HPV 18 in 13 of the 42 cervical cancer samples. Conclusion: The connection between several SNP polymorphisms and human papillomavirus has been examined in only a few studies. The differences in the findings of these studies may result from differences in race, geographic situation, and lifestyle in each region. The present study provides preliminary evidence that the p53 GG genotype and the miR-146a rs2910164 CC genotype may affect cervical cancer risk in the study population, interacting synergistically with the HPV 18 genotype. Our results demonstrate that testing of p53 codon 72 polymorphism genotypes and miR-146a rs2910164 polymorphism genotypes in combination with HPV18 can serve as a major risk factor in the early identification of cervical cancers. Furthermore, the results indicate the possibility of primary prevention of cervical cancer by vaccination against HPV18 in Iran.
Keywords: cervical cancer, p53, miR-146a, rs2910164, polymorphism
Procedia PDF Downloads 470
500 Investigation of p53 and miR-146a rs2910164 Polymorphism in Cervical Lesion
Authors: Hossein Rassi, Marjan Moradi Fard, Masoud Houshmand
Abstract:
Background: Cervical cancer is a multistep disease that is thought to result from an interaction between genetic background and environmental factors. Human Papillomavirus (HPV) infection is the leading risk factor for Cervical Intraepithelial Neoplasia (CIN) and cervical cancer. On the other hand, some p53 and miRNA polymorphisms may play an important role in carcinogenesis. This study attempts to clarify the relation between p53 genotypes and the miR-146a rs2910164 polymorphism in cervical lesions. Method: Forty-two archival samples with cervical lesions retrieved from Khatam Hospital and 40 samples from healthy persons used as a control group were studied. A simple and rapid method was used to detect the simultaneous amplification of the HPV consensus L1 region and HPV-16, -18, -11, -31, -33, and -35, along with the β-globin gene as an internal control. We used multiplex PCR for the detection of p53 and miR-146a rs2910164 genotypes in our lab. Finally, data analysis was performed using version 7 of the Epi Info(TM) 2012 software and the chi-square (χ²) test for trend. Results: Cervical lesions were collected from 42 patients with squamous metaplasia, cervical intraepithelial neoplasia, and cervical carcinoma. Successful DNA extraction was assessed by PCR amplification of the β-actin gene (99 bp). According to the results, the p53 GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of cervical lesions in the study population. In this study, we detected HPV 18 in 13 of the 42 cervical cancer samples. Conclusion: The connection between several SNP polymorphisms and human papillomavirus has been examined in only a few studies. The differences in the findings of these studies may result from differences in race, geographic situation, and lifestyle in each region. The present study provides preliminary evidence that the p53 GG genotype and the miR-146a rs2910164 CC genotype may affect cervical cancer risk in the study population, interacting synergistically with the HPV 18 genotype. Our results demonstrate that testing of p53 codon 72 polymorphism genotypes and miR-146a rs2910164 polymorphism genotypes in combination with HPV18 can serve as a major risk factor in the early identification of cervical cancers. Furthermore, the results indicate the possibility of primary prevention of cervical cancer by vaccination against HPV18 in Iran.
Keywords: cervical cancer, miR-146a rs2910164 polymorphism, p53 polymorphism, intraepithelial, neoplasia, HPV
Procedia PDF Downloads 403
499 Lead Chalcogenide Quantum Dots for Use in Radiation Detectors
Authors: Tom Nakotte, Hongmei Luo
Abstract:
Lead chalcogenide-based (PbS, PbSe, and PbTe) quantum dots (QDs) were synthesized for the purpose of implementing them in radiation detectors. Pb-based materials have long been of interest for gamma and X-ray detection due to their high absorption cross-section and atomic number (Z). The emphasis of the studies was on exploring how to control charge carrier transport within thin films containing the QDs. The properties of the QDs themselves can be altered by changing the size, shape, composition, and surface chemistry of the dots, while the properties of carrier transport within QD films are affected by post-deposition treatment of the films. The QDs were synthesized using colloidal synthesis methods, and films were grown using multiple film coating techniques, such as spin coating and doctor blading. Current QD radiation detectors are based on the QDs acting as fluorophores in a scintillation detector. Here, the viability of using QDs in solid-state radiation detectors, for which the incident detectable radiation causes a direct electronic response within the QD film, is explored. Achieving high sensitivity and accurate energy quantification in QD radiation detectors requires large carrier mobilities and diffusion lengths in the QD films. Pb chalcogenide-based QDs were synthesized with both traditional oleic acid ligands and more weakly binding oleylamine ligands, allowing for in-solution ligand exchange and making the deposition of thick films in a single step possible. The PbS and PbSe QDs showed better air stability than PbTe. After precipitation, the QDs passivated with the shorter ligand are dispersed in 2,6-difluoropyridine, resulting in colloidal solutions with concentrations anywhere from 10-100 mg/mL for film processing applications. More concentrated colloidal solutions produce thicker films during spin coating, while an extremely concentrated solution (100 mg/mL) can be used to produce several-micrometer-thick films using doctor blading. Film thicknesses of micrometers or even millimeters are needed in radiation detectors for high-energy gamma rays, which are of interest for astrophysics or nuclear security, in order to provide sufficient stopping power.
Keywords: colloidal synthesis, lead chalcogenide, radiation detectors, quantum dots
Procedia PDF Downloads 134
498 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity, and aflatoxinogenic capacity of the strains, and the topography, soil, and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive, and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of aflatoxin on dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), the concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (principal component analysis), metric learning (Mahalanobis metric for clustering), and the k-nearest neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values.
Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction
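The three-stage pipeline described above can be sketched as follows in Python. This is not the authors' implementation: the data are randomly generated placeholders, and the learned Mahalanobis metric is stood in for by the inverse covariance of the reduced training data.

```python
# Illustrative sketch: PCA for dimensionality reduction, a Mahalanobis distance
# (derived here from the inverse covariance of the reduced data, as a stand-in for
# the learned metric), and k-nearest-neighbours prediction of aflatoxin level,
# scored with the Pearson correlation coefficient. Data are placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 18))                                # soil/topography features per orchard
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200)    # synthetic aflatoxin level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

scaler = StandardScaler().fit(X_tr)
pca = PCA(n_components=6).fit(scaler.transform(X_tr))
Z_tr = pca.transform(scaler.transform(X_tr))
Z_te = pca.transform(scaler.transform(X_te))

VI = np.linalg.inv(np.cov(Z_tr, rowvar=False))   # inverse covariance for the Mahalanobis metric
knn = KNeighborsRegressor(n_neighbors=5, algorithm="brute",
                          metric="mahalanobis", metric_params={"VI": VI}).fit(Z_tr, y_tr)
r, _ = pearsonr(y_te, knn.predict(Z_te))
print(f"Pearson r between observed and predicted: {r:.2f}")
```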
Procedia PDF Downloads 187
497 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture
Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz
Abstract:
Remote sensing has potential application in assessing and monitoring plants' biophysical properties using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential applications of open-data-source satellite images (Sentinel 2 and Landsat 9) in estimating the biophysical properties of the wheat crop on a study farm located in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period of December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of plant leaves. In addition, a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot. The different physicochemical properties of the soil (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral response and the extracted vegetation indices using the five sampling points. A scatter plot with the coefficient of determination (R²) and root mean square error (RMSE), and a correlation (r) matrix prepared using the corr and heatmap functions in Python, were used for comparing the performance of the Sentinel 2 and Landsat 9 VIs against the drone and the SpectraVue 710s spectrophotometer. The soil analysis revealed that the study farm plot is slightly alkaline (8.4 to 8.52). The soil texture of the study farm is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plant. Both the scatter plot and the correlation matrix showed that the Sentinel 2 vegetation indices have a relatively better correlation with the vegetation indices of the Buteo drone compared to Landsat 9. The Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Generally, Sentinel 2 showed better performance than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve the quality of the current study.
Keywords: landsat 9, leaf spectrometer, sentinel 2, UAV
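The cross-sensor comparison step described above can be illustrated with a short Python sketch: a vegetation index (NDVI is used here as one of the common indices) is computed per sensor and per sampling point, and agreement is inspected with a pandas correlation matrix rendered as a seaborn heatmap. The reflectance values below are hypothetical placeholders, not the study's measurements.

```python
# Illustrative sketch: NDVI from red and near-infrared reflectance per sensor,
# then a pairwise correlation matrix (pandas .corr()) shown as a heatmap (seaborn).
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

def ndvi(nir, red):
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

df = pd.DataFrame({  # five sampling points; all reflectance values are hypothetical
    "NDVI_Sentinel2": ndvi([0.42, 0.48, 0.51, 0.55, 0.60], [0.08, 0.07, 0.06, 0.05, 0.05]),
    "NDVI_Landsat9":  ndvi([0.40, 0.45, 0.49, 0.52, 0.58], [0.09, 0.08, 0.07, 0.06, 0.05]),
    "NDVI_UAV":       ndvi([0.44, 0.50, 0.53, 0.57, 0.62], [0.07, 0.06, 0.06, 0.05, 0.04]),
    "NDVI_LeafSpec":  ndvi([0.41, 0.47, 0.50, 0.54, 0.59], [0.08, 0.08, 0.07, 0.06, 0.05]),
})

corr = df.corr()   # pairwise Pearson correlations between sensors
sns.heatmap(corr, annot=True, vmin=0, vmax=1, cmap="viridis")
plt.title("NDVI agreement between sensors (illustrative data)")
plt.tight_layout()
plt.show()
```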
Procedia PDF Downloads 110
496 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items
Authors: Wen-Chung Wang, Xue-Lan Qiu
Abstract:
Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties; in this model, the log-odds of preferring statement A to statement B are defined as a competition between two parts: the sum of the person's latent trait on the dimension that statement A measures and statement A's utility, and the sum of the person's latent trait on the dimension that statement B measures and statement B's utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., "my life is empty") many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items would be frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of MCAT. The second control method was found to outperform the first control method in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, which was defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful and that there was no loss in measurement proficiency when the control methods were implemented; among the four methods, the last method performed the best.
Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison
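The verbal definition of the RIM given in this abstract can be restated compactly in an equation. The notation below is ours, introduced only to mirror the prose, not the authors' own symbols.

```latex
% For an MPC item pairing statement A (measuring dimension a, with utility \delta_A)
% and statement B (measuring dimension b, with utility \delta_B), the RIM defines
% the log-odds of person n preferring A over B as the difference of the two sums:
\[
  \log \frac{P(\text{A preferred})}{P(\text{B preferred})}
  \;=\; \bigl(\theta_{na} + \delta_{A}\bigr) \;-\; \bigl(\theta_{nb} + \delta_{B}\bigr),
\]
% where \theta_{na} and \theta_{nb} are person n's latent traits on the two dimensions.
```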
Procedia PDF Downloads 248
495 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms
Authors: Seulki Lee, Seoung Bum Kim
Abstract:
Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart. The main goal of a control chart is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling's T² chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern complicated manufacturing systems, appropriate control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven their improved performance in nonnormal situations compared to that of the T² chart. Besides nonnormality, time-varying operations are also quite common in real manufacturing fields because of various factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drifting. However, traditional control charts cannot accommodate future condition changes of the process because they are formulated based on the data information recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart that is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations. Moreover, we defined an updating region for an efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process
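The time-adaptive weighting idea described above can be sketched as follows. This is not the authors' algorithm: sklearn's OneClassSVM with an RBF kernel is used as a close relative of SVDD, and the window size, decay rate, and nu value are all assumptions for illustration.

```python
# Illustrative sketch of a time-weighted one-class boundary: refit on a sliding
# window of in-control data with exponentially decaying sample weights so that
# recent observations dominate the model, mimicking a time-adaptive SVDD chart.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_time_weighted_boundary(window, decay=0.97, nu=0.05, gamma="scale"):
    """window: array of the most recent in-control observations (rows = samples)."""
    n = len(window)
    weights = decay ** np.arange(n - 1, -1, -1)   # newest sample gets weight 1.0
    model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
    model.fit(window, sample_weight=weights)
    return model

rng = np.random.default_rng(0)
drift = np.linspace(0.0, 1.5, 300).reshape(-1, 1)       # slow process drift
window = rng.normal(size=(300, 4)) + drift              # recent in-control data (simulated)

model = fit_time_weighted_boundary(window)
new_batch = rng.normal(loc=[3.0, 3.0, 3.0, 3.0], scale=1.0, size=(5, 4))
print(model.decision_function(new_batch))   # negative values signal out-of-control points
```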
Procedia PDF Downloads 303
494 Mutations in rpoB, katG and inhA Genes: The Association with Resistance to Rifampicin and Isoniazid in Egyptian Mycobacterium tuberculosis Clinical Isolates
Authors: Ayman K. El Essawy, Amal M. Hosny, Hala M. Abu Shady
Abstract:
The rapid detection of TB and drug resistance both optimizes treatment and improves outcomes. In the current study, respiratory specimens were collected from 155 patients. Conventional susceptibility testing and MIC determination were performed for rifampicin (RIF) and isoniazid (INH). The Genotype MTBDRplus assay, which is a molecular genetic assay based on the DNA-STRIP technology, and specific gene sequencing with primers for the rpoB, katG, and mab-inhA genes were used to detect mutations associated with resistance to rifampicin and isoniazid. In comparison to other categories, most rifampicin-resistant (61.5%) and isoniazid-resistant (47.1%) isolates were from patients who had relapsed during treatment. The genotypic profile (using the Genotype MTBDRplus assay) of multi-drug resistant (MDR) isolates showed a missing katG wild-type 1 (WT1) band and the appearance of the katG MUT2 mutation band. For isoniazid mono-resistant isolates, 80% showed katG MUT1, 20% showed katG MUT1 and inhA MUT1, and 20% showed only inhA MUT1. Accordingly, 100% of isoniazid-resistant strains were detected by this assay. Of 17 resistant strains, 16 had katG mutation bands, indicating high-level resistance to isoniazid. The assay could clearly detect rifampicin resistance among the 66.7% of MDR isolates that showed the rpoB MUT3 mutation band, while 33.3% of them were considered unknown. One rifampicin mono-resistant isolate did not show rifampicin mutation bands by the Genotype MTBDRplus assay, but it showed an unexpected mutation in codon 531 of rpoB by DNA sequence analysis. Rifampicin resistance in this strain could be associated with a mutation in codon 531 of rpoB (based on molecular sequencing), and the Genotype MTBDRplus assay could not detect the associated mutation. If the results of the Genotype MTBDRplus assay and sequencing are combined, this strain shows a hetero-resistance pattern. Gene sequencing of eight selected isolates, previously tested by the Genotype MTBDRplus assay, could detect resistance mutations mainly in codon 315 (katG gene) and position -15 in the inhA promoter region for isoniazid resistance, and in codon 531 (rpoB gene) for rifampicin resistance. Genotyping techniques allow distinguishing between recurrent cases of reinfection or reactivation and support epidemiological studies.
Keywords: M. tuberculosis, rpoB, KatG, inhA, genotype MTBDRplus
Procedia PDF Downloads 169
493 The Role of Parental Stress and Emotion Regulation in Responding to Children’s Expression of Negative Emotion
Authors: Lizel Bertie, Kim Johnston
Abstract:
Parental emotion regulation plays a central role in the socialisation of emotion, especially when teaching young children to cope with negative emotions. Despite evidence showing that non-supportive parental responses to children’s expression of negative emotions have implications for the social and emotional development of the child, few studies have investigated risk factors which impact parental emotion socialisation processes. The current study aimed to explore the extent to which parental stress contributes to both difficulties in parental emotion regulation and non-supportive parental responses to children’s expression of negative emotions. In addition, the study examined whether parental use of expressive suppression as an emotion regulation strategy facilitates the influence of parental stress on non-supportive responses by testing the relations in a mediation model. A sample of 140 Australian adults, who identified as parents with children aged 5 to 10 years, completed an online questionnaire. The measures explored recent symptoms of depression, anxiety, and stress, the use of expressive suppression as an emotion regulation strategy, and hypothetical parental responses to scenarios related to children’s expression of negative emotions. A mediated regression indicated that parents who reported higher levels of stress also reported higher levels of expressive suppression as an emotion regulation strategy and increased use of non-supportive responses in relation to young children’s expression of negative emotions. These findings suggest that parents who experience heightened symptoms of stress are more likely both to suppress their emotions in parent-child interaction and to engage in non-supportive responses. Furthermore, higher use of expressive suppression strongly predicted the use of non-supportive responses, despite the presence of parental stress. Contrary to expectation, no indirect effect of stress on non-supportive responses was observed via expressive suppression. The findings from the study suggest that parental stress may become a more salient manifestation of psychological distress in a sub-clinical population of parents while contributing to impaired parental responses. As such, the study offers support for targeting overarching factors such as difficulties in parental emotion regulation and stress management, not only as an intervention for parental psychological distress but also for the detection and prevention of maladaptive parenting practices.
Keywords: emotion regulation, emotion socialisation, expressive suppression, non-supportive responses, parental stress
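The mediation logic tested above (path a: stress to suppression; path b: suppression to non-supportive responses, controlling for stress; indirect effect a*b) can be sketched as follows. The data are simulated and the variable names simply mirror the constructs in the abstract; this is not the study's analysis script.

```python
# Illustrative sketch (simulated data): a simple mediation analysis via two OLS
# regressions and a Sobel test of the indirect effect a*b.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 140
stress = rng.normal(size=n)
suppression = 0.5 * stress + rng.normal(scale=0.8, size=n)
nonsupport = 0.3 * stress + 0.6 * suppression + rng.normal(scale=0.7, size=n)
df = pd.DataFrame({"stress": stress, "suppression": suppression,
                   "nonsupport": nonsupport})

m1 = sm.OLS(df["suppression"], sm.add_constant(df[["stress"]])).fit()                 # path a
m2 = sm.OLS(df["nonsupport"], sm.add_constant(df[["stress", "suppression"]])).fit()   # paths b and c'

a, sa = m1.params["stress"], m1.bse["stress"]
b, sb = m2.params["suppression"], m2.bse["suppression"]
indirect = a * b
sobel_z = indirect / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
print(f"indirect effect a*b = {indirect:.3f}, Sobel z = {sobel_z:.2f}")
```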
Procedia PDF Downloads 163
492 Profiling of Apoptotic Protein Expressions after Trabectedin Treatment in Human Prostate Cancer Cell Line PC-3 by Protein Array Technology
Authors: Harika Atmaca, Emir Bozkurt, Latife Merve Oktay, Selim Uzunoglu, Ruchan Uslu, Burçak Karaca
Abstract:
Microarrays have been developed for highly parallel enzyme-linked immunosorbent assay (ELISA) applications. The most common protein arrays are produced using multiple monoclonal antibodies, since these are robust molecules that can be easily handled and immobilized by standard procedures without loss of activity. Protein expression profiling with protein array technology allows simultaneous analysis of the expression pattern of a large number of proteins. Trabectedin, a tetrahydroisoquinoline alkaloid derived from the Caribbean tunicate Ecteinascidia turbinata, has been shown to have antitumor effects. Here, we used a novel proteomic approach to explore the mechanism of action of trabectedin in the prostate cancer cell line PC-3 by apoptosis antibody microarray. The XTT cell proliferation kit and the Cell Death Detection ELISA Plus Kit (Roche) were used for measuring cytotoxicity and apoptosis. The Human Apoptosis Protein Array (R&D Systems), which consists of 35 apoptosis-related proteins, was used to assess the omic protein expression pattern. Trabectedin induced cytotoxicity and apoptosis in prostate cancer cells in a time- and concentration-dependent manner. The expression levels of the death receptor pathway molecules TRAIL-R1/DR4, TRAIL-R2/DR5, TNF-R1/TNFRSF1A, and FADD were significantly increased by 4.0-, 21.0-, 4.2-, and 11.5-fold by trabectedin treatment in PC-3 cells. Moreover, the expression of the mitochondrial pathway-related pro-apoptotic proteins Bax, Bad, Cytochrome c, and Cleaved Caspase-3 was induced by 2.68-, 2.07-, 2.8-, and 4.5-fold, and the expression levels of the anti-apoptotic proteins Bcl-2 and Bcl-XL were reduced by 3.5- and 5.2-fold in PC-3 cells. The proteomic (antibody microarray) analysis suggests that the mechanism of action of trabectedin may be exerted via the induction of both the intrinsic and extrinsic apoptotic pathways. The antibody microarray platform can be utilised to explore the molecular mechanism of action of novel anticancer agents. Keywords: trabectedin, prostate cancer, omic protein expression profile, apoptosis
Procedia PDF Downloads 446491 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech
Authors: Monica Gonzalez Machorro
Abstract:
Dementia is hard to diagnose because of the lack of early physical symptoms. Early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech. BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are either based on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream task of early AD and MCI detection. To do so, a subset of the thoroughly studied Pitt Corpus is customized. The subset is balanced for class, age, and gender. Data processing also involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest on 20 acoustic features extracted with the librosa library in Python. These are: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations from audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5, and the batch size defined is 1. Experiments show that our proposed method reaches a 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model is able to capture acoustic cues of AD and MCI. Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment
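A minimal Python sketch of the acoustic baseline described above (librosa features plus a Random Forest) follows; the frame-wise mean aggregation, the MFCC count, and the file-handling details are assumptions for illustration, not specifics taken from the study.

```python
# Sketch of the librosa + Random Forest baseline: per-segment acoustic features
# summarised as frame-wise means (assumed aggregation) and fed to a classifier.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def segment_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr, duration=10.0)      # 10-second segment
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # 13 MFCCs (assumed count)
    other = [
        librosa.feature.zero_crossing_rate(y).mean(),
        librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(),
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        librosa.feature.rms(y=y).mean(),                  # root mean square energy
        np.abs(librosa.stft(y)).mean(),                   # coarse STFT magnitude summary
    ]
    return np.concatenate([mfcc.mean(axis=1), np.array(other)])

# Hypothetical usage with lists of segment paths and labels from the balanced subset:
# X = np.vstack([segment_features(p) for p in segment_paths])
# clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```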
Procedia PDF Downloads 129490 Comparison of Risk Analysis Methodologies Through the Consequences Identification in Chemical Accidents Associated with Dangerous Flammable Goods Storage
Authors: Daniel Alfonso Reséndiz-García, Luis Antonio García-Villanueva
Abstract:
As a result of the high industrial activity that arises from the effort to satisfy society's demand for products and services, several chemical accidents have occurred, causing serious damage across different sectors: human, economic, infrastructure, and environmental losses. Historically, the study of these chemical accidents has shown that the causes are mainly human errors (inexperienced personnel, negligence, lack of maintenance, and deficient risk analysis). Industries aim to increase production and reduce costs. However, it should be kept in mind that the cost of risk studies and of implementing barriers and safety systems is much lower than paying for the damage that could occur in the event of an accident, without forgetting that some things cannot be replaced, such as human lives. Therefore, it is of utmost importance to carry out risk studies in all industries, as they provide information for prevention and planning. The aim of this study is to compare risk methodologies by identifying the consequences of accidents related to the storage of flammable dangerous goods, to support decision making and emergency response. The methodologies considered in this study are qualitative and quantitative risk analysis and consequence analysis; the latter uses modeling software, which provides the affected radius and the possible scope and magnitude of the damage. Risk analysis is used to identify possible scenarios of chemical accidents in the storage of flammable substances. Once the possible risk scenarios have been identified, the characteristics of the substances, their storage, and the atmospheric conditions are entered into the software. The results provide information that allows the implementation of prevention, detection, control, and response elements for emergencies, thus providing the tools needed to avoid accidents and, if they do occur, to significantly reduce the magnitude of the damage. This study highlights the importance of risk studies that apply the tools best suited to each case, and it demonstrates the importance of knowing the risk exposure of industrial activities for better prevention, planning, and emergency response. Keywords: chemical accidents, emergency response, flammable substances, risk analysis, modeling
Procedia PDF Downloads 98489 A LED Warning Vest as Safety Smart Textile and Active Cooperation in a Working Group for Building a Normative Standard
Authors: Werner Grommes
Abstract:
The Institute of Occupational Safety and Health participates in a working group building a normative standard for illuminated warning vests and carried out numerous experiments and measurements as groundwork for this cooperation. Intelligent car headlamps are able to suppress the light returned by conventional warning vests with retro-reflective stripes, treating it as disturbing light. Illuminated warning vests are therefore required for occupational safety. However, they must not pose any danger to the wearer or other persons. Here, the risks of the batteries (lithium types), the maximum brightness (glare), and possible interference radiation from the electronics affecting implant wearers must be taken into account. All-around visibility, as well as the required range, plays an important role here. For the study, numerous luminance measurements of commercially available LED and electroluminescent warning vests were carried out, and their electromagnetic interference fields and aspects of electrical safety were also measured. The results of this study showed that the LED lighting is far too bright overall and causes strong glare. The integrated controls with pulse modulation and switching regulators generate electromagnetic interference fields. Rechargeable lithium batteries can explode depending on the temperature range. Electroluminescence brings even more hazards. A test method was developed for evaluating visibility at distances of 50, 100, and 150 m, including interviews with test persons. A measuring method was developed for detecting glare effects at close range and assigning the maximum permissible luminance. The electromagnetic interference fields were tested in the time and frequency domains. A risk and hazard analysis was prepared for the use of lithium batteries. The range of values for luminance and the risk analysis for lithium batteries were discussed in the standards working group and will be integrated into the standard. This paper gives a brief overview of illuminated warning vests, taking into account the risks and hazards for the vest wearer and others. Keywords: illuminated warning vest, optical tests and measurements, risks, hazards, optical glare effects, LED, E-light, electric luminescent
Procedia PDF Downloads 116488 Disentangling the Sources and Context of Daily Work Stress: Study Protocol of a Comprehensive Real-Time Modelling Study Using Portable Devices
Authors: Larissa Bolliger, Junoš Lukan, Mitja Lustrek, Dirk De Bacquer, Els Clays
Abstract:
Introduction and Aim: Chronic workplace stress and its health-related consequences, such as mental and cardiovascular diseases, have been widely investigated. This project focuses on the sources and context of psychosocial daily workplace stress in a real-world setting. The main objective is to analyze and model real-time relationships between (1) psychosocial stress experiences within the natural work environment, (2) micro-level work activities and events, and (3) physiological signals and behaviors in office workers. Methods: An Ecological Momentary Assessment (EMA) protocol has been developed, partly building on machine learning techniques. Empatica® wristbands will be used for real-life detection of stress from physiological signals; micro-level activities and events at work will be based on smartphone registrations, further processed according to an automated computer algorithm. A field study including 100 office-based workers with high-level problem-solving tasks, such as managers and researchers, will be implemented in Slovenia and Belgium (50 in each country). Data mining and state-of-the-art statistical methods – mainly multilevel statistical modelling for repeated data – will be used. Expected Results and Impact: The project findings will provide novel contributions to the field of occupational health research. While traditional assessments provide information about the global perceived state of chronic stress exposure, the EMA approach is expected to bring new insights about daily fluctuating work stress experiences, especially micro-level events and activities at work that induce acute physiological stress responses. The project is therefore likely to generate further evidence on relevant stressors in a real-time working environment and hence make it possible to advise on workplace procedures and policies for reducing stress. Keywords: ecological momentary assessment, real-time, stress, work
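As a sketch of the kind of multilevel statistical model for repeated EMA data mentioned above, the following Python snippet fits a random-intercept model of momentary stress nested within participants; the data file, column names, and predictors are hypothetical placeholders, not the project's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format EMA data: one row per prompt; columns are hypothetical placeholders
# (participant id, momentary stress rating, work-event indicator, mean heart rate).
ema = pd.read_csv("ema_observations.csv")

# Random-intercept model: momentary stress predicted by level-1 covariates,
# with repeated observations nested within participants.
model = smf.mixedlm("stress ~ work_event + hr_mean", data=ema, groups=ema["participant"])
result = model.fit()
print(result.summary())
```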
Procedia PDF Downloads 166487 Territorial Analysis of the Public Transport Supply: Case Study of Recife City
Authors: Cláudia Alcoforado, Anabela Ribeiro
Abstract:
This paper is part of an ongoing PhD thesis. It seeks to develop a model to identify the spatial failures of the public transportation supply and, in constructing the model, to detect the social needs arising from transport disadvantage. The case study is carried out for the Brazilian city of Recife. Currently, Recife has a population density of 7,039.64 inhabitants per km², and only 46.9% of urban households are located on public roads with adequate urbanization. Compounding this reality, the poorest population tends to occupy the peripheries, a pattern consolidated in Brazil and Latin America; this burdens family income, since greater distances must be covered for basic activities and transport costs rise accordingly. As a result, supplying public transportation to locations with low demand or lacking urban infrastructure has had major impacts. The model under construction uses methods such as Currie’s Gap Assessment, associated with London’s Public Transport Access Level, and the Public Transport Accessibility Index developed by Saghapour. This paper presents the current stage of the thesis, with the spatial/need gaps of Recife’s neighborhoods already detected, making use of a geographic information system. Gaps are determined from transport supply indices that take walking catchment areas into account, while the corresponding demand index is calculated from indicators that reflect social needs. By using the smallest Brazilian geographical unit, the census sector, and including population density in the study areas, the model should produce more consolidated results. Based on the results achieved, transportation disadvantage will be analyzed as a factor of social exclusion in the study area. The results obtained so far already indicate a strong concentration of public transportation supply in higher-income areas, leading to the understanding that the most disadvantaged population migrates to those neighborhoods in search of employment. Keywords: gap assessment, public transport supply, social exclusion, spatial gaps
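As a rough illustration of the gap-assessment logic described above, the following Python sketch computes a normalized supply index, a social-need index, and their difference per census sector; the file name, column names, and the specific indicators are assumptions for illustration only, not the variables used in the thesis.

```python
import pandas as pd

sectors = pd.read_csv("recife_census_sectors.csv")   # hypothetical input table

def normalize(s):
    # Rescale an indicator to the 0-1 range.
    return (s - s.min()) / (s.max() - s.min())

# Supply: e.g. stops reachable within a walking catchment, weighted by service frequency.
sectors["supply_index"] = normalize(
    sectors["stops_in_catchment"] * sectors["mean_service_frequency"]
)
# Need: e.g. low income, low car ownership, and high population density as social-need indicators.
sectors["need_index"] = normalize(
    normalize(1 / sectors["household_income"])
    + normalize(1 - sectors["car_ownership_rate"])
    + normalize(sectors["population_density"])
)
# Positive gap: need outstrips supply, flagging a candidate transport-disadvantaged sector.
sectors["gap"] = sectors["need_index"] - sectors["supply_index"]
print(sectors.nlargest(10, "gap")[["sector_id", "gap"]])
```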
Procedia PDF Downloads 185486 Research of Stalled Operational Modes of Axial-Flow Compressor for Diagnostics of Pre-Surge State
Authors: F. Mohammadsadeghi
Abstract:
Relevance of the research: axial compressors are used both in aircraft engines and in ground-based gas turbine engines. The compressor is considered one of the main gas turbine engine units and defines the absolute and relative performance indicators of the engine as a whole. Compressor failure often leads to drastic consequences; therefore, safe (stable) operation must be maintained when using an axial compressor. Currently, there is a tendency toward increased unit power, productivity, circumferential velocity, and compression ratio of axial compressors in gas turbine engines for aircraft and ground-based applications, whereas the metal consumption of their structure tends to fall. This increases dynamic loads as well as the danger of damage to highly loaded compressor or engine structural elements due to transient processes. In the operating practice of aeronautical engineering and of ground units with gas turbine drives, loss of operational stability is one of the relatively frequent causes of gas turbine engine failure and can lead to emergency situations. Surge is considered an absolute loss of stability and is one of the most dangerous and most frequently occurring types of instability. However detailed the research on this phenomenon has been, the development of measures for preventing surge before it occurs remains relevant. This is why the study of transient processes in axial compressors is necessary in order to provide efficient, stable, and secure operation. The paper addresses the problem of improving the automatic control system by integrating anti-surge algorithms for the axial compressor of an aircraft gas turbine engine. The paper considers the dynamic exhaustion of the gas-dynamic stability of a compressor stage, results of numerical simulation of the airflow through the airfoil at design and stall modes, and experimental research to establish criteria that identify the compressor state for pre-surge mode detection. The authors formulated basic approaches for developing surge prevention systems, i.e., forming algorithms that detect surge origination and systems that implement the proposed algorithms. Keywords: axial compressor, rotating stall, surge, unstable operation of gas turbine engine
Procedia PDF Downloads 413485 Design of a Backlight Hyperspectral Imaging System for Enhancing Image Quality in Artificial Vision Food Packaging Online Inspections
Authors: Ferran Paulí Pla, Pere Palacín Farré, Albert Fornells Herrera, Pol Toldrà Fernández
Abstract:
Poor image acquisition is limiting the promising growth of industrial vision in food control. In recent years, the food industry has witnessed a significant increase in the implementation of automation in quality control through artificial vision, a trend that continues to grow. During the packaging process, some defects may appear, compromising the proper sealing of the products and diminishing their shelf life, sanitary conditions and overall properties. While failure to detect a defective product leads to major losses, food producers also aim to minimize over-rejection to avoid unnecessary waste. Thus, accuracy in the evaluation of the products is crucial, and, given the large production volumes, even small improvements have a significant impact. Recently, efforts have been focused on maximizing the performance of classification neural networks; nevertheless, their performance is limited by the quality of the input data. Monochrome linear backlight systems are most commonly used for online inspections of food packaging thermo-sealing zones. These simple acquisition systems fit the high cadence of the production lines imposed by the market demand. Nevertheless, they provide a limited amount of data, which negatively impacts classification algorithm training. A desired situation would be one where data quality is maximized in terms of obtaining the key information to detect defects while maintaining a fast working pace. This work presents a backlight hyperspectral imaging system designed and implemented to replicate an industrial environment, in order to better understand the relationship between visual data quality and spectral illumination range for a variety of packed food products. Furthermore, results led to the identification of advantageous spectral bands that significantly enhance image quality, providing clearer detection of defects. Keywords: artificial vision, food packaging, hyperspectral imaging, image acquisition, quality control
Procedia PDF Downloads 26484 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf
Authors: Abderrazak Bannari, Ghadeer Kadhem
Abstract:
The objectives of this paper are the validation and evaluation of the performance of MBES-CARIS BASE surface data for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline, and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00’ N, 50° 33’ E). To achieve our objectives, bathymetric attributed grid files (X, Y, and depth) generated from the coverage of ship-track MBES data with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). They were then brought into ArcGIS and converted into a raster format in five steps: (1) exportation of the GEBCO BASE surface data to an ASCII file; (2) conversion of the ASCII file to a point shapefile; (3) extraction of the points covering the water boundary of the Kingdom of Bahrain; (4) multiplication of the depth values by -1 to obtain negative values; and (5) interpolation with the simple kriging method in the ArcMap environment to generate a new raster bathymetric grid surface of 30 x 30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2,200 bathymetric points were extracted from a medium-scale nautical map (1:100,000) considering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced, and overlaid on the raster bathymetric grid surface generated from the MBES-CARIS data (step 5 above), and then homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and the derived MBES-CARIS depths when only the shallow areas with depths of less than 10 m are considered (about 800 validation points). When only deeper areas (> 10 m) are considered, R² equals 0.73 and the RMSE equals ± 2.43 m, while for the totality of the 2,200 validation points, including all depths, the correlation is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). Certainly, this significant variation can be caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and a rough seafloor probably affect the acquired raw MBES data, and the interpolation of missing values between MBES acquisition swath lines (ship-track sounding data) may not reflect the true depths of these missed areas. However, globally, the MBES-CARIS results are very appropriate for bathymetric mapping of shallow water areas. Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water
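The validation step described above can be illustrated with a short Python sketch that compares chart depths with the kriged MBES-CARIS surface at homologous points; the file and column names are hypothetical, and the depth sign convention (negative values below datum) follows the processing steps in the abstract.

```python
# Minimal validation sketch (hypothetical file/column names): R² and RMSE between
# nautical-chart depths and kriged MBES-CARIS depths at homologous points.
import numpy as np
import pandas as pd

pts = pd.read_csv("validation_points.csv")        # columns: chart_depth, mbes_depth (m, negative)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def r2(a, b):
    return float(np.corrcoef(a, b)[0, 1] ** 2)    # squared Pearson correlation

subsets = {
    "shallow (< 10 m)": pts[pts["chart_depth"] > -10],
    "deep (> 10 m)":    pts[pts["chart_depth"] <= -10],
    "all depths":       pts,
}
for name, s in subsets.items():
    print(f"{name}: R2 = {r2(s['chart_depth'], s['mbes_depth']):.2f}, "
          f"RMSE = {rmse(s['chart_depth'], s['mbes_depth']):.2f} m")
```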
Procedia PDF Downloads 383