Search results for: Software as a Service (SaaS)
438 Use of Curcumin in Radiochemotherapy-Induced Oral Mucositis Patients: A Controlled Trial Study
Authors: Shivayogi Charantimath
Abstract:
Radiotherapy and chemotherapy are effective for treating malignancies but are associated with side effects like oral mucositis. Chlorhexidine gluconate is one of the most commonly used mouthwashes for preventing the signs and symptoms of mucositis. Evidence shows that chlorhexidine gluconate has drawbacks in terms of bacterial colonization, bad breath, and poorer healing properties. Thus, it is essential to find a suitable alternative therapy that is more effective with minimal side effects. Curcumin, an extract of turmeric, is increasingly being studied for its wide-ranging therapeutic properties, such as antioxidant, analgesic, anti-inflammatory, antitumor, antimicrobial, antiseptic, chemosensitizing, and radiosensitizing properties. The present study was conducted to evaluate the efficacy and safety of topical curcumin gel on radiochemotherapy-induced oral mucositis in cancer patients and to compare it with chlorhexidine. The study was conducted in K.L.E. Society's Belgaum cancer hospital. Forty oral cancer patients with oral mucositis undergoing radiochemotherapy were selected and randomly divided into two groups of 20 each. The study group A [20 patients] was advised Curenext gel for 2 weeks. The control group B [20 patients] was advised chlorhexidine gel for 2 weeks. The NRS, Oral Mucositis Assessment Scale, and WHO mucositis scale were used to determine the grading. The results were analyzed using SPSS 20 software. The comparison of grading between groups was done by applying the Mann-Whitney U test, and the within-group comparison was calculated by the Wilcoxon matched-pairs test. The NRS scores from baseline to the 1st- and 2nd-week follow-ups showed a significant difference in both groups. The percentage change in erythema in group A was 63.3% for the first week and 100.0% for the second week, with p = 0.0003; the corresponding changes in group B were 34.6% for the first week and 57.7% for the second week. The within-group comparisons were significant, with p-values of 0.0048 and 0.0006 for group A and group B, respectively. The ulcer size score showed a 35.5% change [P = 0.0010] in group A for the 1st week, and the 2nd week showed total reduction, i.e., 103.4% [P = 0.0001]. Group B showed a 24.7% change from baseline to the 1st week and 53.6% at the 2nd-week follow-up. The within-group comparison with the Wilcoxon matched-pairs test was significant, with p = 0.0001 in group A. The WHO mucositis score in group A showed a 29.6% [p = 0.0004] change in the first week and a 75.0% [p = 0.0180] change in the second week, which is highly significant in comparison to group B; group B showed minimal changes, i.e., 20.1% in the 1st week and 33.3% in the 2nd week. The Wilcoxon p-value was significant at 0.0025 in group A for the 1st-week follow-up and 0.000 for the 2nd-week follow-up. Curcumin gel appears to be an effective and safer alternative to chlorhexidine gel in the treatment of oral mucositis.
Keywords: curcumin, chemotherapy, mucositis, radiotherapy
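The two significance tests named above can be reproduced with standard statistical libraries. Below is a minimal sketch in Python with SciPy, using invented NRS-style scores rather than the study's data:

```python
# Minimal sketch of the two nonparametric tests used in the study,
# on hypothetical NRS scores (the real data live in the authors' SPSS file).
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical 2nd-week NRS scores for 5 patients per arm
group_a_week2 = [2, 1, 0, 1, 2]      # curcumin arm
group_b_week2 = [4, 3, 5, 4, 3]      # chlorhexidine arm
group_a_baseline = [7, 6, 8, 7, 6]   # same curcumin patients at baseline

# Between-group comparison at a given visit (independent samples)
u_stat, p_between = mannwhitneyu(group_a_week2, group_b_week2)

# Within-group baseline vs. follow-up comparison (matched pairs)
w_stat, p_within = wilcoxon(group_a_baseline, group_a_week2)

print(f"Mann-Whitney U p={p_between:.4f}, Wilcoxon matched-pairs p={p_within:.4f}")
```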
Procedia PDF Downloads 353
437 The Achievements and Challenges of Physics Teachers When Implementing Problem-Based Learning: An Exploratory Study Applied to Rural High Schools
Authors: Osman Ali, Jeanne Kriek
Abstract:
Introduction: The current instructional approach, entrenched in memorization, does not support conceptual understanding in science. Instructional approaches that encourage research, investigation, and experimentation, which depict how scientists work, should be encouraged. One such teaching strategy is problem-based learning (PBL). PBL has many advantages: enhanced self-directed learning and improved problem-solving and critical thinking skills. However, despite many advantages, PBL has challenges; research has confirmed that it is time-consuming and that formulating ill-structured questions is difficult. Professional development interventions are needed for in-service educators to adopt the PBL strategy. The purposively selected educators had to implement PBL in their classrooms after the intervention to develop their practice and then reflect on the implementation, indicating their achievements and challenges. This study differs from previous studies in that the rural educators implemented PBL in their own classrooms and reflected on their experiences, beliefs, and attitudes regarding PBL. Theoretical Framework: The study was grounded in Vygotskian sociocultural theory. According to Vygotsky, a child's cognitive development is sustained by the interaction between the child and more able peers in the immediate environment. The theory suggests that social interactions in small groups create an opportunity for learners to form concepts and skills better than working individually; PBL emphasizes learning in small groups. Research Methodology: An exploratory case study was employed, because the study did not seek conclusive evidence. Non-probability purposive sampling was adopted to choose eight schools from 89 rural public schools. In each school, two educators teaching physical sciences in grades 10 and 11 were approached (N = 16). The research instruments were questionnaires, interviews, and a lesson observation protocol. Two open-ended questionnaires were developed, administered before and after the intervention, and analyzed thematically; three themes were identified. The semi-structured interview responses were transcribed and coded into three themes. Subsequently, the Reformed Teaching Observation Protocol (RTOP) was adopted for lesson observation and analyzed using five constructs. Results: Evidence from analyzing the questionnaires before and after the intervention shows that during implementation participants knew better what was required to develop an ill-structured problem. Furthermore, indications from the interviews are that participants had positive views about the PBL strategy. They stated that they act only as facilitators, and that learners' problem-solving and critical thinking skills are enhanced. They suggested a change in curriculum to adopt the PBL strategy. However, most participants may not continue to apply the PBL strategy, stating that it is time-consuming and makes it difficult to complete the Annual Teaching Plan (ATP). They complained about materials and equipment and about learners' readiness to work. Evidence from the RTOP shows that after the intervention, participants learned to encourage exploration and to use learners' questions and comments to determine the direction and focus of classroom discussions.
Keywords: problem-solving, self-directed, critical thinking, intervention
Procedia PDF Downloads 121
436 Modeling of Tsunami Propagation and Impact on West Vancouver Island, Canada
Authors: S. Chowdhury, A. Corlett
Abstract:
Large tsunamis strike the British Columbia coast every few hundred years. The Cascadia Subduction Zone, which extends along the Pacific coast from Vancouver Island to Northern California, is one of the most seismically active regions in Canada. Significant earthquakes have occurred in this region, including the 1700 Cascadia earthquake with an estimated magnitude of 9.2. Based on geological records, experts have predicted that a 'great earthquake' of similar magnitude may happen in this region at any time. Such an earthquake is expected to generate a large tsunami that could impact the coastal communities on Vancouver Island. Since many of these communities are in remote locations, they are particularly vulnerable, as post-earthquake relief efforts would be hampered by damage to critical road infrastructure. To assess the coastal vulnerability of these communities, a hydrodynamic model was developed using MIKE-21 software. We considered 500-year probabilistic earthquake design criteria, including subsidence, in this model. Bathymetry information was collected from the Canadian Hydrographic Service (CHS) and the National Oceanic and Atmospheric Administration (NOAA). An aerial survey of the communities was conducted using a Cessna 172 aircraft, and the information was converted to generate a topographic digital elevation map. Both sets of survey information were incorporated into the model, whose domain was about 1000 km x 1300 km. The model was calibrated against the tsunami that occurred off the west coast of Moresby Island on October 28, 2012: water levels from the model were compared with two tide gauge stations close to Vancouver Island, and the model output shows satisfactory agreement. For this study, the design water level was taken as the High Water Level plus the sea level rise projected for the year 2100. Hourly wind speeds from eight directions were collected from different wind stations, and a 200-year return period wind speed was used in the model for storm events. The regional model was set for a 12-hour simulation period, which takes more than 16 hours per simulation on a dual Xeon E7 CPU computer plus a K80 GPU. Boundary information for the local model was generated from the regional model. The local model was developed using a high-resolution mesh to estimate coastal flooding for the communities. It was observed from this study that many communities would be affected by a Cascadia tsunami, and inundation maps were developed for them. The infrastructure inside the coastal inundation areas was identified. Coastal vulnerability planning and resilient design solutions will be implemented to significantly reduce the risk.
Keywords: tsunami, coastal flooding, coastal vulnerability, earthquake, Vancouver, wave propagation
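As context for the wave-propagation physics that MIKE-21 solves in two dimensions, the following is a toy one-dimensional linear shallow-water solver; the depth, grid, and initial hump are invented for illustration and are unrelated to the study's calibrated model:

```python
# Illustrative 1D linear shallow-water solver, a simplified analogue of the
# physics a 2D hydrodynamic model resolves; all numbers are toy values.
import numpy as np

g, h0 = 9.81, 3000.0          # gravity; uniform ocean depth (toy value)
nx, dx = 500, 2000.0          # 500 cells of 2 km -> a 1000 km transect
x = np.arange(nx) * dx
eta = np.exp(-((x - 1.0e5) / 2.0e4) ** 2)   # 1 m Gaussian hump near x = 100 km
q = np.zeros(nx + 1)          # volume flux on cell faces (ends stay closed)

dt = 0.5 * dx / np.sqrt(g * h0)             # CFL-limited time step

for _ in range(800):
    q[1:-1] -= g * h0 * dt / dx * (eta[1:] - eta[:-1])   # momentum
    eta     -= dt / dx * (q[1:] - q[:-1])                # continuity

# the hump splits into two waves travelling at sqrt(g*h0) ~ 172 m/s
print(f"simulated {800 * dt / 3600:.2f} h; max eta = {eta.max():.3f} m")
```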
Procedia PDF Downloads 132
435 Use of Locomotor Activity of Rainbow Trout Juveniles in Identifying Sublethal Concentrations of Landfill Leachate
Authors: Tomas Makaras, Gintaras Svecevičius
Abstract:
Landfill waste is a common problem, with economic and environmental impacts even after a landfill is closed. Landfill leachate contains a high density of various persistent compounds, such as heavy metals and organic and inorganic materials. As persistent compounds are slowly degradable or even non-degradable in the environment, they often produce sublethal or even lethal effects on aquatic organisms. The aims of the present study were to estimate the sublethal effects of the Kairiai landfill (WGS: 55°55‘46.74“, 23°23‘28.4“) leachate on the locomotor activity of rainbow trout Oncorhynchus mykiss juveniles, using the original system package developed in our laboratory for automated monitoring, recording, and analysis of aquatic organisms' activity, and to determine patterns of fish behavioral response to sublethal levels of leachate. Four concentrations of leachate were chosen: 0.125, 0.25, 0.5, and 1.0 mL/L (0.0025, 0.005, 0.01, and 0.02 of the 96-hour LC50, respectively). Locomotor activity was measured after 5, 10, and 30 minutes of exposure during 1-minute test periods for each fish (7 fish per treatment). The threshold effect concentration amounted to 0.18 mL/L (0.0036 of the 96-hour LC50). This concentration was found to be 2.8-fold lower than the concentration generally assumed to be "safe" for fish. At higher concentrations, the landfill leachate solution elicited a behavioral response of the test fish to sublethal levels of pollutants. The ability of the rainbow trout to detect and avoid contaminants appeared after 5 minutes of exposure. The intensity of locomotor activity reached a peak within 10 minutes and evidently decreased after 30 minutes; this could be explained by the physiological and biochemical adaptation of fish to altered environmental conditions. It was established that the locomotor activity of juvenile trout depends on leachate concentration and exposure duration. Modeling of these parameters showed that the activity of juveniles increased at higher leachate concentrations but slightly decreased with increasing exposure duration. The experimental results confirm that the behavior of rainbow trout juveniles is a sensitive and rapid biomarker that can be used, in combination with the system for fish behavior monitoring, registration, and analysis, to determine sublethal concentrations of pollutants in ambient water. Further research should focus on software improvements to include more behavioral parameters of aquatic organisms and to investigate the most rapid and appropriate behavioral responses in different species. In practice, this study could form the basis for the development of biological early-warning systems (BEWS).
Keywords: fish behavior biomarker, landfill leachate, locomotor activity, rainbow trout juveniles, sublethal effects
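The concentration-duration modeling described above can be illustrated with a simple two-predictor linear fit; the activity values below are fabricated and serve only to show the shape of such a model:

```python
# Toy two-factor linear model of locomotor activity vs. leachate
# concentration and exposure time; responses are invented, not study data.
import numpy as np

conc = np.array([0.125, 0.25, 0.5, 1.0] * 3)   # mL/L, repeated per time point
time = np.repeat([5, 10, 30], 4)               # minutes of exposure
activity = 10 + 8 * conc - 0.05 * time + np.random.default_rng(1).normal(0, 0.5, 12)

X = np.column_stack([np.ones_like(conc), conc, time])
beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
print(f"intercept={beta[0]:.2f}, conc effect={beta[1]:.2f} (+), time effect={beta[2]:.3f} (-)")
```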
Procedia PDF Downloads 273
434 Development of Polylactic Acid Insert with a Cinnamaldehyde-Betacyclodextrin Complex for Cape Gooseberry (Physalis Peruviana L.) Packed
Authors: Gómez S. Jennifer, Méndez V. Camila, Moncayo M. Diana, Vega M. Lizeth
Abstract:
The cape gooseberry is a climacteric fruit, and Colombia is one of its principal exporters in the world. Environmental conditions of temperature and relative humidity decrease the titratable acidity and pH. These conditions and fruit maturation result in proliferation of the fungal disease Botrytis cinerea. Plastic packaging for fresh cape gooseberries is used for protection against mechanical damage but creates an atmosphere suitable for fungal growth. Beta-cyclodextrins are currently implemented as coatings for the encapsulation of hydrophobic compounds, for example, bioactive compounds from essential oils such as cinnamaldehyde, which has a high antimicrobial capacity but is volatile. In this article, the casting method was used to obtain a polylactic acid (PLA) polymer film containing the beta-cyclodextrin-cinnamaldehyde inclusion complex, generating an insert that allowed controlled release of the antifungal substance in packed cape gooseberries to decrease contamination by Botrytis cinerea in a latent state during storage. For the encapsulation technique, three ratios for the cinnamaldehyde:beta-cyclodextrin inclusion complex were proposed: 25:75, 40:60, and 50:50. Spectrophotometry, colorimetry in L*a*b* coordinate space, and scanning electron microscopy (SEM) were performed for characterization of the complex. Subsequently, two ratios of Tween to water (40:60 and 50:50) were used to obtain the polylactic acid (PLA) film. To determine mechanical and physical parameters, colorimetry in L*a*b* coordinate space, atomic force microscopy, and stereoscopy were performed to determine the transparency and flexibility of the film. In both phases, Statgraphics software was used to determine the best ratio: for encapsulation it was 50:50, with an encapsulation efficiency of 65.92%, while for casting the 40:60 ratio obtained greater transparency and flexibility, which permitted its incorporation into the polymeric packaging. A release assay was also conducted under ambient temperature conditions to evaluate the concentration of cinnamaldehyde inside the packaging through gas chromatography for three weeks. It was found that the insert provided controlled release; nevertheless, a higher cinnamaldehyde concentration is needed to reach the minimum inhibitory concentration for the fungus Botrytis cinerea (0.2 g/L). The homogeneity of the cinnamaldehyde gas phase inside the packaging could be improved by considering other insert configurations. This development aims to contribute to emerging food preservation technologies based on the controlled release of antifungals, reducing the degradation of the physico-chemical and sensory properties of the fruit caused by contamination with microorganisms in the postharvest stage.
Keywords: antifungal, casting, encapsulation, postharvest
Procedia PDF Downloads 75
433 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes (3D printing). These robots have advantages such as speed and lightness that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of a linear delta robot for additive manufacturing. Firstly, a methodology based on structured product-development processes, with phases of informational design, conceptual design, and detailed design, is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to derive a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, functional modeling of the system is performed through an IDEF0 diagram, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on verification of the kinematic model over a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population (point cloud) provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis enabled the design of the robot mechanism as a function of the prescribed workspace. Finally, the implementation of the robotic platform, based on a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
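A minimal sketch of the genetic-algorithm idea follows: it searches over arm length and base radius so that every point of a cylindrical workspace cloud passes a simplified reachability test. The kinematic check is a toy stand-in for the authors' full singularity-aware model, and all dimensions are invented:

```python
# Toy dimensional synthesis for a linear delta robot: minimize arm length L
# plus base radius R subject to every workspace point being reachable.
import numpy as np

rng = np.random.default_rng(0)

# Prescribed cylindrical workspace: radius 150 mm, height 200 mm
n = 300
r = 150 * np.sqrt(rng.random(n))
th = 2 * np.pi * rng.random(n)
cloud = np.column_stack([r * np.cos(th), r * np.sin(th), 200 * rng.random(n)])

towers = np.array([[np.cos(a), np.sin(a)] for a in (0, 2*np.pi/3, 4*np.pi/3)])
RAIL = 600.0  # carriage travel along the vertical rails, mm

def feasible(L, R):
    for t in towers * R:                       # tower positions in the xy-plane
        d2 = ((cloud[:, :2] - t) ** 2).sum(1)  # horizontal distance^2 to tower
        if (d2 >= L**2).any():                 # arm cannot reach horizontally
            return False
        z = cloud[:, 2] + np.sqrt(L**2 - d2)   # required carriage height
        if (z < 0).any() or (z > RAIL).any():
            return False
    return True

def fitness(ind):
    L, R = ind
    return L + R if feasible(L, R) else 1e6 + L + R   # penalize infeasibility

pop = rng.uniform([150, 150], [600, 400], size=(40, 2))  # individuals (L, R)
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]               # truncation selection
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 5, (40, 2))
    pop = np.clip(children, 100, 800)                    # mutation + bounds

best = min(pop, key=fitness)
print(f"minimum arm length ~{best[0]:.0f} mm, base radius ~{best[1]:.0f} mm")
```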
Procedia PDF Downloads 190
432 Evaluation of Tensile Strength of Natural Fibres Reinforced Epoxy Composites Using Fly Ash as Filler Material
Authors: Balwinder Singh, Veerpaul Kaur Mann
Abstract:
A composite material is formed by the combination of two or more phases or materials. Basalt fiber, derived from natural minerals, is a kind of fiber being introduced in the polymer composite industry due to its good mechanical properties, similar to synthetic fibers, its low cost, and its environmental friendliness. There is also a rising trend towards the use of industrial wastes as fillers in polymer composites, with the aim of improving the properties of the composites. The mechanical properties of fiber-reinforced polymer composites are influenced by various factors such as fiber length, fiber weight %, filler weight %, and filler size. Thus, a detailed study has been done on the characterization of short-chopped basalt fiber-reinforced polymer matrix composites using fly ash as filler. Taguchi's L9 orthogonal array has been used to develop the composites, considering fiber length (6, 9, and 12 mm), fiber weight % (25, 30, and 35%), and filler weight % (0, 5, and 10%) as input parameters with their respective levels, and a thorough analysis of the mechanical characteristics (tensile strength and impact strength) has been done using ANOVA with the help of MINITAB 14 software. The investigation revealed that fiber weight % is the most significant parameter affecting tensile strength, followed by fiber length and filler weight %, respectively, while impact characterization showed that fiber length is the most significant factor, followed by fly ash weight %. The introduction of fly ash proved beneficial in both characterizations, with enhanced values up to 5% fly ash weight. The present study on natural-fibre-reinforced epoxy composites using fly ash as filler material examines the effect of input parameters on tensile strength in order to maximize the tensile strength of the composites. Composites were fabricated based on a Taguchi L9 orthogonal array design of experiments using three factors (fibre type, fibre weight %, and fly ash %) with three levels of each factor. The optimization of the composition of natural-fibre-reinforced composites using ANOVA to obtain maximum tensile strength revealed that natural fibres, along with fly ash, can be successfully used with epoxy resin to prepare polymer matrix composites with good mechanical properties. Paddy: paddy fibre gives high elasticity to the fibre composite due to the approximately hexagonal structure of the cellulose present in paddy fibre. Coir: coir fibre gives lower tensile strength than paddy fibre, as coir fibre is brittle in nature; when pulled, breakage occurs, showing lower tensile strength. Banana: banana fibre has the least tensile strength in comparison to paddy and coir fibres due to lower cellulose content. Higher fibre weight leads to a reduction in tensile strength due to an increased number of air-pocket nuclei. Increasing fly ash content reduces tensile strength due to non-bonding of the fly ash particles with the natural fibre; fly ash is also not as strong as the epoxy resin, which further reduces tensile strength.
Keywords: tensile strength, epoxy resin, basalt fiber, Taguchi, polymer matrix, natural fiber
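The Taguchi analysis described above can be sketched as follows: a standard L9(3^3) array, hypothetical tensile-strength responses, and percent contributions computed from each factor's sum of squares (the arithmetic behind the ANOVA table produced in MINITAB):

```python
# Taguchi-style contribution analysis on an L9 orthogonal array;
# the response values (MPa) are invented for illustration.
import numpy as np

# Standard L9 array: 9 runs, 3 factors, levels coded 0/1/2
L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
               [1,0,1],[1,1,2],[1,2,0],
               [2,0,2],[2,1,0],[2,2,1]])
factors = ["fiber length", "fiber weight %", "fly ash %"]
y = np.array([31.2, 33.5, 30.1, 35.8, 34.9, 32.4, 33.0, 36.1, 31.7])

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
for j, name in enumerate(factors):
    level_means = [y[L9[:, j] == lv].mean() for lv in range(3)]
    ss_factor = 3 * sum((m - grand) ** 2 for m in level_means)
    print(f"{name}: contribution {100 * ss_factor / ss_total:.1f} %")
```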
Procedia PDF Downloads 49
431 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers
Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy
Abstract:
In urban planning, an increasing number of cities require wind analysis to verify the comfort of public spaces and the areas around buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the spot of analysis. The approximate input data on the project surroundings produce imprecise results for this type of analysis; they can only be used to get the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, these devices generate geo-localized data on wind, such as speed, temperature, and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by wifi with central equipment, which shares data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site that can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as STAR-CCM+ software) and hence the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers that were deployed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows considering different ways of capturing wind energy. The objective of this approach is to categorize the different types of wind in urban areas. In particular, the identification of the minimum and maximum wind spectrum helps define the choice and performance of the wind energy capturing devices that could be installed there: the localization on the roof of a building, the type of wind, the altimetry of the device in relation to the levels of the roofs, and the potential nuisances generated. The method allows identifying the characteristics of wind turbines in order to maximize their performance on an urban site with turbulent wind.
Keywords: computational fluid dynamics simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology
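A minimal sketch of the server-side aggregation step, turning geo-localized anemometer records into a per-site minimum/maximum wind spectrum, is shown below; the CSV schema (site, lat, lon, speed_ms) is an assumption for illustration, not the project's actual data format:

```python
# Aggregate geo-localized wind records into a per-site summary; the tiny
# inline CSV stands in for data streamed from the anemometer array.
import io
import pandas as pd

csv = io.StringIO(
    "site,timestamp,lat,lon,speed_ms\n"
    "roof_A,2020-05-01T12:00,48.857,2.352,3.4\n"
    "roof_A,2020-05-01T12:05,48.857,2.352,5.1\n"
    "roof_B,2020-05-01T12:00,48.861,2.349,7.8\n"
    "roof_B,2020-05-01T12:05,48.861,2.349,6.2\n"
)
df = pd.read_csv(csv, parse_dates=["timestamp"])

# Min/max spectrum and mean per site -> inputs for device selection
summary = df.groupby("site")["speed_ms"].agg(["min", "max", "mean"])
print(summary)
```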
Procedia PDF Downloads 103
430 Development of the Integrated Quality Management System of Cooked Sausage Products
Authors: Liubov Lutsyshyn, Yaroslava Zhukova
Abstract:
Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has also led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic work of the organizations directly involved in the production, storage, and sale of food products, as well as without management of end-to-end traceability and exchange of information. The aim of this research is the development of an integrated quality management and safety assurance system based on the principles of HACCP, traceability, and a system approach, with the creation of an algorithm for the identification and monitoring of the parameters of the technological process of manufacturing cooked sausage products. A methodology for implementing the integrated system based on the principles of HACCP, traceability, and a system approach during the manufacture of cooked sausage products, for effective provision of the defined properties of the finished product, has been developed. As a result of the research, an evaluation technique and performance criteria for the implementation and operation of the quality management and safety assurance system based on the principles of HACCP have been developed and substantiated. The paper reveals regularities of the influence of applying HACCP principles, traceability, and the system approach on the quality and safety parameters of the finished product. Regularities in the identification of critical control points were also determined. The algorithm of functioning of the integrated quality management and safety assurance system is described, and key requirements are defined for the development of software allowing the prediction of finished-product properties, as well as the timely correction of the technological process and traceability of manufacturing flows. Based on the obtained results, a typical scheme of the integrated quality management and safety assurance system based on HACCP principles, with elements of end-to-end traceability and a system approach for the manufacture of cooked sausage products, has been developed. Quantitative criteria for evaluating the performance of the quality management and safety assurance system have been developed. A set of guidance documents for the implementation and evaluation of the integrated system based on the HACCP principles in meat processing plants has also been developed. The research revealed the effectiveness of continuous monitoring of the manufacturing process with control at the identified critical control points. The optimal number of critical control points for the manufacture of cooked sausage products has been substantiated. The main results of the research were appraised during 2013-2014 under the conditions of seven enterprises of the meat processing industry and have been implemented at JSC «Kyiv meat processing plant».
Keywords: cooked sausage products, HACCP, quality management, safety assurance
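To make the monitoring idea concrete, here is a hedged sketch of the kind of critical-control-point check such software could perform; the CCP names and limits are invented examples, not the system's actual parameters:

```python
# Toy critical-control-point (CCP) monitoring loop: compare each reading
# against its critical limits and flag deviations for corrective action.
from dataclasses import dataclass

@dataclass
class CCP:
    name: str
    low: float    # lower critical limit
    high: float   # upper critical limit

    def check(self, reading: float) -> bool:
        return self.low <= reading <= self.high

ccps = [CCP("cooking core temperature, C", 70.0, 75.0),
        CCP("chilling temperature, C", 0.0, 4.0)]
readings = {"cooking core temperature, C": 68.5, "chilling temperature, C": 2.1}

for ccp in ccps:
    value = readings[ccp.name]
    status = "OK" if ccp.check(value) else "DEVIATION -> corrective action"
    print(f"{ccp.name}: {value} [{status}]")
```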
Procedia PDF Downloads 248
429 Religious Government Interaction in Urban Settings
Authors: Rebecca Sager, Gary Adler, Damon Mayrl, Jonathan Cooley
Abstract:
The United States' unique constitutional structure and religious roots have fostered the flourishing of local communities through the close interaction of church and state. Today, these local relationships play out in new circumstances, including increased religious diversity and changing jurisprudence toward more accommodating church-state interaction. This project seeks to understand the meanings of church-state interaction among diverse religious leaders in a variety of local settings. Using data from interviews with over 200 religious leaders in six US states, we examine how religious groups interact with various non-elected and elected government officials. We interviewed local religious actors in eight communities characterized by differences in location and religious homogeneity. These include a small city within a major metropolitan area, several religiously diverse cities in various areas across the country, a small college town with religious diversity set in a religiously homogenous rural area, and a small farming community with minimal religious diversity. We identified three types of religious actors in each of our geographic areas: congregations, religious non-profit organizations, and clergy coalitions. Given the well-known difficulties in identifying religious organizations, we used the following to construct a local population list from which to sample: the Association of Religion Data Archives, ProPublica's Nonprofit Explorer, GuideStar, and the Internal Revenue Service Exempt Business Master File. Our sample of interviewees was stratified by three criteria: religious tradition (Christian vs. non-Christian), sectarian orientation (Mainline/Catholic vs. Evangelical Protestant), and organizational form (congregation vs. other). Each interview included the elicitation of local church-state interactions experienced by the organization and its members, the enumeration of information sources for navigating church-state interactions, and the personal and community background of the interviewee. We coded interviews to identify the cognitive schema of "church" and "state," the models of legitimate relations between the two, and the discretion rules for managing interaction and avoiding conflict. We also enumerate the arenas in which, and the issues for which, local state officials are engaged. In this paper, we focus on Korean religious groups and examine how their interactions differ from those of other congregations, including other immigrant congregations. These churches were particularly common in one large metropolitan area. We find that Korean churches are much more likely to be concerned about any governmental interaction and have fewer connections than non-Korean churches, leading to more disconnection from their communities. We argue that, as new immigrant churches in a large city without extensive community ties for many members, Korean churches were particularly wary of too much interaction with any type of government official, even ones that could be potentially helpful. While other immigrant churches, such as Latino-based Catholic groups, were somewhat willing to work with government bodies, Korean churches were the least likely to want to create these connections. Understanding these churches, and how immigrant church identity varies and creates different types of interaction, is crucial to understanding how church-state interaction can be more meaningful across space and place.
Keywords: religion, congregations, government, politics
Procedia PDF Downloads 89
428 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30-80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One technical challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause "flooding" (that is, the pore space is filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blocking liquid-water bridges/plugs (concave and convex forms), slug/plug flow, and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g., slug, droplet, plug, film) of detected liquid water in the test microchannels and yield information pertaining to the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. The potential benefit of this software is that it allows the user to obtain measurements from images of small objects in a more precise and systematic way. Void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management and informs design guidelines for gas delivery microchannels for fuel cells; this is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, air-water two-phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
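The authors' image-processing algorithm was written in MATLAB; the following Python sketch shows an analogous elementary step, estimating a void fraction from a thresholded synthetic frame:

```python
# Toy void-fraction estimate from a thresholded channel image; the synthetic
# "frame" stands in for a real video frame of the microchannel.
import numpy as np

rng = np.random.default_rng(42)
frame = rng.random((120, 400))          # synthetic grayscale frame, 0..1
frame[:, 150:250] += 0.8                # bright band mimicking a water slug
frame = np.clip(frame, 0, 1)

water_mask = frame > 0.7                # simple global intensity threshold
liquid_fraction = water_mask.mean()     # area fraction occupied by liquid
void_fraction = 1.0 - liquid_fraction   # gas-occupied fraction of the channel

print(f"liquid fraction: {liquid_fraction:.2f}, void fraction: {void_fraction:.2f}")
```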
Procedia PDF Downloads 313
427 Hypoglossal Nerve Stimulation (Baseline vs. 12 Months) for Obstructive Sleep Apnea: A Meta-Analysis
Authors: Yasmeen Jamal Alabdallat, Almutazballlah Bassam Qablan, Hamza Al-Salhi, Salameh Alarood, Ibraheem Alkhawaldeh, Obada Abunar, Adam Abdallah
Abstract:
Obstructive sleep apnea (OSA) is a disorder caused by the repeated collapse of the upper airway during sleep. It is the most common cause of sleep-related breathing disorder; OSA can cause loud snoring and daytime fatigue, or more severe problems such as high blood pressure, cardiovascular disease, coronary artery disease, insulin-resistant diabetes, and depression. The hypoglossal nerve stimulator (HNS) is an implantable medical device that reduces the occurrence of obstructive sleep apnea by electrically stimulating the hypoglossal nerve in rhythm with the patient's breathing, causing the tongue to move. This stimulation helps keep the patient's airway clear during sleep. This systematic review and meta-analysis aimed to assess the clinical outcomes of hypoglossal nerve stimulation as a treatment for obstructive sleep apnea. A computerized literature search of PubMed, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials was conducted from inception until August 2022. Studies assessing the following clinical outcomes were pooled in the meta-analysis using Review Manager software: Apnea-Hypopnea Index (AHI), Epworth Sleepiness Scale (ESS), Functional Outcomes of Sleep Questionnaire (FOSQ), Oxygen Desaturation Index (ODI), and oxygen saturation (SaO2). We assessed the quality of the studies according to the Cochrane risk-of-bias tool for randomized trials (RoB 2), the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool, and a modified version of the Newcastle-Ottawa Scale (NOS) for the non-comparative cohort studies. Thirteen studies (six clinical trials and seven prospective cohort studies) with a total of 817 patients were included in the meta-analysis. The results for AHI were reported in 11 studies examining 696 OSA patients; we found a significant improvement in AHI after 12 months of HNS (MD = 18.2; 95% CI 16.7 to 19.7; I2 = 0%; P < 0.00001). Further, 12 studies reported ESS results after 12 months of intervention, with a significant improvement in sleepiness among the 757 examined OSA patients (MD = 5.3; 95% CI 4.75 to 5.86; I2 = 65%; P < 0.0001). Moreover, nine studies involving 699 participants reported FOSQ results after 12 months of HNS, with a significant reported improvement (MD = -3.09; 95% CI -3.41 to -2.77; I2 = 0%; P < 0.00001). In addition, ten studies reported ODI results, with a significant improvement after 12 months of HNS among the 817 examined patients (MD = 14.8; 95% CI 13.25 to 16.32; I2 = 0%; P < 0.00001). Hypoglossal nerve stimulation showed a significant positive impact on obstructive sleep apnea patients after 12 months of therapy in terms of the apnea-hypopnea index, oxygen desaturation index, manifestations of the behavioral morbidity associated with obstructive sleep apnea, and functional status related to sleepiness.
Keywords: apnea, meta-analysis, hypoglossal, stimulation
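The pooled mean differences quoted above rest on inverse-variance weighting; the sketch below reproduces that arithmetic (and the I2 statistic) on invented per-study values:

```python
# Fixed-effect inverse-variance pooling of mean differences, the basic
# computation behind Review Manager output; per-study values are invented.
import numpy as np

md = np.array([17.5, 18.9, 18.0])   # per-study mean differences in AHI
se = np.array([1.2, 0.9, 1.5])      # their standard errors

w = 1.0 / se**2                      # inverse-variance weights
pooled = (w * md).sum() / w.sum()
pooled_se = np.sqrt(1.0 / w.sum())
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Cochran's Q and the I^2 heterogeneity statistic
Q = (w * (md - pooled) ** 2).sum()
I2 = max(0.0, (Q - (len(md) - 1)) / Q) * 100

print(f"pooled MD = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f}), I2 = {I2:.0f}%")
```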
Procedia PDF Downloads 116
426 Novel Framework for MIMO-Enhanced Robust Selection of Critical Control Factors in Auto Plastic Injection Moulding Quality Optimization
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
Apparent quality defects such as warpage, shrinkage, and weld lines are a persistent phenomenon in the mass production of automotive plastic appearance parts. These frequently occurring manufacturing defects should be addressed concurrently so as to achieve a final product with acceptable quality standards. Determining the significant control factors that simultaneously affect multiple quality characteristics can significantly improve the optimization results by eliminating the deviating effect of so-called ineffective outliers. Hence, a robust quantitative approach needs to be developed by which the major control factors and their levels can be effectively determined, helping to improve the reliability of the optimal processing parameter design. The primary objective of the current study was therefore to develop a systematic methodology for the selection of significant control factors (SCF) relevant to the multiple quality optimization of automotive plastic appearance parts (APAP). An auto bumper was used as a specimen with quality and production characteristics most representative of the APAP group. A preliminary failure modes and effects analysis (FMEA) was conducted to nominate a database of pseudo-significant control factors prior to the optimization phase. Later, CAE simulation with Moldflow was implemented to manipulate four prevalent plastic injection quality defects of concern to the APAP group: warpage deflection, volumetric shrinkage, sink marks, and weld lines. Furthermore, a step-backward elimination searching method (SESME) was developed for the systematic pre-optimization selection of SCF, based on hierarchical orthogonal array design and priority-based one-way analysis of variance (ANOVA). The development of the robust parameter design in the second phase was based on the DOE module of Minitab v16 statistical software. Based on the one-way ANOVA F-test (F(0.05; 2, 14)) results, it was concluded that for warpage deflection, the material mixture percentage was the most significant control factor, yielding a 58.34% contribution, while for the other three quality defects, melt temperature was the most significant control factor, with contributions of 25.32%, 84.25%, and 34.57% for sink mark, shrinkage, and weld line strength control, respectively. The results on the least significant control factors meaningfully revealed injection fill time as the least significant factor for both warpage and sink marks, with respective contributions of 1.69% and 6.12%. For shrinkage and weld line defects, the least significant control factors were holding pressure and mold temperature, with overall contributions of 0.23% and 4.05%, respectively.
Keywords: plastic injection moulding, quality optimization, FMEA, ANOVA, SESME, APAP
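The step-backward elimination idea can be sketched as a loop that repeatedly drops the factor with the weakest one-way ANOVA p-value; the data and factor names below are invented, and the loop is a simplified reading of SESME, not the authors' exact procedure:

```python
# Simplified backward elimination of control factors via one-way ANOVA.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)
# 27 runs, 3 balanced levels per factor (9 runs per level)
levels = {f: rng.permutation(np.repeat(np.arange(3), 9)) for f in
          ["melt_temp", "fill_time", "hold_pressure", "mixture_pct"]}
# Hypothetical warpage response dominated by the mixture percentage
warpage = 0.8 * levels["mixture_pct"] + rng.normal(0, 0.4, 27)

candidates = dict(levels)
while True:
    pvals = {f: f_oneway(*[warpage[lv == k] for k in range(3)]).pvalue
             for f, lv in candidates.items()}
    worst, p = max(pvals.items(), key=lambda kv: kv[1])
    if p < 0.05 or len(candidates) == 1:
        break
    print(f"eliminating {worst} (p = {p:.3f})")
    candidates.pop(worst)

print("retained significant control factors:", list(candidates))
```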
Procedia PDF Downloads 349
425 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.) and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into mechanisms for reducing carbon emissions. Particularly in sub-Saharan Africa, the constraint is the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies "what is where?", as a precursor to the quantification of carbon stocks in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful, decision-support information from this large amount of data, satellite data need to be organized to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis. A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018 to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing the different agroforestry systems, plus qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) characteristics of the different systems (main species, management, area, etc.), and (iii) an assessment report on the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
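The initial stratification step can be sketched as NDVI computation followed by k-means clustering of per-pixel temporal profiles; the reflectance arrays below are synthetic placeholders for data-cube reads:

```python
# NDVI time series per pixel, then unsupervised clustering into strata.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
red = rng.uniform(0.05, 0.3, size=(4, 9))   # 4 dates, 9 pixels
nir = rng.uniform(0.3, 0.6, size=(4, 9))

ndvi = (nir - red) / (nir + red)             # shape (dates, pixels)
profiles = ndvi.T                            # one temporal profile per pixel

strata = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
print("stratum label per pixel:", strata)
```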
Procedia PDF Downloads 146
424 Research of the Factors Affecting the Administrative Capacity of Enterprises in the Logistic Sector of Bulgaria
Authors: R. Kenova, K. Anguelov, R. Nikolova
Abstract:
The human factor plays a major role in boosting the competitive capacity of logistic enterprises. This is of particular importance when it comes to logistic companies: on the one hand, they must comply strictly with legislation; on the other hand, they must be competitive in terms of pricing and delivery timelines. Moreover, their policies should allow them to be as flexible as possible. All these circumstances create very serious challenges for the qualification, motivation, and experience of the human resources working in logistic companies or in the logistic departments of trade and industrial enterprises. The geographic location of Bulgaria gives it some specific competitive advantages in the transport of goods from Europe to Asia and back, and a number of logistic companies operate in this sphere in Bulgaria. In the current paper, the authors aim to establish the condition of the administrative capacity and human resources in the logistic companies and logistic departments of trade and industrial companies in Bulgaria, in order to propose some guidelines for improving their effectiveness. Through independent empirical research conducted in Bulgarian logistic, trade, and industrial enterprises, the authors investigate both the degree of impact and the interdependence of various factors that characterize administrative capacity. The study was conducted with a prepared questionnaire in the format of direct interviews with the respondents. The poll covered 50 respondents: general managers of industrial or trade enterprises; logistic managers of industrial or trade enterprises; general managers of forwarding companies, with either their own or hired transport; experts from the Bulgarian Association of Logistics; logistic lobbyists; and scientists in the relevant area. The data were gathered over 3 months, then arranged with a specialized software program and analyzed against preset criteria. Based on the results of this methodological toolbox, it can be claimed that there is a correlation between the individual criteria, and a relationship between administrative capacity and the other factors that determine the competitiveness of the studied companies is established. In this paper, the authors present the results of the empirical research concerning the number of staff and the workload in the logistic departments of the enterprises. The experience related to the management of logistic processes and the competence of the human resources are also discussed. Moreover, the overload level of the logistic specialists is analyzed as one of the main threats leading to mistakes and lost clients. The paper stands behind the thesis that it is indispensable to form an effective and efficient administrative capacity based on the number, qualification, experience, and motivation of the staff in logistic companies. The paper ends with recommendations about the qualification and experience of the specialists in logistic departments; the provision of effective and efficient administrative capacity in the logistic departments; and the interdependence of the human factor and the other factors that influence enterprise competitiveness.
Keywords: administrative capacity, human resources, logistic competitiveness, staff qualification
Procedia PDF Downloads 153
423 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into users' preferences. Instead of presenting plain information, classifying different aspects of browsing, such as bookmarks, history, and the download manager, into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, has security constraints, and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy/security. This approach provides more relevant results than current standalone solutions because it uses content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual, and secure data that cannot be replicated. The proposal extracts different features of the webpage, which are run through an algorithm to classify the page into multiple categories. A Naive Bayes based engine was chosen for this solution for its inherent advantage of using limited resources compared to other classification algorithms such as support vector machines and neural networks. Naive Bayes classification requires a small memory footprint and little computation, making it suitable for the smartphone environment. The solution can partition the model into multiple chunks, which in turn reduces memory usage compared to loading a complete model. Classification of webpages through the integrated engine is faster, more relevant, and more energy-efficient than other standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware; the engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with the standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to apply the classification in differentiated use cases to enhance the browsing experience.
Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification
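A minimal sketch of the core classification step follows, using scikit-learn's multinomial Naive Bayes on a toy corpus; the real engine extracts features from the DOM tree rather than raw text, so this illustrates the principle only:

```python
# Multinomial Naive Bayes webpage categorization from bag-of-words features;
# the tiny corpus stands in for features extracted from the DOM tree.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

pages = ["final score goals league match",
         "homework lecture exam university course",
         "hotel flight itinerary beach visa",
         "breaking headline election report today"]
labels = ["sports", "education", "travel", "news"]

vec = CountVectorizer()
X = vec.fit_transform(pages)
clf = MultinomialNB().fit(X, labels)

new_page = ["cheap flight and hotel booking for the beach"]
print(clf.predict(vec.transform(new_page))[0])   # -> 'travel'
```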
Procedia PDF Downloads 165
422 Evaluation of Yield and Yield Components of Malaysian Palm Oil Board-Senegal Oil Palm Germplasm Using Multivariate Tools
Authors: Khin Aye Myint, Mohd Rafii Yusop, Mohd Yusoff Abd Samad, Shairul Izan Ramlee, Mohd Din Amiruddin, Zulkifli Yaakub
Abstract:
The narrow genetic base is the main obstacle to breeding and genetic improvement in the oil palm industry. In order to broaden the genetic base, the Malaysian Palm Oil Board has extensively collected wild germplasm from 11 African countries of origin: Nigeria, Senegal, Gambia, Guinea, Sierra Leone, Ghana, Cameroon, Zaire, Angola, Madagascar, and Tanzania. The germplasm collections were established and maintained as a field gene bank at the Malaysian Palm Oil Board (MPOB) Research Station in Kluang, Johor, Malaysia, to conserve a wide range of oil palm genetic resources for the genetic improvement of the Malaysian oil palm industry. Therefore, assessing the performance and genetic diversity of the wild materials is very important for understanding the genetic structure of natural oil palm populations and for exploring genetic resources. Principal component analysis (PCA) and cluster analysis are very efficient multivariate tools for the evaluation of genetic variation in germplasm and have been applied to many crops. In this study, eight populations of MPOB-Senegal oil palm germplasm were studied to explore the pattern of genetic variation using PCA and cluster analysis. A total of 20 yield and yield component traits were analyzed with PCA and Ward's clustering using SAS version 9.4 software. The first four principal components, which have eigenvalues > 1, accounted for 93% of the total variation, with values of 44%, 19%, 18%, and 12%, respectively, for each principal component. PC1 showed the highest positive correlations with fresh fruit bunch (0.315), bunch number (0.321), oil yield (0.317), kernel yield (0.326), total economic product (0.324), and total oil (0.324), while PC2 had the largest positive associations with oil to wet mesocarp (0.397) and oil to fruit (0.458). The oil palm populations were grouped into four distinct clusters based on the 20 evaluated traits, implying that high genetic variation existed among the germplasm. Cluster 1 contains two populations, SEN 12 and SEN 10, while cluster 2 has only one population, SEN 3. Cluster 3 consists of three populations, SEN 4, SEN 6, and SEN 7, while SEN 2 and SEN 5 were grouped in cluster 4. Cluster 4 showed the highest mean values of fresh fruit bunch, bunch number, oil yield, kernel yield, total economic product, and total oil, and cluster 1 was characterized by high oil to wet mesocarp and oil to fruit. The desired traits that have the largest positive correlations with the extracted PCs could be utilized for the improvement of the oil palm breeding program. The populations from different clusters with the highest cluster means could be used for hybridization. The information from this study can be utilized for the effective conservation and selection of the MPOB-Senegal oil palm germplasm for future breeding programs.
Keywords: cluster analysis, genetic variability, germplasm, oil palm, principal component analysis
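The PCA-plus-Ward workflow (run in SAS 9.4 by the authors) can be sketched in Python as follows; the trait matrix is randomly generated and stands in for the 8 populations x 20 traits data:

```python
# PCA on standardized traits, then Ward's hierarchical clustering.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
traits = rng.normal(size=(8, 5))                     # 8 populations, 5 mock traits
traits = (traits - traits.mean(0)) / traits.std(0)   # standardize each column

pca = PCA().fit(traits)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))

Z = linkage(traits, method="ward")                   # Ward's clustering
clusters = fcluster(Z, t=4, criterion="maxclust")    # cut tree into 4 clusters
print("cluster membership of the 8 populations:", clusters)
```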
Procedia PDF Downloads 166
421 Identifying Breast Cancer Therapy-Related Cardiac Dysfunction in Kazakhstan: Preliminary Findings of a Cohort Study
Authors: Saule Balmagambetova, Zhenisgul Tlegenova, Saule Madinova
Abstract:
Cardiotoxicity associated with anticancer treatment, now defined as cancer therapy-related cardiac dysfunction (CTRCD), accompanies cancer patients and negatively impacts their survivorship. Currently, a cardio-oncological service is being created in Kazakhstan based on the provisions of the European Society of Cardio-oncology (ESC) Guidelines. In the frames of a pilot project, a cohort study on CTRCD conditions was initiated at the Aktobe Cancer center. One hundred twenty-eight newly diagnosed breast cancer patients started on doxorubicin and/or trastuzumab were recruited. Echocardiography with global longitudinal strain (GLS) assessment, biomarkers panel (cardiac troponin (cTnI), brain natriuretic peptide (BNP), myeloperoxidase (MPO), galectin-3 (Gal-3), D-dimers, C-reactive protein (CRP)), and other tests were performed at baseline and every three months. Patients were stratified by the cardiovascular risks according to the ESC recommendations and allocated into the risk groups during the pre-treatment visit. Of them, 10 (7.8%) patients were assigned to the high-risk group, 48 (37.5%) to the medium-risk group, and 70 (54.7%) to the low-risk group, respectively. High-risk patients have been receiving their cardioprotective treatment from the outset. Patients were also divided by treatment - in the anthracycline-based 83 (64.8%), in trastuzumab- only 13 (10.2%), and in the mixed anthracycline/trastuzumab group 32 individuals (25%), respectively. Mild symptomatic CTRCD was revealed and treated in 2 (1.6%) participants, and a mild asymptomatic variant in 26 (20.5%). Mild asymptomatic conditions are defined as left ventricular ejection fraction (LVEF) ≥50% and further relative reduction in GLS by >15% from baseline and/or a further rise in cardiac biomarkers. The listed biomarkers were assessed longitudinally in repeated-measures linear regression models during 12 months of observation. The associations between changes in biomarkers and CTRCD and between changes in biomarkers and LVEF were evaluated. Analysis by risk groups revealed statistically significant differences in baseline LVEF scores (p 0.001), BNP (p 0.0075), and Gal-3 (p 0.0073). Treatment groups found no statistically significant differences at baseline. After 12 months of follow-up, only LVEF values showed a statistically significant difference by risk groups (p 0.0011). When assessing the temporal changes in the studied parameters for all treatment groups, there were statistically significant changes from visit to visit for LVEF (p 0.003); GLS (p 0.0001); BNP (p<0.00001); MPO (p<0.0001); and Gal-3 (p<0.0001). No moderate or strong correlations were found between the biomarkers values and LVEF, between biomarkers and GLS. Between the biomarkers themselves, a moderate, close to strong correlation was established between cTnI and D-dimer (r 0.65, p<0.05). The dose-dependent effect of anthracyclines has been confirmed: the summary dose has a moderate negative impact on GLS values: -r 0.31 for all treatment groups (p<0.05). The present study found myeloperoxidase as a promising biomarker of cardiac dysfunction in the mixed anthracycline/trastuzumab treatment group. The hazard of CTRCD increased by 24% (HR 1.21; 95% CI 1.01;1.73) per doubling in baseline MPO value (p 0.041). Increases in BNP were also associated with CTRCD (HR per doubling, 1.22; 95% CI 1.12;1.69). No cases of chemotherapy discontinuation due to cardiotoxic complications have been recorded. 
Further observations are needed to gain insight into the ability of biomarkers to predict CTRCD onset.
Keywords: breast cancer, chemotherapy, cardiotoxicity, Kazakhstan
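As a rough illustration of how such per-doubling hazard ratios can be estimated, the sketch below fits a Cox proportional-hazards model on a log2-transformed baseline MPO value using the Python lifelines package. The file and column names are hypothetical; this is not the study's own code.

```python
# A minimal sketch (not the study's code) of estimating the hazard of CTRCD
# per doubling of a baseline biomarker with a Cox proportional-hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("biomarkers.csv")  # hypothetical file with baseline values

# log2-transform MPO so the coefficient corresponds to a doubling of the value.
df["log2_mpo"] = np.log2(df["mpo_baseline"])

cph = CoxPHFitter()
cph.fit(df[["months_to_event", "ctrcd_event", "log2_mpo"]],
        duration_col="months_to_event", event_col="ctrcd_event")

# exp(coef) for log2_mpo is the hazard ratio per doubling of baseline MPO.
cph.print_summary()
```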
Procedia PDF Downloads 92
420 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters and pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study was undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed on ArcGIS software platforms, and data from Sentinel-2 satellite imagery (10-meter resolution) were used for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study delineated flash flood hazard (FFH) zones and generated a land-suitability map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas. Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
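The AHP weighting step described above can be illustrated with a short sketch. The pairwise comparison matrix below is hypothetical (the study combined seven factors); the principal eigenvector yields the factor weights, and the consistency ratio checks the judgments.

```python
# A minimal AHP sketch, assuming a hypothetical 3x3 pairwise comparison
# matrix for slope, drainage density, and rainfall.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# The principal eigenvector of A gives the factor weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights /= weights.sum()

# Consistency ratio (CR): judgments are usually accepted when CR < 0.1.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # random index
print("weights:", weights, "CR:", ci / ri)

# The hazard index is then the weighted overlay of the normalized factor
# rasters, FFH = sum(w_i * raster_i), computed cell by cell in GIS.
```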
Procedia PDF Downloads 38
419 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts the morphology of cellular components for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained by fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained on this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input, and the predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shapes when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, with clear lining and shape of the membrane clearly showing the boundaries of each cell, proportionally improved the nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict other labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
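The normalization step can be sketched as follows, assuming the z-stack is stored as a multi-page TIFF read with the tifffile package. The scaling rule (rescale each slice so its mean is 0.5) is one plausible reading of the abstract, not the authors' published pipeline.

```python
# A minimal sketch of per-slice normalization of a confocal z-stack so that
# each image has a mean pixel intensity of about 0.5. File name assumed.
import numpy as np
import tifffile

stack = tifffile.imread("membrane_zstack.tif").astype(np.float64)  # (20, H, W)

normalized = np.empty_like(stack)
for i, img in enumerate(stack):
    img = img / max(img.max(), 1e-12)  # scale intensities into [0, 1]
    normalized[i] = np.clip(img * (0.5 / img.mean()), 0.0, 1.0)

# Each slice now has a mean close to 0.5 (exact up to the effect of clipping).
print([round(s.mean(), 3) for s in normalized])
```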
Procedia PDF Downloads 205
418 Removal of Heavy Metal Ions from Aqueous Solution by Polymer Enhanced Ultrafiltration Using Unmodified Starch as Biopolymer
Authors: Nurul Huda Baharuddin, Nik Meriam Nik Sulaiman, Mohammed Kheireddine Aroua
Abstract:
The effects of pH, polymer concentration, and metal ion feed concentration on the removal of four selected heavy metals, Zn (II), Pb (II), Cr (III), and Cr (VI), were tested using Polymer Enhanced Ultrafiltration (PEUF). An alternative biopolymer, unmodified starch, is proposed as a binding reagent and compared with the commonly used water-soluble polymers polyethylene glycol (PEG) and polyethyleneimine (PEI) in the removal of the four selected heavy metal ions. The speciation profiles of the four selected complex ions, Zn (II), Pb (II), Cr (III), and Cr (VI), and the presence of hydroxide ions (OH−) in variously charged species were investigated with available software over the relevant pH range. To identify the potential for complexation between metal ions and polymers, potentiometric titration studies were performed before the experimental work. The ultrafiltration experiments were carried out on a laboratory bench-scale system equipped with a 10 kDa polysulfone hollow fiber membrane. Throughout the laboratory work, the rejection coefficient and permeate flux were found to be significantly affected by the main operating parameters, namely pH, polymer composition, and metal ion concentration. The complexation of metal ions with the two binding polymers unmodified starch and PEG occurs through physical attraction of the metal ions to the molecular surface of the polymer, with a high possibility of chemical interaction; with PEI, the metal ions are mainly complexed by the polymer's functional groups. For single metal ion solutions, Zn (II) rejections approaching over 90% were obtained at pH 7 for each tested polymer. The behavior was similar for Pb (II), Cr (III), and Cr (VI), where rejections obtained at lower acidic pH increased towards the neutral pH of 7. Different behavior was found for Cr (VI), where high rejection was only achieved in the acidic pH region with PEI. Polymer concentration and metal ion concentration were found to have a significant effect on rejections. For mixed metal ion solutions, the behavior of metal ion rejections was similar to single metal ion solutions regarding the effect of pH. Rejection values were high at pH 7 for Zn (II) and Cr (III) ions, corresponding to higher rejections with unmodified starch. Pb (II) ions showed high rejections with PEG in mixed metal ion solutions. High Cr (VI) rejection was found with PEI in single and mixed metal ion solutions at neutral pH. The influence of starch's granule structure on the rejection of these four metal ions appears to act in a non-ionic manner. No significant effects on permeate flux were observed at the different pH values, polymer concentrations, and metal ion feed concentrations tested, in either single or mixed metal ion solutions. The Cañizares model was employed as the theoretical model to predict permeate flux and metal ion retention in the study of heavy metal ion removal.
Keywords: polyethyleneimine, polyethylene glycol, polymer-enhanced ultrafiltration, unmodified starch
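For reference, the two figures of merit reported throughout, the rejection coefficient and the permeate flux, follow from simple definitions; the sketch below uses illustrative numbers, not the study's measurements.

```python
# A minimal sketch of the two figures of merit used in PEUF experiments.
def rejection_coefficient(c_permeate: float, c_feed: float) -> float:
    """R = 1 - Cp/Cf, expressed as a fraction (multiply by 100 for %)."""
    return 1.0 - c_permeate / c_feed

def permeate_flux(volume_l: float, area_m2: float, time_h: float) -> float:
    """J = V / (A * t) in L m^-2 h^-1."""
    return volume_l / (area_m2 * time_h)

# Example: Zn(II) at pH 7 with unmodified starch (hypothetical numbers).
print(f"R = {rejection_coefficient(0.8, 10.0):.0%}")       # 92%
print(f"J = {permeate_flux(1.2, 0.05, 2.0):.1f} L/m2/h")   # 12.0
```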
Procedia PDF Downloads 178
417 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption cooling chillers have received growing attention over the past few decades as they allow the use of low-grade heat to produce a cooling effect. Combining this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy, and cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear inconveniences for the generalization of absorption technology, limiting its benefits in reducing CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as solar panels. In the present work, a promising new technology is studied, consisting of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels, and a plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration was developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. Since the concentrations and temperatures along the channels cannot be explicitly determined from the resulting set of equations, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation along the solution channels is shown, allowing its optimization for selected operating conditions; for the case considered, the solution channel length is recommended to be shorter than 3 cm. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling film absorbers using the same solution. Results also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and desorption temperatures typically obtained using flat plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
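A minimal sketch of the two performance indicators discussed above, the cooling-power-to-absorber-volume ratio R and the thermal COP, is given below with placeholder values; the full EES model solves the coupled heat and mass transfer equations, which are not reproduced here.

```python
# A minimal sketch of the performance indicators R and COP; numbers are
# placeholders, not results from the paper.
def power_to_volume_ratio(q_cooling_kw: float, absorber_volume_m3: float) -> float:
    """R = cooling power of the chiller / absorber volume, in kW/m^3."""
    return q_cooling_kw / absorber_volume_m3

def cop(q_evaporator_kw: float, q_generator_kw: float) -> float:
    """Thermal COP of a single-effect absorption chiller."""
    return q_evaporator_kw / q_generator_kw

print(power_to_volume_ratio(4.5, 0.002))  # e.g. 2250.0 kW/m^3
print(cop(4.5, 6.0))                      # e.g. 0.75
```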
Procedia PDF Downloads 286
416 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This was possible because of advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows an S-curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes its final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as neoconservative ideologues, who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective. The dynamics of spreading ideas change once they are stored in the occipital lobe: the human brain is incapable of further evaluating ideas it has accepted as its own, so a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract the hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, one can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers, and the media.
Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam
Procedia PDF Downloads 48
415 Adaptation and Validation of Voice Handicap Index in Telugu Language
Authors: B. S. Premalatha, Kausalya Sahani
Abstract:
Background: Voice is multidimensional, conveying emotion, feeling, and communication. Voice disorders have an adverse effect on the physical, emotional, and functional domains of an individual. Self-rating by clients of their voice problem helps clinicians to plan intervention strategies. The Voice Handicap Index (VHI) is one such self-rating scale; it contains 30 questions that quantify the functional, physical, and emotional impacts of a voice disorder on a patient's quality of life, with 10 questions in each subsection. Adapted and validated versions of the VHI are available in other Indian languages but not in Telugu, a Dravidian language native to India, spoken mainly in Andhra Pradesh and neighbouring states in southern India. Objectives: To adapt and validate the English version of the Voice Handicap Index (VHI) into Telugu, evaluate its internal consistency, and clinically validate it in the Telugu-speaking population. Materials: The study was carried out in three stages. In the first stage, the English version of the VHI was given for forward translation into Telugu to ten experts proficient in reading and writing Telugu and to five speech-language pathologists. In the second stage, the translated Telugu version was given for backward translation to a different group of ten experts (proficient in reading and writing Telugu) and five speech-language pathologists who were native Telugu speakers with good proficiency in Telugu and English. The third stage was the administration of the translated Telugu version to the target population. In total, 40 clinical subjects and 40 normal controls served as participants; each group had 26 males and 14 females in the age range of 20 to 60 years. The clinical group comprised individuals with laryngectomy with tracheoesophageal puncture (n=18), laryngitis (n=11), vocal nodules (n=7), and vocal fold palsy (n=4). Participants were asked to mark each of their experiences on a 5-point equal-appearing scale (0 = never, 1 = almost never, 2 = sometimes, 3 = almost always, 4 = always), with a maximum total score of 120. Results: Statistical analysis was performed using SPSS software (version 22.0.0). Mean, standard deviation, and percentage (%) were calculated for all participants in both groups. The internal consistency of the Telugu VHI was found to be excellent, with consistency scores for the physical, emotional, and functional domains of 0.742, 0.934, and 0.938, respectively. The validity of the scores showed a significant difference between the clinical population and the control group for the physical, emotional, and functional domains and for the total scores (p < 0.001). A negative correlation with age and gender was found for the physical, emotional, and functional domain scores and the total scores in the dysphonic and control groups. Conclusion: The present study indicates that the Telugu VHI is able to discriminate participants with voice pathology from the normal population, which makes it a valid tool to collect information about participants' voices.
Keywords: adaptation, Telugu version, translation, Voice Handicap Index (VHI)
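The internal-consistency figures above are Cronbach's alpha values per domain; a minimal sketch of the computation is shown below, assuming a CSV with one column per item and one row per participant (a hypothetical layout, not the study's data).

```python
# A minimal sketch of Cronbach's alpha for one VHI domain (10 items, 0-4).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

scores = pd.read_csv("vhi_telugu_physical_domain.csv")  # hypothetical file
print(f"alpha = {cronbach_alpha(scores):.3f}")          # e.g. 0.742
```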
Procedia PDF Downloads 279
414 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature inside the material before loading is not uniform, yet simulations commonly impose a constant temperature on the assumption that the temperature homogenizes after some holding time. To be closer to the experiment, the real temperature distribution through the specimen is therefore needed before the mechanical loading. Thus, we present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, representing the real temperature distribution before deformation. Indeed, most numerical simulations consider a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, can be observed and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product, so the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, and the development of a technique for predicting its onset remains challenging. In this perspective, we propose here, in addition to the algorithm providing the temperature distribution before the loading stage, an analytical model to determine the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
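As a simplified stand-in for the temperature-gradient algorithm (the actual UAMP implementation is not reproduced here), the sketch below solves 1D transient heat conduction with an explicit finite-difference scheme, showing how a surface-to-core temperature profile can be obtained before the mechanical loading step. All material data and boundary temperatures are assumed placeholders.

```python
# A minimal 1D sketch, not the authors' UAMP code: explicit finite-difference
# solution of transient heat conduction across the half-thickness of a block.
import numpy as np

alpha = 1.2e-5            # thermal diffusivity [m^2/s], assumed
L, nx = 0.5, 101          # half-thickness of the block [m], grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha  # satisfies the stability limit alpha*dt/dx^2 <= 0.5

T = np.full(nx, 1200.0)   # assumed initial temperature [C]
T_surface = 900.0         # assumed imposed surface temperature [C]

t_hold = 600.0            # holding time [s]
for _ in range(int(t_hold / dt)):
    T[0] = T_surface      # surface node (Dirichlet condition)
    T[-1] = T[-2]         # symmetry at the core (zero flux)
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# A surface-to-core gradient persists after the holding time.
print(f"surface {T[0]:.0f} C, core {T[-1]:.0f} C")
```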
Procedia PDF Downloads 80
413 Life Cycle Datasets for the Ornamental Stone Sector
Authors: Isabella Bianco, Gian Andrea Blengini
Abstract:
The environmental impact of ornamental stones (such as marbles and granites) is largely debated. Starting from the industrial revolution, continuous improvements of machinery led to higher exploitation of this natural resource and to more international interaction between markets; as a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the high percentage of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g., Ecoinvent, Thinkstep, and ELCD), of datasets on the specific technologies employed in the stone production chain. For example, databases do not contain information about diamond wires, chains, or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with data on specific stone processes. To this goal, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (from-cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles, and the surface finishing. Primary data were collected in Italian quarries and transformation plants that use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss). In particular, data on energy, materials, and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. Data were then elaborated with appropriate software to build a life cycle model. The model was realized with free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the production of accurate Life Cycle Inventory data aims at making ILCD-compliant datasets of the most significant processes and technologies of the ornamental stone sector available to researchers and stone experts.
Keywords: life cycle assessment, LCA datasets, ornamental stone, stone environmental impact
Procedia PDF Downloads 233
412 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects
Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town
Abstract:
The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to achieve better decision-making in developing mining production and maintaining safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques for improving decision-making; leveraging some of the most complex techniques in data science, it is used for everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may exceed its built-in capabilities. This paper therefore studies the use of Python or R programming within Power BI dashboards to enable advanced analytics, additional functionality, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data, including spatial representation on maps, field and laboratory test results, and subsurface rock and soil characteristics. Advanced visualizations like borehole logs and stereonets were implemented using Python programming within the Power BI dashboard (see the sketch below), enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rockfall analyses, rock mass characterization, and drone data. This further enhances the dashboard's usefulness in future projects, including operation, development, closure, and rehabilitation phases, and helps minimize the need for multiple software programs within a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
Keywords: geotechnical data analysis, power BI, visualization, decision-making, mining industry
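A minimal sketch of such a custom visual is shown below: a Python script of the kind Power BI runs inside a Python visual, plotting poles to planes on an equal-area stereonet with matplotlib. Power BI injects the selected fields as a pandas DataFrame named dataset; the column names dip and dip_dir are assumptions, and a stand-in DataFrame is created so the sketch also runs outside Power BI.

```python
# A minimal sketch of a Power BI Python visual: poles to planes on an
# equal-area (Schmidt) stereonet. Inside Power BI, the `dataset` DataFrame
# is injected automatically; here a stand-in with assumed columns is built.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

dataset = pd.DataFrame({"dip": [35, 60, 72], "dip_dir": [120, 210, 45]})

# Pole to a plane: trend = dip direction + 180 deg, plunge = 90 deg - dip.
trend = np.radians((dataset["dip_dir"] + 180.0) % 360.0)
plunge = 90.0 - dataset["dip"]

# Equal-area projection: radial distance of each pole from the center.
r = np.sqrt(2.0) * np.sin(np.radians(90.0 - plunge) / 2.0)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_theta_zero_location("N")  # 0 deg at north
ax.set_theta_direction(-1)       # azimuth increases clockwise
ax.set_rlim(0, np.sqrt(2.0) * np.sin(np.radians(45.0)))  # primitive circle
ax.set_yticklabels([])
ax.scatter(trend, r, marker="o")
ax.set_title("Poles to planes (equal-area projection)")
plt.show()
```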
Procedia PDF Downloads 92
411 A Multi-Perspective, Qualitative Study into Quality of Life for Elderly People Living at Home and the Challenges for Professional Services in the Netherlands
Authors: Hennie Boeije, Renate Verkaik, Joke Korevaar
Abstract:
In Dutch national policy, it is promoted that the elderly remain living at home longer, being admitted to a nursing home less often or only later in life. While living at home, it is important that they experience a good quality of life, and care providers in primary care support this. This study investigated what quality of life means for the elderly and which characteristics care should have to support living at home longer with quality of life. A qualitative methodology was used to explore this topic. Four focus groups were conducted: two with elderly people who live at home and their family caregivers, one with district nurses employed in home care services, and one with elderly care physicians working in primary care. In addition, individual interviews were held with general practitioners (GPs). In total, 32 participants took part in the study. The data were thematically analysed with MaxQDA software for qualitative analysis and reported. Quality of life is a multi-faceted term for the elderly; the essence of their description is that they can still undertake activities that matter to them. Good physical health, mental well-being, and social connections enable them to do this, and having control over their own lives is important to some. They are of the opinion that how they experience life and manage old age is related to their resilience and coping. Key terms in GPs' definitions of quality of life are likewise physical and mental health and social contacts: these are the three pillars. Next to this, elderly care physicians mention security and safety, and district nurses add control over one's own life and meaningful daily activities. They agree that with frail elderly people the balance is delicate, and a change in one of the three pillars can cause it to collapse like a house of cards. When discussing what support is needed, professionals agree on access to care with a low threshold, prevention, and life course planning. When care is provided in a timely manner, a worsening of the situation can be prevented. They agree that hospital care is often not needed, since most of the problems of the elderly have to do with care and security rather than with cure per se. GPs can consult elderly care physicians to lower their workload and to bring in specific knowledge. District nurses often signal changes in the situation of the elderly; according to them, the elderly predominantly need someone to watch over them and provide them with a feeling of security. Life course planning and advance care planning can contribute to uniform treatment in line with older adults' wishes. In conclusion, all stakeholders, including the elderly themselves, agree on what constitutes quality of life and the quality of care needed to support it. A future challenge is to shape the conditions for the right skill mix of professionals, cooperation between the professions, and breaking down differences in financing and supply. For the elderly, the challenge is preparing for aging.
Keywords: elderly living at home, quality of life, quality of care, professional cooperation, life course planning, advance care planning
Procedia PDF Downloads 129
410 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)
Authors: Vinay Kumar Vanjakula, Frank Adam
Abstract:
The generation of electricity through wind power is one of the leading renewable energy generation methods. Due to the abundant higher wind speeds far away from shore, the construction of offshore wind turbines began in recent decades. However, the installation of foundation-based (monopile) offshore wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines was expanded on the basis of oil and gas industry experience. For such a floating system, stabilization in harsh conditions is a challenging task, and a robust heavy-weight gravity anchor is needed. Transportation of such an anchor requires a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow the anchor to float while being towed and to be filled with water when lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flow, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may allow the local seabed sediment to be carried off, resulting in scour (erosion). These are threats to the structure's stability. In recent decades, research and knowledge of scouring around fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, but very limited research exists on scouring around a bluff-shaped gravity anchor. The objective of this study involves the application of different numerical models to simulate anchor towing under waves and in calm water conditions. The anchor-lowering part involves the investigation of anchor movements at certain water depths under waves/currents, with the motions of anchor drift, heave, and pitch of special focus. The further study involves anchor scour, where the anchor is installed in the seabed: the flow of the underwater current around the anchor induces vortices, mainly at the front and corners, that develop soil erosion. The study of scouring on a submerged gravity anchor is an interesting research question, since the flow not only passes around the anchor but also over the structure, forming different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and other similar software.
Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour
Procedia PDF Downloads 170
409 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It is therefore necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots of 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants received 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, creating an image bank composed of four folders, labeled T1, T2, T3, and T4, representing the four classes and each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. Matlab© R2022b was also used for the implementation and performance analysis of the model. Performance was evaluated with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an accuracy of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N; this can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of handling images with deep learning for in situ classification of the nutritional status of plants.
Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
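For readers who prefer an open-source route, a minimal PyTorch sketch of the same transfer-learning approach is given below (the study itself used Matlab© R2022b); the folder layout and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# A minimal PyTorch sketch of fine-tuning an ImageNet-pretrained ResNet-50 to
# classify 224x224 RGB leaf tiles into the four N-dose classes T1-T4.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("bean_leaf_tiles", transform=tf)  # T1/ T2/ T3/ T4/
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 4)  # four nitrogen-dose classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```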
Procedia PDF Downloads 20