Search results for: multi variable decision making
1709 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States
Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu
Abstract:
Phytoplankton plays an essential role in freshwater aquatic ecosystems and is the primary group synthesizing organic carbon and providing food sources or energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge and rich experience in microbial morphology to implement. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relation between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among different years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² 0.772, p-value < 0.001). 
In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having (relatively) high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.
Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation
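The agreement between qPCR- and microscope-determined abundance reported above (adjusted R² 0.772) rests on an ordinary least-squares fit. A minimal sketch of how such an R² is computed from paired measurements (the values below are illustrative placeholders, not the study's data):

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot   # coefficient of determination

# Hypothetical paired abundances (log10 cells/mL), not the study's data
qpcr = [3.1, 3.8, 4.2, 4.9, 5.5]
micro = [3.0, 3.9, 4.0, 5.1, 5.4]
a, b, r2 = linear_fit_r2(qpcr, micro)
```

Note that the paper reports an adjusted R², which additionally penalizes the number of predictors; for a single-predictor regression the two values are close.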
Procedia PDF Downloads 105
1708 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: Sahand Golmohammadi, Sana Hosseini Shirazi
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems for underground structures in rock, including tunnels. This method requires six main parameters of the rock mass, namely the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw), and stress reduction factor (SRF). In this regard, to achieve a reasonable and optimal design, identifying the parameters that govern the stability of such structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study how the parameters of a system relate to and interact with each other and, ultimately, with the whole system. This research attempts to determine the most effective (key) parameters among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES method determines the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of the conventional coding methods, which are accompanied by drawbacks such as uncertainty, the Q-system interaction matrix was coded using a technique that is in effect a statistical analysis of the data, determining the correlation coefficients between the parameters.
In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the resulting interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, SRF and Jr exert the maximum and minimum influence on the system, respectively, while RQD and Jw are, respectively, the most and least influenced by the system. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel
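For reference, the six parameters listed above combine multiplicatively into Barton's rock mass quality index, Q = (RQD/Jn)·(Jr/Ja)·(Jw/SRF). A minimal sketch of the calculation (the parameter values are illustrative, not the Azad Dam tunnel data):

```python
def q_value(rqd, jn, jr, ja, jw, srf):
    """Barton's Q-system rock mass quality index:
    Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF), i.e. the product of a block-size
    term, an inter-block shear-strength term, and an active-stress term."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Illustrative ratings: RQD = 75%, three joint sets (Jn = 9),
# rough planar joints (Jr = 1.5), unaltered walls (Ja = 1.0),
# dry excavation (Jw = 1.0), moderate stress (SRF = 2.5)
q = q_value(75, 9, 1.5, 1.0, 1.0, 2.5)  # -> 5.0, "fair" rock
```

The RES weighting discussed in the abstract would effectively adjust how strongly each of these six ratings drives the final classification.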
Procedia PDF Downloads 76
1707 Wet Processing of Algae for Protein and Carbohydrate Recovery as Co-Product of Algal Oil
Authors: Sahil Kumar, Rajaram Ghadge, Ramesh Bhujade
Abstract:
Historically, lipid extraction from dried algal biomass has been a focus area of algal research. It has been realized over the past few years that the lipid-centric approach and conversion technologies that require dry algal biomass face several challenges. Algal culture in cultivation systems contains more than 99% water, with algal concentrations of just a few hundred milligrams per liter (< 0.05 wt%), which makes harvesting and drying energy intensive. Drying the algal biomass followed by extraction also entails the loss of water and nutrients. In view of these challenges, focus has shifted toward developing processes that enable oil production from wet algal biomass without drying. Hydrothermal liquefaction (HTL), an emerging technology, is a thermo-chemical conversion process that converts wet biomass to oil and gas using water as a solvent at high temperature and high pressure. HTL processes wet algal slurry containing more than 80% water and thus significantly reduces the cost penalty of drying the algal biomass. Being inherently feedstock agnostic, HTL can also convert carbohydrates and proteins to fuels, and it recovers water and nutrients. It is most effective with low-lipid (10-30%) algal biomass, where bio-crude yield is two to four times higher than the lipid content of the feedstock. In the early 2010s, research remained focused on increasing the oil yield by optimizing the process conditions of HTL. However, various techno-economic studies showed that simply converting algal biomass to only oil does not make economic sense, particularly in view of low crude oil prices. Making the best use of every component of algae is key to the economic viability of the algae-to-oil process. On investigating HTL reactions at the molecular level, it has been observed that sequential HTL has the potential to recover value-added products along with biocrude and improve the overall economics of the process.
This potential of sequential HTL makes it a most promising technology for converting wet waste to wealth. In this presentation, we will share our experience on the techno-economic and engineering aspects of sequential HTL for conversion of algal biomass to algal bio-oil and co-products.
Keywords: algae, biomass, lipid, protein
Procedia PDF Downloads 219
1706 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory
Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock
Abstract:
Detecting subjectively biased statements is a vital task. This is because this kind of bias, when present in text or other information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text will boost research in the above-mentioned areas significantly. It can also come in handy for platforms like Wikipedia, where the use of neutral language is of importance. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as an upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor as well as an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), a benchmark dataset compiled from Wikipedia edits that removed various biased instances from sentences, on which we also compare our model to existing approaches. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
Keywords: subjective bias detection, machine learning, BERT-BiLSTM-Attention, text classification, natural language processing
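The attention mechanism placed over the Bi-LSTM outputs can be illustrated schematically: each time-step's hidden state is scored, the scores are softmax-normalized, and the attention-weighted sum gives the sentence representation fed to the classifier. A minimal pure-Python sketch of that pooling step (the scoring vector, dimensions, and values are hypothetical, not the paper's configuration):

```python
import math

def attention_pool(hidden_states, score_vector):
    """Attention pooling over a sequence of hidden states: score each state
    by a dot product with a learned vector, softmax the scores, and return
    the attention-weighted sum plus the attention weights."""
    scores = [sum(h * w for h, w in zip(state, score_vector))
              for state in hidden_states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]   # attention weights, sum to 1
    dim = len(hidden_states[0])
    context = [sum(a * state[d] for a, state in zip(alphas, hidden_states))
               for d in range(dim)]
    return context, alphas

# Toy 4-dim Bi-LSTM outputs for a 3-token sentence (hypothetical values)
H = [[0.1, 0.3, -0.2, 0.5], [0.7, -0.1, 0.4, 0.0], [0.2, 0.2, 0.2, 0.2]]
v = [1.0, 0.5, -0.5, 0.25]
context, alphas = attention_pool(H, v)
```

In the full model, `score_vector` would be a learned parameter and `context` would feed a final classification layer; this sketch only shows the weighting mechanics.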
Procedia PDF Downloads 134
1705 Experimental and Theoretical Characterization of Supramolecular Complexes between 7-(Diethylamino)Quinoline-2(1H)-One and Cucurbit[7]uril
Authors: Kevin A. Droguett, Edwin G. Pérez, Denis Fuentealba, Margarita E. Aliaga, Angélica M. Fierro
Abstract:
Supramolecular chemistry is a field of growing interest. Moreover, studying the formation of host-guest complexes between macrocycles and dyes is highly attractive due to their potential applications, such as drug delivery, catalysis, and sensing, among others. Among the dyes of interest in the literature are the quinolinone derivatives. These molecules have good optical properties and chemical and thermal stability, making them suitable for developing fluorescent probes. Among the many macrocycles described in the literature are the cucurbiturils, a family of water-soluble macromolecules with a hydrophobic cavity and two identical carbonyl portals. Additionally, thermodynamic analysis of such supramolecular systems can help in understanding the affinity between host and guest, their interactions, and the main stabilization energy of the complex. In this work, two 7-(diethylamino)quinoline-2(1H)-one derivatives (QD1-2) and their interaction with cucurbit[7]uril (CB[7]) were studied from experimental and in silico points of view. Experimentally, the complexes showed a 1:1 stoichiometry by HRMS-ESI and isothermal titration calorimetry (ITC). Inclusion of the derivatives in the macrocycle leads to an increase in fluorescence intensity, while the pKa value of QD1-2 exhibits almost no variation after formation of the complex. The thermodynamics of the inclusion complexes was investigated using ITC; the results demonstrate a non-classical hydrophobic effect with a minimal contribution from the entropy term and binding constants on the order of 10⁶ for both ligands. Additionally, molecular dynamics studies were carried out over 300 ns in explicit solvent at NTP conditions. Our findings show that the complex remains stable during the simulation (RMSD ~1 Å) and that hydrogen bonds contribute to the stabilization of the systems.
Finally, thermodynamic parameters from MMPBSA calculations were obtained to generate new computational insights to compare with the experimental results.
Keywords: host-guest complexes, molecular dynamics, quinolin-2(1H)-one derivative dyes, thermodynamics
Procedia PDF Downloads 95
1704 Transformation of the Institutionality of International Cooperation in Ecuador from 2007 to 2017: A Case of State Identity Affirmation through Role Performance
Authors: Natalia Carolina Encalada Castillo
Abstract:
As part of an intended radical policy change compared to former administrations in Ecuador, the transformation of the institutionality of international cooperation during the period of President Rafael Correa was considered as a key element for the construction of the state of 'Good Living'. This intention led to several regulatory changes in the reception of cooperation for development, and even the departure of some foreign cooperation agencies. Moreover, Ecuador launched the initiative to become a donor of cooperation towards other developing countries through the ‘South-South Cooperation’ approach. All these changes were institutionalized through the Ecuadorian System of International Cooperation as a new framework to establish rules and policies that guarantee a sovereign management of foreign aid. Therefore, this research project has been guided by two questions: What were the factors that motivated the transformation of the institutionality of international cooperation in Ecuador from 2007 to 2017? and, what were the implications of this transformation in terms of the international role of the country? This paper seeks to answer these questions through Role Theory within a Constructivist meta-theoretical perspective, considering that in this case, changes at the institutional level in the field of cooperation, responded not only to material motivations but also to interests built on the basis of a specific state identity. The latter was only possible to affirm through specific roles such as ‘sovereign recipient of cooperation’ as well as ‘donor of international cooperation’. However, the performance of these roles was problematic as they were not easily accepted by the other actors in the international arena or in the domestic level. In terms of methodology, these dynamics are analyzed in a qualitative way mainly through interpretive analysis of the discourse of high-level decision-makers from Ecuador and other cooperation actors. 
Complementary to this, document-based research of relevant information as well as interviews have been conducted. Finally, it is concluded that even though material factors such as infrastructure needs, trade and investment interests, and reinforcement of state control and monitoring of cooperation flows motivated the institutional transformation of international cooperation in Ecuador, the essential basis of these changes was the search for a new identity for the country to project in the international arena. This identity has started to be built but remains unstable. Therefore, it is important to build on the achievements of the new international cooperation policies and review their weaknesses, so that the non-reimbursable cooperation funds received, as well as ‘South-South cooperation’ actions, contribute effectively to national objectives.
Keywords: Ecuador, international cooperation, Role Theory, state identity
Procedia PDF Downloads 220
1703 The Dressing Field Method of Gauge Symmetries Reduction: Presentation and Examples
Authors: Jeremy Attard, Jordan François, Serge Lazzarini, Thierry Masson
Abstract:
Gauge theories are the natural background for describing geometrically fundamental interactions using principal and associated fiber bundles as dynamical entities. The central notion of these theories is their local gauge symmetry, implemented by the local action of a Lie group H. There exist several methods used to reduce the symmetry of a gauge theory, like gauge fixing, the bundle reduction theorem, or the spontaneous symmetry breaking mechanism (SSBM). This paper is a presentation of another method of gauge symmetry reduction, distinct from those three. Given a symmetry group H acting on a fiber bundle and its naturally associated fields (Ehresmann (or Cartan) connection, curvature, matter fields, etc.), there sometimes exists a way to erase (in whole or in part) the H-action by just reconfiguring these fields, i.e. by making a mere change of field variables in order to get new (‘composite’) fields on which H (in whole or in part) does not act anymore. Two examples will be discussed: the re-interpretation of the BEHGHK (Higgs) mechanism, on the one hand, and the top-down construction of Tractor and Penrose's Twistor spaces and connections in the framework of conformal Cartan geometry, on the other. They have, of course, nothing to do with each other, but the dressing field method can be applied to both to gain new insight. In the first example, it turns out that the generation of masses in the Standard Model can be separated from the symmetry breaking, the latter being a mere change of field variables, i.e. a dressing. This offers an interpretation in opposition to the one usually found in textbooks. In the second case, the dressing field method applied to conformal Cartan geometry offers a way of understanding the deep geometric nature of the so-called Tractors and Twistors.
The dressing field method, distinct from a gauge transformation (even though it can apparently take the same form), is a systematic way of finding and erasing artificial symmetries of a theory by a mere change of field variables that redistributes the degrees of freedom of the theory.
Keywords: BEHGHK (Higgs) mechanism, conformal gravity, gauge theory, spontaneous symmetry breaking, symmetry reduction, twistors and tractors
Procedia PDF Downloads 241
1702 Aerosol Direct Radiative Forcing Over the Indian Subcontinent: A Comparative Analysis from the Satellite Observation and Radiative Transfer Model
Authors: Shreya Srivastava, Sagnik Dey
Abstract:
Aerosol direct radiative forcing (ADRF) refers to the alteration of the Earth's energy balance by the scattering and absorption of solar radiation by aerosol particles. India experiences substantial ADRF due to high aerosol loading from various sources. The radiative impact of these aerosols depends on their physical characteristics (such as size, shape, and composition) and atmospheric distribution. Quantifying ADRF is crucial for understanding the impact of aerosols on the regional climate and the Earth's radiative budget. In this study, we have taken radiation data from the Clouds and the Earth's Radiant Energy System (CERES, spatial resolution 1°x1°) for 22 years (2000-2021) over the Indian subcontinent. Except for a few locations, the short-wave ADRF exhibits aerosol cooling at the TOA (values ranging from +2.5 W/m² to -22.5 W/m²). Cooling due to aerosols is more pronounced in the absence of clouds. Being an aerosol hotspot, the Indo-Gangetic Plain (IGP) shows higher negative ADRF. Aerosol Forcing Efficiency (AFE) shows a decreasing seasonal trend in winter (DJF) over the entire study region and an increasing trend over the IGP and western south India during the post-monsoon season (SON) in clear-sky conditions. Analysing atmospheric heating and AOD trends, we found that the change in atmospheric heating is governed not by aerosol loading alone but also by aerosol composition and/or the aerosol vertical profile. We used Multi-angle Imaging SpectroRadiometer (MISR) Level-2 Version 23 aerosol products to look into aerosol composition. MISR incorporates 74 aerosol mixtures in its retrieval algorithm based on size, shape, and absorbing properties. This aerosol mixture information was used for analysing long-term changes in aerosol composition and the dominant aerosol species corresponding to the aerosol forcing value.
Further, ADRF derived from this method is compared with around 35 studies across India in which a plane-parallel radiative transfer model was used, with model inputs taken from OPAC (Optical Properties of Aerosols and Clouds) utilizing only limited aerosol parameter measurements. The result shows a large overestimation of TOA warming by the latter (i.e., the model-based method).
Keywords: aerosol radiative forcing (ARF), aerosol composition, MISR, CERES, SBDART
Procedia PDF Downloads 60
1701 Teaching Method for a Classroom of Students at Different Language Proficiency Levels: Content and Language Integrated Learning in a Japanese Culture Classroom
Authors: Yukiko Fujiwara
Abstract:
As a language learning methodology, Content and Language Integrated Learning (CLIL) has become increasingly prevalent in Japan. Most CLIL classroom practice and research are conducted in EFL fields. However, much less research has been done in the Japanese language learning setting, so there are still many issues to work out when using CLIL in the Japanese language teaching (JLT) setting, and more research, both practical and academic, is expected. Under such circumstances, this is one of the few classroom-based CLIL research experiments in JLT, and it aims to find an effective course design for a class with students at different proficiency levels. The class, called ‘Japanese culture A’, was offered as an elective class for international exchange students at a Japanese university. The Japanese proficiency level of the class was above Japanese Language Proficiency Test Level N3. Since the CLIL approach places importance on ‘authenticity’, the class was designed around authentic materials and activities, such as books, magazines, a film, a TV show, and a field trip to Kyoto. On the field trip, students experienced making traditional Japanese desserts under the direct guidance of a Japanese artisan. Throughout the course, designated task sheets were used so the teacher could get feedback from each student and grasp the proficiency gap in the class. After reading an article on Japanese culture, students were asked to write down the words they did not understand and what they thought they needed to learn. This helped both students and teachers set learning goals and work toward them together. Using questionnaires and interviews with students, this research examined whether the attempt was effective. Essays the students wrote in class were also analyzed. The results from the students were positive: they were motivated by learning authentic, natural Japanese, and they thrived on setting their own personal goals.
Some students were motivated to learn Japanese by studying the language, and others were motivated by studying the cultural context. Most of them said they learned better this way, by setting their own Japanese language and culture goals. These results will provide teachers with new insights for designing class materials and activities that support students in a multilevel CLIL class.
Keywords: authenticity, CLIL, Japanese language and culture, multilevel class
Procedia PDF Downloads 254
1700 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks
Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain
Abstract:
With increasing demand for high-speed computation, power consumption, heat dissipation, and chip size are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue, requiring reversibility and fault tolerance in computation. Reversible computing is emerging as an alternative to conventional technologies to overcome these problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit loss issue can be solved through a unique input-output mapping, which requires reversibility, while the bit error issue requires fault tolerance in the design. To incorporate reversibility, a number of combinational reversible logic circuits have been developed. However, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we have attempted to incorporate fault tolerance into sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double edge-triggered D flip-flop by making them parity preserving. The importance of this work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and a design of an online testable D flip-flop is proposed for the first time. We hope our work can be extended to building complex reversible sequential circuits.
Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic
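The two properties combined above can be illustrated with the classic Fredkin (controlled-swap) gate, a standard conservative reversible gate: its mapping is a bijection (reversibility), and it never changes the number of 1s, so input and output parity always match. This is a generic sketch, not one of the paper's proposed flip-flop designs:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if control bit c is 1, the two
    target bits are swapped; otherwise they pass through unchanged.
    The gate is its own inverse (reversible) and conservative: the
    number of 1s, and hence the parity, is preserved."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: applying the gate twice recovers the inputs
for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 0)]:
    assert fredkin(*fredkin(*bits)) == bits

# Parity preservation: input and output bit-sums always match
assert sum(fredkin(1, 0, 1)) == sum((1, 0, 1))
```

A parity-preserving circuit built from such gates lets any single bit fault be detected online by comparing the parity of the inputs against the parity of the outputs.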
Procedia PDF Downloads 548
1699 The Red Persian Carpet: Iran as Semi-Periphery in China's Belt and Road Initiative-Bound World-System
Authors: Toufic Sarieddine
Abstract:
As the Belt and Road Initiative (henceforth, BRI) enters its ninth year, Iran and China are forging stronger ties on economic and military fronts, a development which has not only caused alarm in Washington but also risks straining China's relationships with the oil-rich Gulf monarchies. World-systems theory has been used to examine the impact of the BRI on the current world order, with scholarship split on the capacity of China to emerge as a hegemon contending with the US or even usurping it. This paper argues for the emergence of a new China-centered world-system comprising states/areas and processes participating in the BRI and overlapping with the global world-system under (shaky) US hegemony. This world-system centers on China as core and hegemon via economic domination, capable new institutions (the Shanghai Cooperation Organisation), legal modi operandi, the common goal of infrastructure development to rally support among developing states, and other indicators of hegemony outlined in world-systems theory. In this regard, while states like Pakistan could become peripheries to China in the BRI-bound world-system via large-scale projects such as the China-Pakistan Economic Corridor, Iran has greater capacities and influence in the Middle East, making it superior to a periphery. This paper thus argues that the increasing proximity between Iran and China sees the former becoming a semi-periphery with respect to China within the BRI-bound world-system, having economic dependence on its new core and hegemon while simultaneously wielding political and military influence over weaker states such as Iraq, Lebanon, Yemen, and Syria.
The indicators of peripheralization as well as the characteristics of a semi-periphery outlined in world-systems theory are used to examine the current economic, political, and military dimensions of Iran and China's growing relationship, as well as the trajectory of these dimensions as part of the BRI-bound world-system.
Keywords: belt and road initiative, China, China-Middle East relations, Iran, world-systems analysis
Procedia PDF Downloads 159
1698 Process Modeling in an Aeronautics Context
Authors: Sophie Lemoussu, Jean-Charles Chaudemar, Robertus A. Vingerhoeds
Abstract:
Many innovative projects exist in the field of aeronautics, each addressing specific areas so as to reduce weight, increase autonomy, reduce CO2 emissions, etc. In many cases, such innovative developments are carried out by very small enterprises (VSEs) or small and medium-sized enterprises (SMEs). A good example concerns airships, which are being studied as a real alternative for passenger and cargo transportation. Today, no international regulations propose a precise and sufficiently detailed framework for the development and certification of airships. The absence of such a regulatory framework requires very close contact with regulatory instances. However, VSEs/SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses an additional challenge for those VSEs/SMEs, in particular those that have system integration responsibilities and must provide all the necessary evidence to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The main objective of this research is to provide a methodological framework enabling VSEs/SMEs with limited resources to organize the development of airships while taking into account the constraints of safety, cost, time, and performance. This paper proposes a contribution to this problem through a Model-Based Systems Engineering approach. Through a comprehensive process modeling approach applied to the development processes, the regulatory constraints, existing best practices, etc., a good image can be obtained of the process landscape that may influence the development of airships. To this effect, not only is the necessary regulatory information taken on board, but other international standards and norms on systems engineering and project management are also modeled and taken into account.
As a next step, the model can be used to analyze the specific situation of given developments, derive critical paths for the development, identify potential conflicts between the norms, standards, and regulatory expectations, or identify those areas where not enough information is available. Once critical paths are known, optimization approaches and decision support techniques can be applied to better support VSEs/SMEs in their innovative developments. This paper reports on the adopted modeling approach, the retained modeling languages, and how they all fit together.
Keywords: aeronautics, certification, process modeling, project management, regulation, SME, systems engineering, VSE
Procedia PDF Downloads 165
1697 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate their feasibility for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: the UK's Control of Substances Hazardous to Health (COSHH), Europe's Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), the Netherlands' Dose-Related Effect Assessment Model (DREAM), the Netherlands' Stoffenmanager (STOFFEN), Nicaragua's Dermal Exposure Ranking Method (DERM), and the USA/Canada Public Health Engineering Department (PHED) tool. Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of each dermal exposure assessment model. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to discriminate the strength of its decision factors because the results for all evaluated industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency have a positive correlation. There is a positive correlation between skin exposure, relative working time, and working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation.
We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models showed poor correlations, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models perform well in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
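For readers unfamiliar with the verification step, the Pearson correlation coefficient used to compare semi-quantitative model scores against quantitative exposure estimates can be computed as in the sketch below; the two data series are hypothetical, not the study's actual measurements.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: semi-quantitative model scores vs. quantitative estimates
model_scores = [2.1, 3.4, 1.8, 4.0, 2.9]
measured = [0.8, 1.5, 0.6, 1.9, 1.2]
r = pearson_r(model_scores, measured)  # close to +1 for a near-linear relation
```

A coefficient near 0.9, as reported for DERM and RISKOFDERM, indicates that the semi-quantitative ranking tracks the quantitative estimate almost linearly.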
Procedia PDF Downloads 173
1696 Hierarchy and Weight of Influence Factors on Labor Productivity in the Construction Industry of Nepal
Authors: Shraddha Palikhe, Sunkuk Kim
Abstract:
The construction industry is the most labor-intensive industry in Nepal. Construction is a major sector, and any productivity enhancement activity in it will have a positive impact on the overall national economy. Previous studies have found that Nepal has poor labor productivity compared with other South Asian countries. Though considerable research has been done on productivity factors in other countries, no study has addressed labor productivity issues in Nepal. Therefore, the main objective of this study is to identify and rank the factors behind poor labor productivity. A questionnaire survey of thirty experts involved in the construction industry, such as architects, civil engineers, project engineers, and site engineers, was conducted in Nepal to identify the major factors impacting construction labor productivity. The Analytic Hierarchy Process (AHP) was used to understand the underlying relationships among the factors, which were categorized into five groups, namely (1) labor management, (2) material management, (3) human labor, (4) technological, and (5) external, and subdivided into 33 subfactors. The AHP establishes the relative importance of the criteria through pairwise comparisons between hierarchy elements grouped by labor productivity decision criteria. Respondents were asked to answer based on their experience of construction works. From the responses, the weights of all the factors were calculated and the factors were ranked; the AHP results were tabulated by weight and rank. The AHP model consists of five main criteria and 33 sub-criteria. Among the five main criteria, the highest weight, i.e.
26.15%, goes to the human labor group, followed by 23.01% for the technological group, 22.97% for the labor management group, 17.61% for the material management group, and 10.25% for the external group. Among the 33 sub-criteria, the most influential factors for poor productivity in Nepal are lack of monetary incentive (20.53%) in the human labor group, unsafe working conditions (17.55%) in the technological group, lack of leadership (18.43%) in the labor management group, unavailability of tools on site (25.03%) in the material management group, and strikes (35.01%) in the external group. The results show that the criteria of the AHP model are helpful for assessing the current state of labor productivity. It is essential to consider these influence factors to improve labor productivity in the construction industry of Nepal.
Keywords: construction, hierarchical analysis, influence factors, labor productivity
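As an aside, the column-normalization shortcut commonly used to derive AHP priority weights from a pairwise comparison matrix can be sketched as follows. The 3x3 matrix and its Saaty-scale values are purely illustrative, not the study's actual 5-criteria, 33-subfactor data.

```python
def ahp_weights(M):
    """Approximate AHP priority weights: normalize each column of the
    pairwise comparison matrix, then average across each row (the common
    column-normalization approximation to the principal eigenvector)."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    norm = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]

# Hypothetical 3x3 pairwise matrix (Saaty's 1-9 scale) for three factor groups
M = [
    [1.0, 3.0, 5.0],    # group A judged 3x as important as B, 5x as C
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(M)  # weights sum to 1; a larger weight means more influence
```

In a full AHP study, a consistency ratio would also be computed to check that the expert judgments are not self-contradictory before the weights are accepted.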
Procedia PDF Downloads 406
1695 Determinants of Household Food Security in Addis Ababa City Administration
Authors: Estibe Dagne Mekonnen
Abstract:
In recent years, the prevalence of undernourishment was 30 percent for sub-Saharan Africa, compared with 16 percent for Asia and the Pacific (Ali, 2011). In Ethiopia, almost 40 percent of the total population, and 57 percent of the population of Addis Ababa, lives below the international poverty line of US$1.25 per day (UNICEF, 2009). This study aims to analyze the determinants of household food security in the Addis Ababa city administration. Primary data were collected in 2022 from a survey of 256 households in the selected sub-cities, namely Addis Ketema, Arada, and Kolfe Keranio. Both purposive and multi-stage cluster random sampling procedures were employed to select study areas and respondents. Descriptive statistics and an ordered logistic regression model were used to test the formulated hypotheses. The results reveal that of the sampled households, 25% were food secure, 13% were mildly food insecure, 26% were moderately food insecure, and 36% were severely food insecure. The study indicates that household family size, house ownership, household income, household food source, household asset possession, household awareness of inflation, household access to social protection programs, household access to credit and saving, and household access to training and supervision on food security all have a positive and significant effect on the likelihood of household food security. However, the marital status of the household head, the employment sector of the household head, the dependency ratio, and the household's non-food expenditure have a negative and significant influence on household food security status. The study finally suggests that the government, in collaboration with financial institutions and NGOs, should work to sustain household food security by creating awareness, providing credit, facilitating rural-urban linkages between producers and consumers, and improving urban infrastructure.
Moreover, the government should also work closely with and monitor consumer goods suppliers and, if possible, find ways to subsidize consumable goods for the most insecure households to help them become food secure. Last but not least, keeping the country at peace will play a crucial role in sustaining food security.
Keywords: determinants, household, food security, ordered logit model, Addis Ababa
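For illustration, an ordered logit model of the kind used above turns a household's linear predictor and a set of increasing cutpoints into probabilities over the ordered food security categories. The sketch below is a minimal hand-rolled version; the cutpoints and predictor value are invented, not estimated from the study's survey data.

```python
import math

def logistic(z):
    """Standard logistic CDF."""
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(xb, cuts):
    """Category probabilities of an ordered logit model.
    xb   : linear predictor (sum of coefficients times covariates)
    cuts : increasing cutpoints tau_1 < ... < tau_{K-1} for K categories.
    P(Y <= k) = logistic(tau_k - xb); category probs are the differences."""
    cum = [logistic(t - xb) for t in cuts] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical: 4 ordered categories (severely insecure ... food secure),
# with made-up cutpoints and a made-up combined effect of income and assets
cuts = [-1.0, 0.2, 1.5]
xb = 0.8
p = ordered_logit_probs(xb, cuts)  # four probabilities summing to 1
```

Raising xb (e.g., higher income or asset possession) shifts probability mass toward the more food-secure categories, which is how the positive coefficients reported above should be read.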
Procedia PDF Downloads 78
1694 Post Harvest Losses and Food Security in Northeast Nigeria: What Are the Key Challenges and Concrete Solutions?
Authors: Adebola Adedugbe
Abstract:
The challenge of post-harvest losses poses a serious threat to food security in Nigeria, particularly in the north-eastern part, with the country losing about $9 billion annually to post-harvest losses in the sector. Post-harvest loss (PHL) is the quantitative and qualitative loss of food in various post-harvest operations. In Nigeria, PHL has been a major obstacle to food security and improved farmers' incomes; in 2022, the Nigerian government stated that over 30 percent of the food produced by Nigerian farmers perishes after harvest. For many in northeast Nigeria, agriculture is the predominant source of livelihood and income. Persistent communal conflicts, floods, and the decade-old attacks and insurgency of Boko Haram in the region have drastically disrupted farming activities, with farmlands becoming insecure and inaccessible as communities are forced to abandon their ancestral homes. The impact of climate change is also affecting agricultural and fishing activities, leading to shortages of food supplies, acute hunger, and loss of livelihoods. This continues to depress the region's and the country's food production and availability, costing billions of US dollars in income annually in the sector. The root causes of post-harvest losses in crops, livestock, and fisheries include, among others, the lack of modern post-harvest equipment, chemicals, and technologies for combating losses. The 2019 Global Hunger Index showed that Nigeria's situation was progressing from a 'serious' to an 'alarming' level. As part of measures to address the post-harvest losses experienced by farmers, the federal government of Nigeria has concessioned 17 silos with 6,000 metric tonnes of storage space to the private sector to give farmers access to storage facilities. This paper discusses the causes and effects of post-harvest losses, solutions for handling them, and ways to optimize returns on food security in northeast Nigeria.
Keywords: farmers, food security, northeast Nigeria, postharvest loss
Procedia PDF Downloads 78
1693 Interplay of Physical Activity, Hypoglycemia, and Psychological Factors: A Longitudinal Analysis in Diabetic Youth
Authors: Georges Jabbour
Abstract:
Background and aims: This two-year follow-up study explores the long-term sustainability of physical activity (PA) levels in young people with type 1 diabetes (T1D), focusing on the relationship between PA, hypoglycemia, and behavioral scores. The literature highlights the importance of PA and its health benefits, as well as the barriers to engaging in PA. Studies have shown that individuals with high levels of vigorous physical activity (VPA) have higher fear of hypoglycemia (FOH) scores and more hypoglycemia episodes. Considering that hypoglycemia episodes are a major barrier to physical activity, and that many studies have reported a negative association between PA and high FOH scores, it cannot be guaranteed that those experiencing hypoglycemia over a long period will remain active. Building on that, the present work assesses whether high PA levels can be maintained over time despite an elevated hypoglycemia risk. The study tracks PA levels at one and two years, correlating them with hypoglycemia episodes and FOH scores. Materials and methods: A self-administered questionnaire was completed by 61 youth with T1D, and their PA was assessed. Hypoglycemia episodes, FOH scores, and HbA1c levels were collected. All assessments were performed at baseline (visit 0: V0), one year later (V1), and two years later (V2). For the purposes of the present work, we explore the relationships between PA levels, hypoglycemia episodes, and FOH scores at each time point, using multiple linear regression to model the mean outcomes for each exposure of interest. Results: Findings indicate no changes in total moderate-to-vigorous PA (MVPA) and VPA levels among visits, and HbA1c (%) was negatively correlated with the total amount of VPA per day in minutes (β = -0.44, p=0.01; β = -0.37, p=0.04; and β = -0.66, p=0.01 for V0, V1, and V2, respectively).
Our linear regression model showed a significant negative correlation between VPA and FOH across the visits (β = -0.59, p=0.01; β = -0.44, p=0.01; and β = -0.34, p=0.03 for V0, V1, and V2, respectively), and HbA1c (%) was influenced by both the number of hypoglycemic episodes and the FOH score at V2 (β = 0.48, p=0.02 and β = 0.38, p=0.03, respectively). Conclusion: The sustainability of PA levels and HbA1c (%) in young individuals with type 1 diabetes is influenced by various factors, including fear of hypoglycemia. Understanding these complex interactions is essential for developing effective interventions to promote sustained PA levels in this population. Our results underline the necessity of a multi-strategy approach to promoting active lifestyles among diabetic youths, one that synergizes PA enhancement with vigilant glucose monitoring and effective FOH management.
Keywords: physical activity, hypoglycemia, fear of hypoglycemia, youth
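As a minimal sketch of the least-squares machinery behind such β coefficients, the single-predictor fit below shows how a negative slope arises when an outcome falls as the exposure rises. The study used multiple linear regression on real measurements; the variable names and numbers here are invented purely for illustration.

```python
def ols_simple(x, y):
    """Ordinary least squares fit y = a + b*x for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope: change in y per unit of x
    a = my - b * mx        # intercept
    return a, b

# Hypothetical data: daily vigorous PA (minutes) vs. HbA1c (%)
vpa = [10, 20, 30, 40, 50]
hba1c = [8.1, 7.8, 7.4, 7.1, 6.9]
a, b = ols_simple(vpa, hba1c)  # b < 0 mirrors the negative association reported
```

In the multivariable setting, each β is the analogous slope for one exposure with the others held fixed, often reported in standardized form as in the abstract.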
Procedia PDF Downloads 34
1692 Geochemical Evolution of Microgranular Enclaves Hosted in Cambro-Ordovician Kyrdem Granitoids, Meghalaya Plateau, Northeast India
Authors: K. Mohon Singh
Abstract:
Cambro-Ordovician (512.5 ± 8.7 Ma) felsic magmatism in the Kyrdem region of the Meghalaya plateau, herewith referred to as the Kyrdem granitoids (KG), intrudes the low-grade Shillong Group metasediments and the Precambrian basement gneissic complex, forming an oval-shaped plutonic body with its longer axis trending almost N-S. The thermal aureole is poorly developed or covered by alluvium. The KG exhibit a very coarse-grained porphyritic texture with abundant K-feldspar megacrysts (up to 9 cm long) and subordinate amounts of amphibole, biotite, plagioclase, and quartz. The size of the K-feldspar megacrysts increases from the margin (Dwarksuid) to the interior (Kyrdem) of the pluton. Late felsic pulses, as fine-grained granite, leucocratic (aplite), and pegmatite veins, intrude the KG at several places. Grey and pink varieties of KG can be recognized, but the pink colour is the result of post-magmatic fluids, which have not affected the magnetic properties of the KG. The modal composition of the KG corresponds to quartz monzonite, monzogranite, and granodiorite, and the KG have been geochemically characterized as metaluminous (I-type) to peraluminous (S-type) granitoids. The KG are characterized by primary foliations of variable attitude, mostly marked along the margin of the pluton, which lies in proximity to the Tyrsad-Barapani lineament. The KG contain country-rock xenoliths (amphibolite, gneiss, schist, etc.), mostly confined to the margin of the pluton, while microgranular enclaves (ME) are hosted in the porphyritic variety of the KG. The ME are fine- to medium-grained, mesocratic to melanocratic, phenocryst-bearing or phenocryst-free, and rounded to ellipsoidal, showing typical magmatic textures. Mafic and felsic phenocrysts in the ME are partially corroded and dissolved because of their involvement in a magma-mixing event, and thus represent xenocrysts.
Sharp to diffuse contacts of the ME with the host KG, their fine-grained nature, and the presence of acicular apatite in the ME suggest comingling and undercooling of a coeval, semi-solidified ME magma within a partly crystalline felsic host magma. Geochemical features show that the nature of the ME (molar A/CNK = 0.76-1.42) and KG (molar A/CNK = 0.41-1.75) resembles hybrid types formed by mixing of mantle-derived mafic and crust-derived felsic magmas. Major and trace element (including rare earth element) variations of the ME suggest the involvement of combined processes such as magma mixing, mingling, and crystallization differentiation in their evolution, whereas the KG variations appear primarily controlled by fractionation of plagioclase, hornblende, biotite, and accessory phases. Most ME have partially to nearly re-equilibrated chemically with the felsic host KG during the magma mixing and mingling processes.
Keywords: geochemistry, Kyrdem Granitoids, microgranular enclaves, Northeast India
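The molar A/CNK index quoted above is computed from oxide weight percents divided by their molar masses. A minimal sketch follows; the granitoid composition used in the example is hypothetical, and no apatite correction of CaO is applied.

```python
# Molar masses (g/mol) of the oxides entering the A/CNK index
M_AL2O3, M_CAO, M_NA2O, M_K2O = 101.96, 56.08, 61.98, 94.20

def a_cnk(al2o3, cao, na2o, k2o):
    """Molar A/CNK = mol Al2O3 / (mol CaO + mol Na2O + mol K2O),
    computed from oxide weight percents."""
    return (al2o3 / M_AL2O3) / (cao / M_CAO + na2o / M_NA2O + k2o / M_K2O)

# Hypothetical granitoid composition (wt%) purely for illustration
ratio = a_cnk(al2o3=14.5, cao=2.0, na2o=3.2, k2o=4.5)
# ratio > 1.0 is conventionally read as peraluminous, < 1.0 as metaluminous
```

Ranges such as the ME's 0.76-1.42 thus straddle the metaluminous-peraluminous boundary, consistent with the hybrid origin argued for in the abstract.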
Procedia PDF Downloads 122
1691 Classification of Emotions in Emergency Call Center Conversations
Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko
Abstract:
A study of the emotions expressed in emergency phone calls is presented, covering both statistical analysis of emotion configurations and an attempt to classify emotions automatically. An emergency call is a situation usually accompanied by intense, authentic emotions, which influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and psychologically exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3,300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative, or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy), and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. We concluded that basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing statistical analysis, we distinguished four main types of emotional behavior among callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistake or neutral request for information (calm, surprise, sometimes with amusement), and pretension/insisting (anger, fury). The frequencies of these profiles were 51%, 21%, 18%, and 8% of recordings, respectively. A model presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the acoustic analysis stage, a set of prosodic parameters as well as Mel-Frequency Cepstral Coefficients (MFCC) were used. Using these parameters, complex emotional states were modeled with machine learning techniques including Gaussian mixture models, decision trees, and discriminant analysis.
The results of classification with several methods will be presented and compared with state-of-the-art results obtained for the classification of basic emotions. Future work will include optimization of the algorithm to run in real time in order to track changes of emotion during a conversation.
Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning
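As a toy stand-in for the Gaussian-model classification described above, the sketch below fits one diagonal Gaussian per class over acoustic-style features and assigns a new sample to the class with the highest log-likelihood. The features, class labels, and numbers are invented for illustration; the study used full Gaussian mixture models over prosodic and MFCC features.

```python
import math

def fit_gaussians(samples_by_class):
    """Per-class per-feature mean and variance (diagonal Gaussian model,
    a single-component stand-in for a Gaussian mixture model)."""
    params = {}
    for label, rows in samples_by_class.items():
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        vars_ = [max(sum((r[j] - means[j]) ** 2 for r in rows) / n, 1e-6)
                 for j in range(d)]
        params[label] = (means, vars_)
    return params

def classify(x, params):
    """Pick the class with the highest Gaussian log-likelihood."""
    def loglik(means, vars_):
        return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
                   for xi, m, v in zip(x, means, vars_))
    return max(params, key=lambda c: loglik(*params[c]))

# Hypothetical 2-D features (e.g., mean pitch in Hz, energy) per emotion class
train = {
    "calm": [[120, 0.20], [125, 0.25], [118, 0.22]],
    "anger": [[220, 0.80], [210, 0.75], [230, 0.85]],
}
model = fit_gaussians(train)
label = classify([215, 0.78], model)
```

A real mixture model would use several Gaussian components per class and far higher-dimensional feature vectors, but the decision rule is the same likelihood comparison.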
Procedia PDF Downloads 401
1690 Statecraft: Building a Hindu Nationalist Intellectual Ecosystem in India
Authors: Anuradha Sajjanhar
Abstract:
The rise of authoritarian populist regimes has been accompanied by hardened nationalism and heightened divisions between 'us' and 'them'. Political actors reinforce these sentiments through coercion, but also through inciting fear about imagined threats and by transforming public discourse about policy concerns. Extremist ideas can penetrate national policy as newly appointed intellectuals and 'experts' in knowledge-producing institutions, such as government committees, universities, and think tanks, succeed in transforming public discourse. While attacking left and liberal academics, universities, and the press, the current Indian government is building new institutions to lend authority to its particularly rigid, nationalist discourse. This paper examines the building of a Hindu-nationalist intellectual ecosystem in India, interrogating the key role of hyper-nationalist think tanks. While some are explicit about their political and ideological leanings, others claim neutrality and pursue their agenda through coded technocratic language and resonant historical narratives. Their key strategy is to change thinking by normalizing it. Six years before winning the election in 2014, India's Hindu-nationalist party, the BJP, put together its own network of elite policy experts. In a national newspaper, the vice-president of the BJP described this as an intentional shift: from 'being action-oriented to solidifying its ideological underpinnings in a policy framework'. When the BJP came to power in 2014, 'experts' from these think tanks filled key positions in the central government. The BJP has since been circulating dominant ideas of Hindu supremacy through regional parties, grassroots political organisations, and civil society organisations. These think tanks have the authority to articulate and legitimate Hindu nationalism within a credible technocratic policy framework.
This paper is based on ethnography and over 50 interviews in New Delhi, before and after the BJP’s staggering election victory in 2019. It outlines the party’s attempt to take over existing institutions while developing its own cadre of nationalist policy-making professionals.
Keywords: ideology, politics, South Asia, technocracy
Procedia PDF Downloads 124
1689 Translanguaging as a Decolonial Move in South African Bilingual Classrooms
Authors: Malephole Philomena Sefotho
Abstract:
Nowadays, the majority of people worldwide are bilingual rather than monolingual due to the surge of globalisation and mobility. Consequently, bilingual education is a topical issue of discussion among researchers. Several studies that have focussed on it have highlighted the importance of and need for incorporating learners' linguistic repertoires in multilingual classrooms and moving away from the colonial approach, which has a monolingual bias of one language at a time. Researchers have pointed out that a systematic approach involving the concurrent use of languages, not a separation of languages, must be implemented in bilingual classroom settings. Translanguaging emerged as such a systematic approach: it assists learners in making meaning of their world by allowing them to use all their linguistic resources in the classroom. The South African language policy also makes room for the use of diverse languages in bi/multilingual classrooms. This study, therefore, sought to explore how teachers apply translanguaging in bilingual classrooms to incorporate learners' linguistic repertoires. It further establishes teachers' perspectives on the use of more than one language in teaching and learning. The participants in this study were language teachers at bilingual primary schools in Johannesburg, South Africa. Semi-structured interviews were conducted to establish their perceptions of the concurrent use of languages, and a qualitative research design was followed in analysing the data. The findings showed that teachers were reluctant to allow translanguaging in their classrooms even though they realized its importance. Not allowing bilingual learners to use their linguistic repertoires has resulted in learners' negative attitudes towards their languages and contributed to the loss of their identity.
This article thus recommends a drastic shift to decolonised approaches to teaching and learning in multilingual settings, with translanguaging as a decolonial move in which learners are allowed to translanguage freely in their classrooms for better comprehension and meaning-making of concepts and related ideas. It further proposes that continuous conversations be encouraged to bring imminent cultural and linguistic genocide to a halt.
Keywords: bilingualism, decolonisation, linguistic repertoires, translanguaging
Procedia PDF Downloads 185
1688 CICAP: Promising Wound Healing Gel from Bee Products and Medicinal Plants
Authors: Laïd Boukraâ
Abstract:
Complementary and alternative medicine is an inclusive term for treatments, therapies, and modalities that are not accepted as components of mainstream education or practice but are performed on patients by some practitioners. While these treatments and therapies often form part of post-graduate education, study, and writing, they are generally viewed as alternatives or complements to more universally accepted treatments. Ancient civilizations used bee products and medicinal plants, but modern civilization and 'education' have seriously lessened our natural instinctive ability and capability. Although the modern Western establishment tends to relegate apitherapy and aromatherapy to the status of 'folklore' or 'old wives' tales', bee products and medicinal plants contain a vast range of pharmacologically active ingredients, each with its own unique combination and properties, and they are classified in modern herbal medicine according to their spheres of action. Bee products and medicinal plants are natural products well known for their healing properties, and their popularity has been increasing recently as they are widely used in wound healing. Honey not only has antibacterial properties but also chemical properties that may further help the wound healing process. A formulation with honey as its main component was produced as a honey gel. This new formulation has an enhanced texture, is more user-friendly, and consists entirely of natural products. In vitro assays, an animal model study, and clinical trials have shown the effectiveness of LEADERMAX for the treatment of diabetic foot, burns, leg ulcers, and bed sores. This one hundred percent natural product could be the best alternative to conventional products for wound and burn management.
The advantages of the formulation are that it is 100% natural, affordable, and easy to use, with a strong power of absorption; it forms a film that keeps the wound surface dry and will not stick to the wound bed; and it helps relieve wound pain, inflammation, edema, and bruising while improving comfort.
Keywords: bed sores, bee products, burns, diabetic foot, medicinal plants, leg ulcer, wounds
Procedia PDF Downloads 340
1687 Brain-Computer Interfaces That Use Electroencephalography
Authors: Arda Ozkurt, Ozlem Bozkurt
Abstract:
Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method of measuring the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential non-invasive measuring methods. Despite its low spatial resolution (it can only detect when a group of neurons fires at the same time), it is non-invasive and therefore easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded and then used to accomplish the intended task. EEG recordings include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, because it is a non-invasive method, there are sources of noise that may affect the reliability of EEG signals. For instance, noise from the EEG equipment and the leads, as well as signals coming from the subject (such as heart activity or muscle movements), affects the signals detected by the electrodes. New techniques have been developed to differentiate between these signals and the intended ones. Furthermore, an EEG device alone is not enough to analyze data from the brain for BCI applications. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists can use to design BCI devices.
Even though neurological conditions that require highly precise data call for invasive BCIs, non-invasive BCIs such as EEG-based ones are used in many cases to help disabled people or to ease everyday life by assisting with basic tasks. For example, EEG is used to detect an oncoming seizure in epilepsy patients, which can then be prevented with the help of a BCI device. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data that eases people's lives as more BCI techniques are developed in the future.
Keywords: BCI, EEG, non-invasive, spatial resolution
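As a toy illustration of the kind of signal analysis such algorithms start from, the fraction of EEG power in a frequency band (here the 8-13 Hz alpha band) can be estimated with a naive discrete Fourier transform. The one-channel signal below is synthetic; real EEG pipelines use FFTs, windowing, artifact rejection, and many channels.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Fraction of the signal's power lying in [f_lo, f_hi] Hz,
    via a naive O(n^2) DFT over the positive frequencies (DC excluded)."""
    n = len(signal)
    total, band = 0.0, 0.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        f = k * fs / n                 # frequency of bin k in Hz
        total += p
        if f_lo <= f <= f_hi:
            band += p
    return band / total if total else 0.0

# Synthetic 1-second trace: a 10 Hz (alpha) oscillation plus a 40 Hz ripple
fs = 256
x = [math.sin(2 * math.pi * 10 * t / fs)
     + 0.2 * math.sin(2 * math.pi * 40 * t / fs) for t in range(fs)]
alpha_ratio = band_power(x, fs, 8, 13)  # most of the power sits in alpha
```

Band-power features like this are among the simplest inputs fed to the machine learning stages of EEG-based BCIs.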
Procedia PDF Downloads 78
1686 Artificial Habitat Mapping in Adriatic Sea
Authors: Annalisa Gaetani, Anna Nora Tassetti, Gianna Fabi
Abstract:
Hydroacoustic technology is an efficient tool for studying the sea environment: the most recent advances in artificial habitat mapping use acoustic systems to investigate fish abundance, distribution, and behavior in specific areas. Along with detailed, high-coverage bathymetric mapping of the seabed, the high-frequency multibeam echosounder (MBES) offers the potential to detect the fine-scale distribution of fish aggregations, combining its ability to image the seafloor and the water column at the same time. By surveying the distribution of fish schools around artificial structures, MBES makes it possible to evaluate how their presence modifies the natural biological habitat over time in terms of fish attraction and abundance. In recent years, artificial habitat mapping campaigns have been carried out by CNR-ISMAR in the Adriatic Sea: fish assemblages aggregating at offshore gas platforms and artificial reefs have been systematically monitored using different methodologies. This work focuses on two case studies: a gas extraction platform installed at a depth of 80 meters in the central Adriatic Sea, 30 miles off the coast of Ancona, and the concrete-and-steel artificial reef of Senigallia, deployed by CNR-ISMAR about 1.2 miles offshore at a depth of 11.2 m. By relating the MBES data (the dimensions of the fish assemblages, their shape, depth, density, etc.) to results from other methodologies, such as experimental fishing surveys and underwater video cameras, it has been possible to investigate the biological assemblage attracted by the artificial structures, hypothesizing which species populate the investigated area and their spatial arrangement around these structures.
By processing MBES bathymetric and water column data, 3D virtual scenes of the artificial habitats have been created, yielding an intuitive depiction of their state and allowing their change over time to be evaluated in terms of dimensional characteristics and the depth arrangement of fish schools. These MBES surveys play a leading part in the general multi-year programs carried out by CNR-ISMAR with the aim of assessing potential biological changes linked to human activities.
Keywords: artificial habitat mapping, fish assemblages, hydroacoustic technology, multibeam echosounder
Procedia PDF Downloads 261
1685 The Role and Tasks of a Social Worker in the Care of a Terminally Ill Child with Regard to the Malopolska Hospice for Children
Authors: Ewelina Zdebska
Abstract:
A social worker is an integral part of the interdisciplinary team working with a child and family in the terminal stage of illness. Social support is an integral part of the medical procedure in hospice care. It is the basis and prerequisite of full treatment and good care of the child patient, whose illness often arrives at a period of life when his or her personal and legal affairs are not yet settled, and whose family, burdened with the problem, requires the care and support of specialist professionals. The Hospice for Children in Krakow, a palliative care team operating in Krakow and the Malopolska province, provides specialized care for terminally ill children at their place of residence from the moment parents and doctors decide to end treatment in hospital. It enables parents to carry out medical care at home, provides them with social and legal assistance, and offers care, psychological support, and friendship to families throughout the child's illness and after the child's death, for as long as it is needed. The social worker in a hospice does not bear the burden of solving social problems, which is the responsibility of other authorities, but provides the support that is possible and necessary at the moment. The most common form of assistance is providing information on the benefits to which the child and family may be entitled during treatment and the fight for the child's life and health. The social worker assists in preparing and completing documents, such as requests to increase the recognized degree of disability because of progressive disease, or for a care allowance because of the child's inability to live independently. He or she settles issues with the Department of Social Security as well as with the municipal and district disability affairs teams, seeking help and support through multi-faceted childcare.
Contact with social welfare centres also often concerns the organization of additional respite care for the sick child at home, especially when the other members of the family work or when the family cannot cope with the care and needs extra help. The Hospice for Children in Krakow is completing construction of Poland's first respite care centre for chronically and terminally ill children; it will be an open house where children suffering from chronic and incurable diseases and their families can get professional help whenever they need it. The social worker thus plays a very important role in caring for a terminally ill child. His or her presence gives the little patient and the family the opportunity to be together at this difficult time while assistance and support are organized.
Keywords: social worker, care, terminal care, hospice
Procedia PDF Downloads 253
1684 Distribution of Micro Silica Powder in Ready-Mixed Concrete
Authors: Kyong-Ku Yun, Dae-Ae Kim, Kyeo-Re Lee, Kyong Namkung, Seung-Yeon Han
Abstract:
Micro silica is collected as a by-product of silicon and ferrosilicon alloy production in electric arc furnaces using highly pure quartz, wood chips, coke, and the like. It consists of about 85% silicon dioxide and has spherical particles with an average particle size of about 0.15 μm. The bulk density of micro silica varies from 150 to 700 kg/m³, and its fineness ranges from 150,000 to 300,000 cm²/g. The amorphous structure, high silicon oxide content, and large surface area (about 20 m²/g) of micro silica induce an active reaction with the calcium hydroxide (Ca(OH)₂) generated by cement hydration, forming calcium silicate hydrate (C-S-H). Micro silica tends to act as a filler because of its fine particles and spherical shape: the particles do not bind water and fit well into the spaces between the relatively coarse cement grains, although this alone does not make the concrete flow freely. On the contrary, water demand increases, since micro silica particles tend to absorb water because of their large surface area. The overall effect of micro silica depends on the amount added, together with other parameters such as the water-to-(cement + micro silica) ratio and the availability of superplasticizer. This research studied cellular sprayed concrete, a method that converts ready-mixed concrete directly into high-performance concrete at the job site. It can reduce construction costs by adding cellular foam and micro silica into the ready-mixed concrete truck in the field. Micro silica, which is difficult to mix in the field because of its high fineness, can thus be added and dispersed in the concrete, with the surface activity of the cellular foam increasing the fluidity of the ready-mixed concrete. The increased air content converges to a certain level through spraying, and the remixing of powders during spraying produces high-performance concrete.
Since no field mixing equipment is used, construction costs decrease, and placement can proceed after installing a special spray machine on a commercial pump car. The use of special equipment is therefore minimized, providing economic feasibility through the utilization of existing equipment. This study was carried out to establish a highly reliable method of confirming dispersion through high-performance cellular sprayed concrete. A mixture of 25 mm coarse aggregate and river sand was used for the concrete. Silica fume and foam were applied, the dispersion of silica fume as a function of foam content was examined, and the mean and standard deviation were obtained. The coefficient of variation was then calculated to evaluate the dispersion. Comparison and analysis before and after spraying were conducted for foam contents of 21 L and 35 L at silica fume contents of 7% and 14%, respectively. With foam and silica fume as variables, a specimen was cast for each combination, and a sample was taken from each specimen at five days for EDS testing. The paper describes the experimental materials, mix design, test methods, and equipment used to evaluate dispersion as a function of micro silica and foam content.
Keywords: micro silica, distribution, ready mixed concrete, foam
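The dispersion measure described in the abstract (mean, standard deviation, and coefficient of variation of point measurements) can be sketched as follows. This is a minimal illustration, not the study's analysis code; the EDS counts below are hypothetical values, not data from the experiment:

```python
import statistics

def coefficient_of_variation(values):
    """Return the coefficient of variation (sample std dev / mean).

    A lower value indicates more uniform silica fume dispersion
    across the measurement points.
    """
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)  # sample standard deviation (n - 1)
    return stdev / mean

# Hypothetical silicon counts from five EDS points on one specimen
eds_counts = [412, 398, 405, 421, 390]
cv = coefficient_of_variation(eds_counts)
print(f"CV = {cv:.3f}")
```

Comparing this coefficient before and after spraying, for each foam/silica-fume combination, is the comparison the study describes.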
Procedia PDF Downloads 223
1683 AI for Efficient Geothermal Exploration and Utilization
Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson
Abstract:
Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, and geology. Machine learning algorithms can identify subtle patterns and relationships within this data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a SIML (science-informed machine learning) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physics laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.
Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal
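The contrast drawn above, between a plain algebraic NN mapping and a model constrained to satisfy physics, can be sketched as a loss function that adds a penalty for violating a known physical relationship. This is a generic physics-informed illustration, not the GeoTGO implementation; the linear model, the Fourier's-law constraint, and all names are assumptions:

```python
import numpy as np

def model(params, permeability):
    """Plain algebraic mapping: a two-parameter linear model (stand-in for an NN)."""
    return params[0] * permeability + params[1]

def physics_residual(pred_flux, temp_gradient, conductivity=2.5):
    """Residual of Fourier's law q = -k * dT/dx, used as a stand-in physics constraint."""
    return pred_flux - (-conductivity * temp_gradient)

def science_informed_loss(params, permeability, observed_flux, temp_gradient, weight=1.0):
    """Data-misfit term plus a weighted penalty for violating the physics constraint."""
    pred = model(params, permeability)
    data_term = np.mean((pred - observed_flux) ** 2)                      # fit the data
    physics_term = np.mean(physics_residual(pred, temp_gradient) ** 2)   # satisfy physics
    return data_term + weight * physics_term
```

A model that both fits the observations and satisfies the constraint drives this loss to zero; a purely data-fitted model that violates the physics is penalized, which is the sense in which SIML models "satisfy physics laws and constraints."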
Procedia PDF Downloads 61
1682 Hybrid Reusable Launch Vehicle for Space Application: A Naval Approach
Authors: Rajasekar Elangopandian, Anand Shanmugam
Abstract:
To reduce the cost of launching satellites and payloads to orbit, this project envisages a combination of technologies resting on four concepts. The first is the flight mission profile, which describes how the mission is conducted. The conventional technique of magnetic levitation provides the initial thrust, and, as the name "reusable launch vehicle" indicates, the vehicle is designed for reuse. The flight vehicle consists of a miniature rocket, which produces the required thrust, and two JATO (jet-assisted takeoff) boosters, which give the initial boost. The vehicle resembles an airplane in design and sits on a superconducting rail track. When a high electric current is supplied to the rail track, the vehicle begins to float, following the principle of magnetic levitation. When the vehicle reaches the required takeoff distance, the two boosters ignite, each providing 48 kN of thrust, and the vehicle follows a vertical path to the edge of the atmosphere and into space. As soon as sufficient speed is reached, the two boosters cut off. Once in space, the inbuilt spacecraft places the satellite in the desired orbit. When its work is finished, apogee motors give the vehicle an initial kick of 22 N of thrust to re-enter the Earth's atmosphere, after which it descends in free fall under gravity. After the flying region it enters a spiral flight mode and lands where the superconducting levitated rail track is located; the track catches the vehicle and holds it by reversing the magnet poles and varying the current. The initial cost of building this vehicle may be high, but with frequent use it would reduce the launch cost to half that of present-day technology.
The incorporation of such a mechanism makes the vehicle "hybrid", and its reusability makes it a "reusable launch vehicle"; together, a hybrid reusable launch vehicle.
Keywords: JATO (jet-assisted takeoff) boosters, magnetic levitation, reusable launch vehicle
Procedia PDF Downloads 431
1681 Multi-Residue Analysis (GC-ECD) of Some Organochlorine Pesticides in Commercial Broiler Meat Marketed in Shivamogga City, Karnataka State, India
Authors: L. V. Lokesha, Jagadeesh S. Sanganal, Yogesh S. Gowda, Shekhar, N. B. Shridhar, N. Prakash, Prashantkumar Waghe, H. D. Narayanaswamy, Girish V. Kumar
Abstract:
Organochlorine (OC) insecticides are among the most important organotoxins and constitute a large group of pesticides. The physicochemical properties of these toxins, especially their lipophilicity, facilitate their absorption and storage in meat, so they pose a public health threat to humans. The presence of these toxins in broiler meat can serve as a quantitative and qualitative index of their presence in animal bodies; wastewater used for irrigation after crop spraying, animal feeds contaminated with pesticides, and polluted air are the potential sources of residues in animal products. Fifty broiler meat samples were collected from different retail outlets of Bengaluru city, Karnataka state, under ice-cold conditions and stored at -20°C until analysis. All samples were screened with a gas chromatograph attached to an electron capture detector (GC-ECD, Varian) for quantification of the OC pesticides Alachlor, Aldrin, Alpha-BHC, Beta-BHC, Dieldrin, Delta-BHC, o,p-DDE, p,p-DDE, o,p-DDD, p,p-DDD, o,p-DDT, p,p-DDT, Endosulfan-I, Endosulfan-II, Endosulfan sulphate, and Lindane (all standards procured from Merck). For extraction, fifty grams (g) of meat sample were blended with 50 g of anhydrous sodium sulphate, 120 ml of n-hexane, and 120 ml of acetone for 15 minutes; the extract was washed with distilled water and dried with anhydrous sodium sulphate; partitioning was done with 25 ml of petroleum ether, 10 ml of acetonitrile, and 15 ml of n-hexane, shaken vigorously for two minutes; and sample clean-up was done on a Florisil column. The reconstituted samples (in n-hexane, Merck) were injected into the GC-ECD. The present study revealed that, among the fifty chicken samples analysed, 60% (15/50), 32% (8/50), 28% (7/50), 20% (5/50), and 16% (4/50) of samples were contaminated with DDTs, Delta-BHC, Dieldrin, Aldrin, and Alachlor, respectively.
DDT metabolites and Delta-BHC were the most frequently detected OC pesticides. The detected levels of the pesticides were below the MRLs (according to the Export Council of India notification for fresh poultry meat).
Keywords: accuracy, gas chromatography, meat, pesticide, petroleum ether
Procedia PDF Downloads 330
1680 Evaluation of Sustained Improvement in Trauma Education Approaches for the College of Emergency Nursing Australasia Trauma Nursing Program
Authors: Pauline Calleja, Brooke Alexander
Abstract:
In 2010 the College of Emergency Nursing Australasia (CENA) undertook sole administration of the Trauma Nursing Program (TNP) across Australia. The original TNP was developed from recommendations by the Review of Trauma and Emergency Services, Victoria. While participant and faculty feedback about the program was positive, issues were identified that are common to industry training programs in Australia. These included didactic approaches, with many lectures and little interaction or activity for participants. The teaching and learning principles underpinning the course did not necessarily encourage deep learning, so participants described having to learn by rote and gaining only a surface understanding of principles that were not always applied to their working context. In Australia, a trauma or emergency nurse may work in variable contexts that affect practice, especially where resources influence the scope and capacity of hospitals to provide trauma care. In 2011, a program review was undertaken, resulting in major changes to the curriculum and to the teaching, learning, and assessment approaches. The aim was to improve learning, with a greater emphasis on pre-program preparation for participants, the learning environment, and the clinically applicable, contextualized outcomes participants experienced. Previously, participants who wished to undertake assessment were given a take-home examination, which had poor uptake and return and provided no rigour since it was not invigilated. A new assessment structure was enacted, with an invigilated examination during course hours. These changes were implemented in early 2012 with great improvement in both faculty and participant satisfaction. This presentation reports on a comparison of participant evaluations collected from courses in 2012, post-implementation, and in 2015, to evaluate whether the positive changes were sustained.
Methods: Descriptive statistics were applied in analyzing the evaluations. Since all questions had more than 20% of cells with a count of <5, Fisher's exact test was used to identify significance (p < 0.05) between groups. Results: A total of fourteen group evaluations were included in this analysis: seven CENA TNP groups from 2012 and seven from 2015 (randomly chosen). A total of 173 participant evaluations were collated (n = 81 from 2012 and n = 92 from 2015). All course evaluations were anonymous, and nine of the original 14 questions were applicable for this evaluation. All questions were rated by participants on a five-point Likert scale. While all items showed improvement from 2012 to 2015, significant improvement was noted in two items: the content being delivered in a way that met participant learning needs, and satisfaction with the length and pace of the program. Evaluation of written comments supports these results. Discussion: The aim of redeveloping the CENA TNP was to improve learning and satisfaction for participants. These results demonstrate that the initial improvements of 2012 were maintained and, in two essential areas, significantly improved upon. Changes that increased participant engagement, support, and the contextualization of course materials were essential to the evolution of the CENA TNP.
Keywords: emergency nursing education, industry training programs, teaching and learning, trauma education
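The test choice described above (Fisher's exact test when more than 20% of cells have counts below 5) can be sketched for a single 2x2 item. This is a self-contained stand-library illustration, not the study's analysis code, and the counts are hypothetical rather than the study's data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def hypergeom(x):
        # Probability of x in cell (0,0) given fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hypergeom(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # Small tolerance guards against floating-point ties
    return sum(p for x in range(lo, hi + 1)
               if (p := hypergeom(x)) <= p_obs * (1 + 1e-9))

# Hypothetical agree/disagree counts for one item in the 2012 and 2015 cohorts
print(round(fisher_exact_two_sided(3, 1, 1, 3), 4))  # → 0.4857
```

With such small cell counts a chi-squared test would be unreliable, which is why the exact test is the appropriate choice here.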
Procedia PDF Downloads 276