Search results for: field change notice
1093 Status of Vocational Education and Training in India: Policies and Practices
Authors: Vineeta Sirohi
Abstract:
The development of critical skills and competencies is imperative for young people to cope with the unpredictable challenges of the time and to prepare for work and life. Recognizing that education has a critical role in reaching sustainability goals, as emphasized by the 2030 Agenda for Sustainable Development, educating youth in global competence, meta-cognitive competencies, and skills from the initial stages of formal education is vital. Further, educating for global competence would help in developing work readiness and boost employability. Vocational education and training in India, as envisaged in various policy documents, remains marginalized in practice as compared to general education. The country is still far away from the national policy goal of tracking 25% of secondary students in grades eleven and twelve into the vocational stream. In recent years, the importance of skill development has been recognized in the present context of globalization and change in the demographic structure of the Indian population. As a result, it has become a national policy priority and has been taken up with renewed focus by the government, which has set the target of skilling 500 million people by 2022. This paper provides an overview of the policies, practices, and current status of vocational education and training in India, supported by statistics from the National Sample Survey, the official statistics of India. The national policy documents and annual reports of the organizations actively involved in vocational education and training have also been examined to capture relevant data and information. The paper also highlights major initiatives taken by the government to promote skill development. The data indicate that in the age group 15-59 years, only 2.2 percent reported having received formal vocational training and 8.6 percent non-formal vocational training, whereas 88.3 percent did not receive any vocational training. At present, the coverage of vocational education is abysmal, as less than 5 percent of students are covered by the vocational education programme. Besides launching various schemes to address the mismatch of skills supply and demand, the government, through its National Policy on Skill Development and Entrepreneurship 2015, proposes to bring about inclusivity by bridging the gender, social, and sectoral divide, ensuring that the skilling needs of socially disadvantaged and marginalized groups are appropriately addressed. It is fundamental that the curriculum is aligned with the demands of the labor market, incorporating more entrepreneurial skills. Creating non-farm employment opportunities for educated youth will be a challenge for the country in the near future. Hence, there is a need to formulate specific skill development programs for this sector, as well as programs for upgrading skills to enhance employability. There is also a need to promote female participation in work and in non-traditional courses. Moreover, rigorous research and the development of a robust information base for skills are required to inform policy decisions on vocational education and training.
Keywords: policy, skill, training, vocational education
Procedia PDF Downloads 153
1092 Digital Value Co-Creation: The Case of WORTHY, a Virtual Collaborative Museum across Europe
Authors: Camilla Marini, Deborah Agostino
Abstract:
Cultural institutions provide more than service-based offers; indeed, they are experience-based contexts. A cultural experience is a special event that encompasses a wide range of values which, for visitors, are primarily cultural rather than economic and financial. Cultural institutions have always been characterized by inclusivity and participatory practices, but the advent of digital technologies has sharpened their interest in collaborative practices and in the relationship with their audience. Indeed, digital technologies have deeply affected the cultural experience as it was conceived. Museums especially, as traditional and authoritative cultural institutions, have been highly challenged by digital technologies. They shifted from a collection-oriented toward a visitor-centered approach, and digital technologies generated a highly interactive ecosystem in which visitors have an active role, shaping their own cultural experience. Most of the studies that investigate value co-creation in museums adopt a single perspective, either that of the museum or that of the users, while the analysis of the convergence/divergence of these perspectives has yet to be addressed. Additionally, many contributions focus on digital value co-creation as an outcome rather than as a process. The study aims to provide a joint perspective on digital value co-creation which includes both the museum and its visitors. It also deepens the analysis of the contribution of digital technologies to the value co-creation process, addressing the following research questions: (i) what are the convergence/divergence drivers of digital value co-creation, and (ii) how can digital technologies be means of value co-creation? The study adopts an action research methodology based on the case of WORTHY, an educational project which involves cultural institutions and schools all around Europe, creating a virtual collaborative museum. It represents a valuable case for the aim of the study since it has digital technologies at its core, and interaction through digital technologies is fundamental throughout the experience. Action research has been identified as the most appropriate methodology for researchers to have direct contact with the field. Data have been collected through primary and secondary sources. Cultural mediators such as museums, teachers and students' families have been interviewed, while a focus group was designed to interact with students, investigating all the aspects of the cultural experience. Secondary sources encompassed project reports and website contents in order to deepen the perspective of cultural institutions. Preliminary findings highlight the dimensions of digital value co-creation in cultural institutions from an integrated museum-visitor perspective, and the contribution of digital technologies to the value co-creation process. The study outlines a twofold contribution that encompasses both an academic and a practitioner level. Indeed, it contributes to filling the gap in the cultural management literature about the convergence/divergence of service provider-user perspectives, but it also provides cultural professionals with guidelines on how to evaluate the digital value co-creation process.
Keywords: co-creation, digital technologies, museum, value
Procedia PDF Downloads 147
1091 Cytotoxicity and Genotoxicity of Glyphosate and Its Two Impurities in Human Peripheral Blood Mononuclear Cells
Authors: Marta Kwiatkowska, Paweł Jarosiewicz, Bożena Bukowska
Abstract:
Glyphosate (N-phosphonomethylglycine) is a non-selective, broad-spectrum ingredient in herbicides (e.g., Roundup) used for over 35 years for the protection of agricultural and horticultural crops. Glyphosate was believed to be environmentally friendly, but recently a large body of evidence has revealed that glyphosate can negatively affect the environment and humans. It has been found that glyphosate is present in soil and groundwater. It can also enter the human body, where it occurs in blood at low concentrations of 73.6 ± 28.2 ng/ml. Research on potential genotoxicity and cytotoxicity can be an important element in determining the toxic effect of glyphosate. Under Regulation 1107/2009 of the European Parliament, it is important to assess the genotoxicity and cytotoxicity not only of the parent substance but also of its impurities, which are formed at different stages of production of the major substance, glyphosate; verifying which of these compounds is more toxic is also required. Understanding the molecular pathways of action is extremely important in the context of environmental risk assessment. In 2002, the European Union decided that glyphosate is not genotoxic. Recently, however, studies performed around the world have produced results that contest the decision taken by the European Union committee. In March 2015, the World Health Organization (WHO) decided to change the classification of glyphosate to category 2A, which means that the compound is considered "probably carcinogenic to humans". This category relates to compounds for which there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. That is why we have investigated the genotoxic and cytotoxic effects of the most commonly used pesticide, glyphosate, and its impurities N-(phosphonomethyl)iminodiacetic acid (PMIDA) and bis-(phosphonomethyl)amine on human peripheral blood mononuclear cells (PBMCs), mostly lymphocytes. DNA damage (analysis of DNA strand-breaks) using single cell gel electrophoresis (the comet assay) and ATP level were assessed. Cells were incubated with glyphosate and its impurities PMIDA and bis-(phosphonomethyl)amine at concentrations from 0.01 to 10 mM for 24 hours. Evaluating genotoxicity using the comet assay showed a concentration-dependent increase in DNA damage for all compounds studied. ATP level decreased to zero at the highest concentration of the two investigated impurities, bis-(phosphonomethyl)amine and PMIDA. Changes were observed at the highest concentration to which a person can be exposed as a result of acute intoxication. Our study leads to the conclusion that the investigated compounds exhibited genotoxic and cytotoxic potential, but only at high concentrations to which people are not exposed environmentally. Acknowledgments: This work was supported by the Polish National Science Centre (Contract 2013/11/N/NZ7/00371), MSc Marta Kwiatkowska, project manager.
Keywords: cell viability, DNA damage, glyphosate, impurities, peripheral blood mononuclear cells
Procedia PDF Downloads 482
1090 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gases Emission
Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires
Abstract:
Historically, human development has been based on economic gains associated with energy-intensive activities, which are often exhaustive in the emission of greenhouse gases (GHGs). This requires the establishment of GHG mitigation targets in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is of critical importance to discuss such reductions in an intra-national framework, with the objective of distributional equity, to explore the country's full mitigation potential without compromising the development of less developed societies. This research presents some incipient considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should be initiated, and what their magnitude should be. We started with the methodological assumption that human development and GHG emissions will evolve in the future as they behaved in the past. Furthermore, we assume that once a micro-region becomes developed, it is able to maintain gains in human development without the need to keep growing its GHG emission rates. The human development index and carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when the micro-regions will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they develop and 250 Gt will be released after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting, in 2050, 90% less than the value observed in 2010. On the other hand, less developed micro-regions will be responsible for less impactful reductions; e.g., Vale do Ipanema will emit in 2050 only 10% less than the value observed in 2010. Under this methodological assumption, the country would emit, in 2050, 56.5% less than observed in 2010, so that cumulative emissions between 2011 and 2050 would be reduced by 130 Gt CO2e relative to the initial projection. Associating the magnitude of the reductions with the level of human development of the micro-regions encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlighted the importance of considering heterogeneity when determining individual mitigation targets and also ratified the theoretical and methodological feasibility of allocating a larger share of the contribution to those who have historically emitted more. The proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the adopted reduction target.
Keywords: greenhouse gases, human development, mitigation, intensive energy activities
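As an illustration of the allocation logic described above (larger reductions for more developed micro-regions), the following Python sketch distributes 2050 reduction fractions in proportion to a development index. The index values, region names, and the linear allocation rule are hypothetical simplifications for illustration, not the paper's actual data or calibration.

    # Hypothetical sketch: allocate 2050 emission-reduction fractions in
    # proportion to a human development index (HDI), so that the most
    # developed micro-region cuts 90% and the least developed cuts 10%,
    # mirroring the Sao Paulo / Vale do Ipanema endpoints in the abstract.
    regions = {  # micro-region: (HDI, 2010 emissions in Mt CO2e), illustrative numbers
        "Sao Paulo": (0.85, 120.0),
        "Vale do Ipanema": (0.55, 8.0),
        "Region C": (0.70, 40.0),
    }

    hdis = [hdi for hdi, _ in regions.values()]
    hdi_min, hdi_max = min(hdis), max(hdis)

    def reduction_fraction(hdi, low=0.10, high=0.90):
        """Linear interpolation between the least and most developed regions."""
        return low + (high - low) * (hdi - hdi_min) / (hdi_max - hdi_min)

    for name, (hdi, e2010) in regions.items():
        frac = reduction_fraction(hdi)
        print(f"{name}: reduce {frac:.0%} -> {e2010 * (1 - frac):.1f} Mt CO2e in 2050")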
Procedia PDF Downloads 320
1089 Examining the Investment Behavior of Arab Women in the Stock Market
Authors: Razan Salem
Abstract:
Gender plays a vital role in stock markets because men and women differ in their behavior when investing in stocks. Accordingly, the role of gender differences in investment behavior is an increasingly important strand in the field of behavioral finance research. The investment behaviors of women relative to men have been examined in the behavioral finance literature, mainly for comparison purposes. Women's roles in the stock market have not been examined in the behavioral finance literature, however, particularly with respect to the Arab region. This study aims to contribute towards a better understanding of the investment behavior of Arab women (in regard to their risk tolerance, investment confidence, and investment literacy levels) relative to Arab men, using a sample of Arab women and men investors living in Saudi Arabia and Jordan. In order to achieve the study's main aim, the researcher used non-parametric tests, such as the Mann-Whitney U test, along with frequency distribution analysis to analyze the study's primary data. The researcher distributed close-ended online questionnaires to a sample of 550 Arab male and female individuals investing in stocks in both Saudi Arabia and Jordan. The results confirm that the sampled Arab women invest less in stocks compared to Arab men due to their risk-averse behaviors and limited confidence levels. The results also reveal that, due to their very low investment literacy levels, Arab women fear taking risks and invest in stocks less often than Arab men. Overall, the study's main variables (risk tolerance, investment confidence, and investment literacy levels) have a combined effect on the investment behavior of Arab women and their limited participation in the stock market. Hence, this study is one of the very first to indicate the combined effect of these three variables (which are usually studied separately in the existing literature) on the investment behavior of women, particularly Arab women. This study makes three important contributions to the growing literature on gender differences in investment behavior. First, while the behavioral finance literature documents evidence on gender differences in investment behaviors in many developed countries, there are very limited studies that investigate such differences in Arab countries; Arab women investors are generally absent from the behavioral finance literature, probably due to cultural barriers and data collection difficulties. Thus, this study extends the literature to include Arab women and their investment behaviors when trading stocks relative to Arab men. Second, the study associates women's investment literacy and confidence levels with their financial risk behaviors and participation in the stock market. Third, it provides direct evidence on Arab women's investment behaviors when trading stocks. Overall, studying Arab women investors is important to investigate whether the investment behaviors identified for Western women investors are also found in Arab women investors.
Keywords: Arab women, gender differences, investment behavior, stock markets
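For readers unfamiliar with the non-parametric test named above, a minimal Python sketch of a Mann-Whitney U comparison between two independent groups follows; the variable names and Likert-style data are hypothetical, not the study's sample.

    # Minimal sketch: Mann-Whitney U test comparing risk-tolerance scores
    # (hypothetical Likert responses) of two independent groups.
    from scipy.stats import mannwhitneyu

    women_scores = [2, 3, 2, 4, 1, 3, 2, 2, 3, 1]
    men_scores = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]

    u_stat, p_value = mannwhitneyu(women_scores, men_scores, alternative="two-sided")
    print(f"U = {u_stat}, p = {p_value:.4f}")  # p < 0.05 suggests the groups differ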
Procedia PDF Downloads 181
1088 An Acyclic Zincgermylene: Rapid H₂ Activation
Authors: Martin Juckel
Abstract:
Probably no other field of inorganic chemistry has undergone such rapid development in the past two decades as the low oxidation state chemistry of main group elements. This rapid development has only been possible through the development of new bulky ligands. In the case of our research group, super-bulky monodentate amido ligands and β-diketiminate ligands have been used with great success. We first synthesized the unprecedented magnesium(I) dimer [ᴹᵉˢNacnacMg]₂ (ᴹᵉˢNacnac = [(ᴹᵉˢNCMe)₂CH]-; Mes = mesityl), which has since been used both as a reducing agent and for the synthesis of new metal-magnesium bonds. In the case of the zinc bromide precursor [L*ZnBr] (L* = N(Ar*)(SiPri₃); Ar* = C₆H₂{C(H)Ph₂}₂Me-2,6,4), reduction with [ᴹᵉˢNacnacMg]₂ led to such a metal-magnesium bond. This [L*ZnMg(ᴹᵉˢNacnac)] compound can be seen as an 'inorganic Grignard reagent', which can be used to transfer the metal fragment onto other functional groups or other metal centers, just like the conventional Grignard reagent. By simple addition of (TBoN)GeCl (TBoN = N(SiMe₃){B(DipNCH)₂}) to the aforesaid compound, we were able to transfer the amido-zinc fragment to the Ge center of the germylene starting material and to synthesize the first example of a germanium(II)-zinc bond: [:Ge(TBoN)(ZnL*)]. While such reactions typically lead to complex product mixtures, [:Ge(TBoN)(ZnL*)] could be isolated as dark blue crystals in good yield. This new compound shows interesting reactivity towards small molecules, especially dihydrogen gas. This is of special interest as dihydrogen is one of the more difficult small molecules to activate, due to its strong (BDE = 108 kcal/mol) and non-polar bond. In this context, the interaction of the H₂ σ-bond with the tetrelylene p-orbital (LUMO), with concomitant donation of the tetrelylene lone pair (HOMO) into the H₂ σ* orbital, is responsible for the activation of dihydrogen gas. Accordingly, the narrower the HOMO-LUMO gap of a tetrelylene, the more reactive towards H₂ it typically is. The aim of a narrow HOMO-LUMO gap was reached by transferring electropositive substituents, i.e., metal substituents with relatively low Pauling electronegativity (zinc: 1.65), onto the Ge center (here: the zinc-amido fragment). In consideration of the unprecedented reactivity of [:Ge(TBoN)(ZnL*)], a computational examination of its frontier orbital energies was undertaken. The energy separation between the HOMO, which has significant Ge lone pair character, and the LUMO, which has predominantly Ge p-orbital character, is narrow (40.8 kcal/mol; cf. ΔS-T = 24.8 kcal/mol) and comparable to the HOMO-LUMO gaps calculated for other literature-known complexes. The calculated very narrow HOMO-LUMO gap of the [:Ge(TBoN)(ZnL*)] complex is consistent with its high reactivity, and is remarkable considering that it incorporates a π-basic amide ligand, which is known to raise the LUMO of germylenes considerably.
Keywords: activation of dihydrogen gas, narrow HOMO-LUMO gap, first germanium(II)-zinc bond, inorganic Grignard reagent
Procedia PDF Downloads 182
1087 Impact of 6-Week Brain Endurance Training on Cognitive and Cycling Performance in Highly Trained Individuals
Authors: W. Staiano, S. Marcora
Abstract:
Introduction: It has been proposed that the acute negative effect of mental fatigue (MF) could become a training stimulus for the brain (brain endurance training, BET) to adapt and improve its ability to attenuate MF states during sport competitions. Purpose: The aim of this study was to test the efficacy of 6 weeks of BET on cognitive and cycling tests in a group of well-trained subjects. We hypothesised that the combination of BET and standard physical training (SPT) would increase cognitive capacity and cycling performance, by reducing the rating of perceived exertion (RPE), and increase resilience to fatigue more than SPT alone. Methods: In a randomized controlled trial design, 26 well-trained participants, after a familiarization session, cycled to exhaustion (TTE) at 80% peak power output (PPO) and, after 90 min rest, at 65% PPO, before and after random allocation to 6 weeks of BET or an active placebo control. Cognitive performance was measured using a 30-min Stroop colour task performed before the cycling performance. During the training, the BET group performed a series of cognitive tasks for a total of 30 sessions (5 sessions per week), with duration increasing from 30 to 60 min per session. The placebo group engaged in breathing relaxation training. Both groups were monitored for physical training and were naïve to the purpose of the study. The physiological and perceptual parameters of heart rate, lactate (LA), and RPE were recorded during the cycling performances, while subjective workload (NASA TLX scale) was measured during the training. Results: Group (BET vs. placebo) x Test (pre-test vs. post-test) mixed-model ANOVAs revealed significant interactions for performance at 80% PPO (p = .038) and 65% PPO (p = .011). In both tests, both groups improved their TTE performance; however, the BET group improved significantly more than placebo. No significant differences were found for heart rate during the TTE cycling tests. LA did not change significantly at rest in either group. However, at completion of the 65% TTE, it was significantly higher (p = 0.043) in the placebo condition compared to BET. RPE measured at ISO-time in BET was significantly lower (80% PPO, p = 0.041; 65% PPO, p = 0.021) compared to placebo. Cognitive results in the Stroop task showed that reaction time decreased in both groups at post-test. However, BET decreased significantly (p = 0.01) more than placebo, despite no differences in accuracy. During the training sessions, participants in BET reported, through NASA TLX questionnaires, consistently and significantly higher (p < 0.01) mental demand ratings compared to placebo. No significant differences were found for physical demand. Conclusion: The results of this study provide evidence that combining BET and SPT seems to be more effective than SPT alone in increasing cognitive and cycling performance in well-trained endurance participants. The cognitive overload produced during the 6-week BET can induce a reduction in the perception of effort at a given power output and thus improve cycling performance. Moreover, it provides evidence that including neurocognitive interventions will benefit athletes by increasing their mental resilience, without affecting their physical training load and routine.
Keywords: cognitive training, perception of effort, endurance performance, neuro-performance
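A Group x Test mixed-model ANOVA of the kind reported here can be sketched in Python as follows; the pingouin call and the toy data are illustrative assumptions, not the authors' analysis code.

    # Sketch: 2 (group: BET vs placebo) x 2 (test: pre vs post) mixed ANOVA
    # on time-to-exhaustion (TTE), using the pingouin statistics library.
    import pandas as pd
    import pingouin as pg

    df = pd.DataFrame({
        "subject": [1, 1, 2, 2, 3, 3, 4, 4],
        "group": ["BET"] * 4 + ["placebo"] * 4,
        "test": ["pre", "post"] * 4,
        "tte_s": [610, 705, 580, 690, 600, 640, 615, 650],  # hypothetical seconds
    })

    aov = pg.mixed_anova(data=df, dv="tte_s", within="test",
                         subject="subject", between="group")
    print(aov)  # the 'Interaction' row (group x test) is the effect of interest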
Procedia PDF Downloads 120
1086 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ
Authors: Lalita, Niladri Sarkar, Subhasis Ghosh
Abstract:
Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperatures, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the signature of a strongly correlated metallic state, known as a "strange metal", attributed to non-Fermi-liquid (NFL) behavior. The proximity of superconductivity to LITR suggests that there may be an underlying common origin. LITR has been shown to be due to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as "Planckian dissipation", a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant, and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. Several striking issues remain to be resolved if we wish to find, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) The universality of α ~ 1: recently, doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should be observed in optimally doped and marginally underdoped cuprates; the link between Planckian dissipation and quantum criticality still remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, x = 0.16 (optimally doped) and x = 0.145 (marginally doped), have been used for this investigation. It is realized that steady state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by killing superfluid density. In Bi-2223, one would expect the maximum suppression of Tc to occur at the charge transfer gap. We have observed that suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this to the Cu-3d9 (Cu2+) to Cu-3d10 (Cu+) transition, known as the d9 − d10 L transition; photo-excitation turns some Cu ions in the CuO2 planes into spinless non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane in Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photo-excitation. Superconductivity can be destroyed completely by introducing ≈ 2% of Cu1+ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation from underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we could vary Tc dynamically and reversibly, so that LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal
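The Planckian bound quoted above fixes a characteristic scattering time once a temperature is chosen; a quick numerical check, with α = 1 assumed, is sketched below.

    # Sketch: Planckian scattering time tau = hbar / (alpha * kB * T), with alpha = 1.
    from scipy.constants import hbar, k  # reduced Planck and Boltzmann constants

    def planckian_tau(T, alpha=1.0):
        """Inelastic scattering time (s) at absolute temperature T (K)."""
        return hbar / (alpha * k * T)

    for T in (10, 100, 300):
        print(f"T = {T:3d} K -> tau = {planckian_tau(T):.2e} s")
    # At 300 K this gives roughly 2.5e-14 s, i.e., a few tens of femtoseconds.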
Procedia PDF Downloads 60
1085 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use "toxicity" as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art deep learning and natural language processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist terms, etc.), so that context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
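A minimal PyTorch sketch of a conversation-structure-aware classifier, with one recurrence over the tokens of each utterance and a second over the utterance sequence, is given below. The architecture is a generic illustration of hierarchical text classification, not the authors' exact model.

    # Sketch: hierarchical classifier. A GRU encodes each utterance; a second
    # GRU runs over the utterance embeddings (context plus target); the final
    # state classifies the target utterance as toxic or not.
    import torch
    import torch.nn as nn

    class HierarchicalToxicityClassifier(nn.Module):
        def __init__(self, vocab_size=30000, emb_dim=128, hid_dim=256, n_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.utterance_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.context_enc = nn.GRU(hid_dim, hid_dim, batch_first=True)
            self.classifier = nn.Linear(hid_dim, n_classes)

        def forward(self, conv_tokens):
            # conv_tokens: (batch, n_utterances, n_tokens); last utterance is the target
            b, u, t = conv_tokens.shape
            _, utt_h = self.utterance_enc(self.embedding(conv_tokens.view(b * u, t)))
            _, ctx_h = self.context_enc(utt_h.squeeze(0).view(b, u, -1))
            return self.classifier(ctx_h.squeeze(0))  # (batch, n_classes)

    model = HierarchicalToxicityClassifier()
    dummy = torch.randint(1, 30000, (4, 3, 20))  # 4 conversations, 3 utterances, 20 tokens
    print(model(dummy).shape)  # torch.Size([4, 2])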
Procedia PDF Downloads 170
1084 Time of Death Determination in Medicolegal Death Investigations
Authors: Michelle Rippy
Abstract:
Medicolegal death investigation has historically been a field that does not receive much research attention or advancement, as all of its subjects are deceased. Public health threats, drug epidemics, and contagious diseases are typically recognized in decedents first, and thorough and accurate death investigations can assist in epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances, and providing vital facts in civil situations. Popular television portrays an unrealistic forensic ability to provide the exact time of death, to the minute, for someone found deceased with no witnesses present. In actuality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, taking liver temperatures was an invasive action performed by death investigators to determine the decedent's core temperature. The core temperature was entered into an equation to determine an approximate time of death. Due to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was dispelled, and this once commonplace action lost scientific support. Currently, medicolegal death investigators utilize three major post-mortem changes at a death scene. Many factors are considered in the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease, and recent exercise. Current research is utilizing non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death. The research is significant to investigations and can bring accuracy to a historically imprecise determination, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for time of death determination in unwitnessed deaths, instead of the art that the determination currently is. The research is currently in progress, with expected completion in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature, decedent height/weight/sex/age, layers of clothing, found position, whether medical intervention occurred, and whether the death was witnessed. These data will be analyzed across the multiple variables studied and will be available for presentation in January 2019.
Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic
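For context, the classical temperature-based estimate that accompanied core-temperature readings is a simple rule of thumb, sketched below in Python. The Glaister formula shown is a textbook approximation, not the tympanic method under study, and real estimates must account for the ambient and bodily factors listed above.

    # Sketch: classical Glaister rule of thumb for hours since death, assuming
    # a normal ante-mortem temperature of 98.4 F and an average post-mortem
    # cooling rate of about 1.5 F per hour. Illustrative only.
    def glaister_hours_since_death(body_temp_f, normal_temp_f=98.4, cooling_rate=1.5):
        """Approximate post-mortem interval in hours from body temperature (F)."""
        return max(0.0, (normal_temp_f - body_temp_f) / cooling_rate)

    print(f"{glaister_hours_since_death(92.4):.1f} h")  # 92.4 F -> about 4.0 hours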
Procedia PDF Downloads 118
1083 Functional Switching of Serratia marcescens Transcriptional Regulator from Activator to Inhibitor of Quorum Sensing by Exogenous Addition
Authors: Norihiro Kato, Yuriko Takayama
Abstract:
Some gram-negative bacteria enable the simultaneous activation of gene expression involved in an N-acylhomoserine lactone (AHL)-dependent cell-to-cell communication system. Such a regulatory system for bacterial group behavior is termed quorum sensing (QS), because a diffusible AHL signal can accumulate around the cell as the cell density increases and trigger activation of the sequential QS process. By blocking QS, the expression of diverse genes related to infection, antibiotic production, and biofilm formation is inhibited. Conditioning of QS by regulation of the DNA-receptor-AHL interaction is therefore a potential target for enhancing host defenses against pathogenicity. We focused on the engineered application of the transcriptional regulator SpnR produced by the opportunistic human pathogen Serratia marcescens. SpnR can interact with AHL signals at an N-terminal domain and with the promoter region of a QS target gene at a C-terminal domain. In the initial step of QS activation, SpnR forms a complex with the AHL to enhance the expression of the pig cluster; SpnR normally acts as an activator of the expression of the QS-dependent gene. In this research, we attempt to artificially control QS by changing the role of SpnR. QS-dependent prodigiosin production is expected to be inhibited by externally added SpnR in the culture broth of the AS-1 strain, because AHL-SpnR complex formation keeps the AHL concentration below the threshold. Maltose-binding protein (MBP)-tagged SpnR (MBP-SpnR) was overexpressed in Escherichia coli and purified by affinity chromatography on an amylose resin column. The specific interaction between AHL and MBP-SpnR was demonstrated using a quartz crystal microbalance (QCM) sensor. AHL with an amino end-group was coupled to a COOH-terminated self-assembled monolayer prepared on the gold electrode of a 27-MHz quartz crystal sensor using water-soluble carbodiimide. After the injection of MBP-SpnR into a cup-type sensor cell filled with buffer solution, the time course of the resonant frequency change (ΔFs) was determined. A decrease in ΔFs clearly showed the uptake of MBP-SpnR onto the AHL-immobilized electrode. Furthermore, no binding affinity was observed after heat-inactivation of MBP-SpnR at 80ºC. These results suggest that MBP-SpnR possesses a specific affinity for AHL. MBP-SpnR was added to the culture medium as an AHL trap to study its inhibitory effects on intracellularly accumulated prodigiosin. With approximately 2 µM MBP-SpnR, the amount of prodigiosin induced was half that of the control without any additives. In conclusion, the function of SpnR could be switched by adding it to the cell culture. Exogenously added MBP-SpnR possesses high affinity for AHL derived from cells and acts as an inhibitor of AHL-mediated QS.
Keywords: intracellular signaling, microbial biotechnology, quorum sensing, transcriptional regulator
Procedia PDF Downloads 267
1082 Socio-Cultural Economic and Demographic Profile of Return Migration: A Case Study of Mahaboobnagar District in 'Andhra Pradesh'
Authors: Ramanamurthi Botlagunta
Abstract:
Return migration is a process; it is not a new phenomenon. People have been migrating since civilization began. In the case of the Indian diaspora, people migrated before the independence of India, and even after independence. There are various reasons for migration, and according to the characteristics of the migrants and to geographical, political, and economic factors, many changes occur in the mode of migration. Currently, almost 25 million people from India are outside the country. Not all of them, however, are able to obtain immigrant status in their respective host societies, due to the nature of individual perceptions and the immigration policies of the host countries. They come back to the homeland after spending days, months, or years abroad; they are known as return migrants. Returning migrants are 'persons returning to their country of citizenship after having been international migrants, whether short-term or long-term'. Increasingly, migration is seen very differently from what was once believed to be a one-way phenomenon. The renewed interest in return migration can be seen through two aspects: one is the growing importance of temporary migration programmes in other countries, and the other is the potential role of migrants in developing their home countries. Return migration has been conceptualized in several ways: occasional return, seasonal return, temporary return, permanent return, and circular return. The reasons for return migration include retirement, failure to assimilate in the host country, problems with acculturation in the destination country, being unsuccessful in the country emigrated to, acquiring the desired wealth, and the wish to innovate and serve as a change agent in the birth country. With the advent of globalization and the rapid development of transportation systems and communication technologies, migration has become a process by which immigrants forge and sustain simultaneous multi-stranded social relations that link together their societies of origin and settlement. Current theories of transnational migration are greatly focused on the economic impacts on the home countries, while social, cultural, and political impacts have only recently started gaining momentum. This, however, has been changing, as globalization is radically transforming the way people move around the world. One of the reasons for return migration is the lack of proportionate representation of Asian immigrants in positions of authority and decision-making, which can be a result of challenges confronted in cultural and structural assimilation. The present study mainly focuses on the socio-economic and demographic profile of the return migration of Indians from other countries in general, and particularly on the people of Andhra Pradesh who are returning from other countries.
Keywords: migration, return migration, globalization, development, socio-economic, Asian immigrants, UN, Andhra Pradesh
Procedia PDF Downloads 372
1081 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied in the context of regional hydrological predictions at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, a catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment's characteristics. This aligns with the well-known "uniqueness of the place" paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited focus in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search revealed a modification to 270 days. However, despite the significance of this hyperparameter for hydrological predictions, studies have usually overlooked its tuning and fixed it to 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of the input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
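The two skill metrics named above, and the sequence-length hyperparameter under study, can be made concrete with a short numpy sketch; the windowing helper and the toy data are illustrative, not the authors' pipeline.

    # Sketch: Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) efficiencies, plus a
    # helper that cuts a flow series into LSTM input windows of a given length.
    import numpy as np

    def nse(obs, sim):
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def kge(obs, sim):
        obs, sim = np.asarray(obs), np.asarray(sim)
        r = np.corrcoef(obs, sim)[0, 1]  # linear correlation
        alpha = sim.std() / obs.std()    # variability ratio
        beta = sim.mean() / obs.mean()   # bias ratio
        return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    def make_windows(series, seq_len):
        """(n_samples, seq_len) inputs and next-step targets for an LSTM."""
        X = np.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
        return X, series[seq_len:]

    flow = np.sin(np.linspace(0, 20, 500)) + 2.0  # toy hourly streamflow
    X, y = make_windows(flow, seq_len=48)         # 48-hour input sequences
    print(X.shape, y.shape, round(nse(y, y * 0.98), 3), round(kge(y, y * 0.98), 3))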
Procedia PDF Downloads 70
1080 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments, where the behavior of the AI system was previously determined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law and, for the purpose of this paper, by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recurrent neural network methods and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes. Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor named in a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
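Since the argument turns on inventions produced by genetic breeding (evolutionary) algorithms, a bare-bones Python example of the technique helps fix ideas; the toy fitness function and parameters are illustrative only.

    # Sketch: a minimal genetic algorithm (selection, crossover, mutation)
    # evolving bit-strings toward a toy objective: maximize the number of 1s.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 100, 0.02

    def fitness(genome):
        return sum(genome)

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("best fitness:", fitness(max(population, key=fitness)))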
Procedia PDF Downloads 108
1079 A Descriptive Study on Syrian Entrepreneurs in Turkey
Authors: Rudainah Alkhazam, Özlem Yaşar Uğurlu
Abstract:
Immigrant entrepreneurship arises from the start of entrepreneurial activity by immigrants in the country they relocate to. The future prosperity and stability of refugee-hosting countries depend on the mutual social and economic benefits between residents and refugees. Hosting Syrian refugees and workers requires efforts to assist both residents and refugees in meeting their daily needs, contributing lawfully to local and possibly regional economies through trade, and instilling hope in their future. This study investigates the effects of Syrian refugee entrepreneurs on host communities' business sectors, focusing on Turkey. Specifically, we examine entrepreneurship in general and its role in the country's economy. Because Turkey is the most popular resettlement destination for Syrian refugees, this study will shed light on the challenges of successful migrant entrepreneurship in Turkey and its role in the business sector. The research relies on a mixed-method approach, which helps identify recurring themes, favorable results, and conflicting results across methods, allowing us to draw accurate conclusions. The study adopts a quantitative method for collecting numerical data from Syrian refugees in Turkey. The self-administered survey was translated into Arabic to ensure that the respondents understood the questions and possible replies. The research uses survey questionnaires to gather the majority of the data; these questionnaires have closed-ended questions with nominal, ratio, and Likert scales. The data will be analyzed using linear regression and the Statistical Package for the Social Sciences (SPSS) to ascertain the role of Syrian entrepreneurs in the business sectors of Turkey. The research will use the findings to make future recommendations. Among migrant entrepreneurs, Syrian entrepreneurs, the majority of whom are young people, contribute to the labor market. This research noted the significant participation of Syrian immigrant women in the entrepreneurship sector. The previous experience of Syrians in the field of trade and in running their own businesses plays a vital role in the success of their businesses in host countries. The study shows that Syrian entrepreneurs could integrate effectively into the various Turkish business sectors and could rely on themselves, opening and managing their projects and marketing them in the Turkish market. Syrian entrepreneurs consider that the investment and labor laws, commercial arrangements, and facilities for obtaining financial resources in Turkey need to be more flexible and more available to immigrant entrepreneurs.
Keywords: entrepreneurship, immigration, Syrian, Turkey, refugees, investors, socio-economic benefits, unemployment
Procedia PDF Downloads 65
1078 The Study of Difficulties of Understanding Idiomatic Expressions Encountered by Translators 2021
Authors: Mohamed Elmogbail
Abstract:
The present study aimed at investigating the difficulties that translators encounter in understanding idiomatic expressions between Arabic and English. To achieve this goal, the researcher raised three questions: (1) What are the major difficulties that translators encounter in translating idiomatic expressions? (2) What factors cause the difficulties that translators encounter in translating idiomatic expressions? (3) What are the possible techniques that should be followed to overcome these difficulties? To answer these questions, the researcher designed a questionnaire (Table 2) and a translation test. Regarding the second question of the study, on the factors that stand behind the challenges translators encounter while translating idiomatic expressions, the questioned translators provided the following factors: (1) because of a lack of exposure to the source culture, they do not know the connotations of cultural words related to the environment, food, and folklore; (2) misusing dictionaries made the participants unable to find a proper target-language idiomatic expression; (3) lack of use of idiomatic expressions in daily life. Regarding the questionnaire results (Table 3) for the third question of the study, on the suggestions that can be offered to handle these challenges, the questioned translators provided the following solutions: (1) translators must be exposed to the source-language culture, including religion, habits, and traditions; (2) translators should also be exposed to source-language idiomatic expressions, by introducing English culture in textbooks and through participating in extensive English culture courses; (3) translators should be familiar with the differences between source- and target-language cultures; (4) translators should avoid literal translation, which results in most cases in wrong or poor translation; (5) schools, universities, and institutions should introduce translators to English culture; (6) translators should participate in cultural workshops at universities; (7) translators should try to use idiomatic expressions in everyday situations; (8) translators should read more books of idiomatic expressions. The researcher also designed a translation test consisting of 40 excerpts, given to a random sample of 100 translators in Khartoum, the capital of Sudan, to translate. After collecting the data for the study, the researcher proceeded to a more detailed analysis. The methodology used in the analysis of idiomatic expressions is empirical and descriptive. This study is qualitative by nature, but quantitative methods were used in the analysis of the data; some figures and statistics were produced, using the Statistical Package for the Social Sciences (SPSS). The researcher calculated the percentage proportion of each translated expression and compared them to each other. The findings of the study showed that most translations were inadequate, as the translators faced difficulties while communicating. These difficulties were mostly due to their unfamiliarity with idiomatic expressions, producing improper equivalence in the communication, and to their inability to use translation techniques as required, resorting instead to literal translation. Furthermore, the study recommended that more comprehensive studies be executed on translating idiomatic expressions to enrich the translation field.
Keywords: translation, translators, idioms, expressions
Procedia PDF Downloads 147
1077 The Charge Exchange and Mixture Formation Model in the ASz-62IR Radial Aircraft Engine
Authors: Pawel Magryta, Tytus Tulwin, Paweł Karpiński
Abstract:
The ASz-62IR engine is a radial aircraft engine with 9 cylinders, produced by the Polish company WSK "PZL-KALISZ" S.A. The engine is currently being developed by the above company and the Lublin University of Technology. In order to support the effective technological development of this unit, it was decided to build a simulation model. The model of the ASz-62IR was developed with the AVL BOOST software, a tool dedicated to the one-dimensional modeling of internal combustion engines. This model can be used to calculate the parameters of the air and fuel flow in the intake system, including the charging devices, as well as the combustion and the exhaust flow to the environment. The main purpose of this model is the analysis of the charge exchange and mixture formation in this engine. For this purpose, the model consists of elements such as: the air inlet, the throttle system, the compressor connector, the charging compressor, the inlet pipes and injectors, the outlet pipes, the fuel injection, and a model of fuel mixing and evaporation. The model of charge exchange and mixture formation was based on the model of the mass flow rate in the intake and exhaust pipes, and also on the calculation of gas property values such as the gas constant and the thermal capacity. The model was based on equations describing isentropic flow. The energy equation describing flow under steady conditions was transformed into the mass flow equation. The model uses the flow coefficient μσ, which varies with the stroke/valve opening and was determined in a steady flow state. The geometry of the inlet channels and other key components was mapped with reference to the technical documentation of the engine and empirical measurements of the structural elements. The volume of the elements on the charge flow path between the air inlet and the exhaust outlet was measured by CAD mapping of the structure. The original characteristics of the engine compressor, taken from the technical documentation, were entered into the model. Additionally, the model uses a general model for the transport of the chemical compounds of the mixture. Seven compounds are used, i.e., fuel, O2, N2, CO2, H2O, CO, and H2. A gasoline fuel with a calorific value of 43.5 MJ/kg and a stoichiometric air-fuel ratio of 14.5 was used. Indirect injection into the intake manifold is used in this model. The model assumes the following simplifications: the mixture is homogeneous at the beginning of combustion; accordingly, the mixture stoichiometric coefficient A/F remains constant during combustion, and the combusted and non-combusted charges show identical pressures and temperatures although their compositions change. As a result of the simulation studies based on the model described above, the basic parameters of the combustion process, charge exchange, and mixture formation in the cylinders were obtained. The AVL BOOST software is very useful for piston engine performance simulations. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: aviation propulsion, AVL Boost, engine model, charge exchange, mixture formation
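For reference, the steady isentropic flow relation through a restriction, of the kind used in one-dimensional engine codes, has the standard textbook form below; it is given here only to illustrate the structure of the equations, and is assumed, not confirmed, to match the exact AVL BOOST formulation.

    % Subsonic isentropic mass flow through a restriction:
    \dot{m} = \mu\sigma \, A_{\mathrm{ref}} \, \frac{p_{01}}{\sqrt{R\,T_{01}}}\;\psi,
    \qquad
    \psi = \sqrt{\frac{2\kappa}{\kappa-1}
        \left[\left(\frac{p_{2}}{p_{01}}\right)^{2/\kappa}
            - \left(\frac{p_{2}}{p_{01}}\right)^{(\kappa+1)/\kappa}\right]}

Here μσ is the flow coefficient named in the abstract, A_ref the reference flow area, p01 and T01 the upstream stagnation pressure and temperature, p2 the downstream static pressure, R the gas constant, and κ the ratio of specific heats.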
Procedia PDF Downloads 338
1076 A Systematic Analysis of Knowledge Development Trends in Industrial Maintenance Projects
Authors: Lilian Ogechi Iheukwumere-Esotu, Akilu Yunusa-Kaltungo, Paul Chan
Abstract:
Industrial assets are prone to degradation and eventual failure due to the repetitive loads and harsh environments in which they operate. These failures often lead to costly downtimes, which may involve the loss of critical assets and/or human lives. Rising pressure from stakeholders for optimized system outputs has placed further strain on business organizations. Traditional means of combating such failures involve adopting strategies capable of predicting, controlling, and/or reducing the likelihood of system failures. Turnarounds, shutdowns, and outages (TSOs) are popular maintenance management activities conducted over a certain period of time. However, despite the critical and significant cost implications of TSOs, the management of the knowledge interface between academia and industry has, to the best of our knowledge, not been fully explored in comparison to other aspects of industrial operations. This is perhaps one of the reasons for the limited knowledge transfer between academia and industry, which has affected the outcomes of most TSOs. Until now, the study of knowledge development trends as a failure analysis tool in the management of TSO projects has not gained the required level of attention. Hence, this review provides useful references and their implications for future studies in this field. The study aims to harmonize the existing research trends in TSOs through a systematic review of more than 3,000 research articles published over seven decades (1940 to date), which were extracted using very specific research criteria and later streamlined using nominated inclusion and exclusion parameters. The information obtained from the analysis was then synthesized and coded into 8 parameters, allowing a transformation into actionable outputs. The study revealed a variety of information, but the most critical findings can be classified into four areas: (1) empirical validation of available conceptual frameworks and models is still rare in practice; (2) traditional project management views of managing uncertainties are still dominant; (3) approaches towards the adoption and promotion of knowledge management systems (which support the creation, transfer, and application of knowledge within and outside the project organization) are inconsistent; and (4) the exploration of social practices in industrial maintenance project environments is under-represented within the existing body of knowledge. Thus, the intention of this study is to show the usefulness of a framework which incorporates findings emanating from careful analysis and illustrations of evidence-based results, as a suitable approach to tackling recurring failures in industrial maintenance projects.
Keywords: industrial maintenance, knowledge management, maintenance projects, systematic review, TSOs
Procedia PDF Downloads 1181075 A Dynamic Model for Circularity Assessment of Nutrient Recovery from Domestic Sewage
Authors: Anurag Bhambhani, Jan Peter Van Der Hoek, Zoran Kapelan
Abstract:
The food system depends on the availability of phosphorus (P) and nitrogen (N). A growing population, depleting phosphorus reserves, and energy-intensive industrial nitrogen fixation threaten their future availability. Recovering P and N from domestic sewage water offers a solution. The recovered P and N can be applied to agricultural land, replacing virgin P and N; thus, recovery from sewage water offers a solution befitting a circular economy. To ensure minimum waste and maximum resource efficiency, a circularity assessment method is crucial for optimizing nutrient flows and minimizing losses. The Material Circularity Indicator (MCI) is a useful method to quantify the circularity of materials. It was developed for materials that remain within the market and was recently extended to include biotic materials that may be composted or used for energy recovery after end-of-use. However, MCI has not been used in the context of nutrient recovery. Moreover, MCI is time-static, i.e., it cannot account for dynamic systems such as the terrestrial nutrient cycles. Nutrient application to agricultural land is a highly dynamic process wherein flows and stocks change with time. The rate of recycling of nutrients in nature can depend on numerous factors, such as prevailing soil conditions, local hydrology, the presence of animals, etc. Therefore, a dynamic model of nutrient flows with indicators is needed for the circularity assessment. A simple substance flow model of P and N will be developed with the help of flow equations and transfer coefficients that incorporate the nutrient recovery step along with the agricultural application, the volatilization and leaching processes, plant uptake, and the subsequent animal and human uptake. The model is then used for calculating the proportions of linear and restorative flows (those coming from reused/recycled sources). The model will simulate the adsorption process based on the quantity of adsorbent and the nutrient concentration in the water. Thereafter, the application of the adsorbed nutrients to agricultural land will be simulated based on adsorbate release kinetics, local soil conditions, hydrology, vegetation, etc. Based on the model, the restorative nutrient flow (returning to the sewage plant following human consumption) will be calculated. The developed methodology will be applied to a case study of resource recovery from wastewater, located in Italy, in which biochar or zeolite is to be used to recover P and N from domestic sewage through adsorption and thereafter used as a slow-release fertilizer in agriculture. The model will generate information regarding the efficiency of nutrient recovery and application, helping to optimize both and thereby reduce the dependence of the food system on the virgin extraction of P and N.Keywords: circular economy, dynamic substance flow, nutrient cycles, resource recovery from water
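The proportion of restorative flow can be illustrated with a toy dynamic substance-flow loop. The following Python sketch tracks one nutrient (say P) through application, uptake, losses, and adsorption-based recovery; all transfer coefficients are assumed placeholders, not calibrated values from the Italian case study.

```python
# Toy dynamic substance-flow model for one nutrient (e.g., P), in kg/yr.
tc_uptake = 0.45    # fraction of the soil stock taken up by plants per year
tc_loss = 0.10      # leaching + volatilization losses per year
tc_return = 0.60    # share of consumed nutrient reaching the sewage plant
tc_recover = 0.50   # adsorption-based recovery efficiency at the plant

demand = 100.0      # nutrient applied to agricultural land each year
soil, recovered = 0.0, 0.0
for year in range(20):
    virgin = demand - recovered      # virgin input tops up what recovery cannot supply
    soil += demand                   # annual application enters the soil stock
    uptake = tc_uptake * soil        # plant uptake, then human/animal consumption
    soil -= uptake + tc_loss * soil  # stock depleted by uptake and losses
    recovered = tc_recover * tc_return * uptake  # restorative flow for next year

print(f"restorative share of demand after 20 years: {recovered / demand:.2f}")
```

Running the loop to a steady state gives the linear versus restorative split that a circularity indicator such as MCI would be fed with; under the assumed coefficients, roughly a quarter of the demand becomes restorative.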
Procedia PDF Downloads 1981074 Nursing Experience in the Intensive Care of a Lung Cancer Patient with Pulmonary Embolism on Extracorporeal Membrane Oxygenation
Authors: Huang Wei-Yi
Abstract:
Objective: This article explores the intensive care nursing experience of a lung cancer patient with pulmonary embolism who was placed on ECMO. Following a sudden change in the patient’s condition and a consensus reached during a family meeting, the decision was made to withdraw life-sustaining equipment and collaborate with the palliative care team. Methods: The nursing period was from October 20 to October 27, 2023. The author monitored physiological data, observed, provided direct care, conducted interviews, performed physical assessments, and reviewed medical records. Together with the critical care team and bypass personnel, a comprehensive assessment was conducted using Gordon's Eleven Functional Health Patterns to identify the patient’s health issues, which included pain related to lung cancer and invasive devices, fear of death due to sudden deterioration, and altered tissue perfusion related to hemodynamic instability. Results: The patient was admitted with fever, back pain, and painful urination. During hospitalization, the patient experienced sudden discomfort followed by cardiac arrest, requiring multiple CPR attempts and ECMO placement. A subsequent CT angiogram revealed a pulmonary embolism. The patient's condition was further complicated by severe pain due to compression fractures, and a diagnosis of terminal lung cancer was unexpectedly confirmed, leading to emotional distress and uncertainty about future treatment. During the critical care process, ECMO was removed on October 24; the patient’s body temperature stabilized between 36.5 and 37°C, and the mean arterial pressure was maintained between 60 and 80 mmHg. Pain management, including Morphine 8 mg in 0.9% N/S 100 ml IV drip q6h PRN and Ultracet 37.5 mg/325 mg 1# PO q6h, kept the pain level below 3. The patient was transferred to the ward on October 27 and discharged home on October 30. Conclusion: During the care period, collaboration with the medical team and palliative care professionals was crucial. Adjustments to pain medication, symptom management, and lung cancer-targeted therapy improved the patient’s physical discomfort and pain levels. By applying the unique functions of nursing and the four principles of palliative care, positive encouragement was provided. Family members, along with social workers, clergy, psychologists, and nutritionists, participated in cross-disciplinary care, alleviating anxiety and fear. The consensus to withdraw ECMO and life-sustaining equipment enabled the patient and family to receive high-quality care and maintain autonomy in decision-making. A follow-up call on November 1 confirmed that the patient was emotionally stable, pain-free, and continuing with targeted lung cancer therapy.Keywords: intensive care, lung cancer, pulmonary embolism, ECMO
Procedia PDF Downloads 281073 Impact of Financial and Nutrition Support on Blood Health, Dietary Intake, and Well-Being among Female Student-Athletes
Authors: Kaila A. Vento
Abstract:
Within the field of sports science, financial constraints have been reported as a key barrier to purchasing high-quality foods. A lack of proper nutrition undermines health, impairs training, and diminishes optimal performance. Consequently, insufficient nutrient intake, disordered eating patterns, and eating disorders may arise, leading to poor health and well-being. Athletic scholarships, nutrition resources, and meal programs are available, yet they are disproportionately allocated, favoring male sports, Caucasian athletes, and higher sport levels. How athletes direct their finances towards nutrition at various sport levels, and how race influences the aid received, have yet to be examined. Additionally, a diverse female athlete population is missing from the sports science literature, specifically in nutrition. To address this gap, the current project assesses how financial and nutrition support and nutrition knowledge impact the physical health, dietary intake, and overall quality of life of a diverse sample of female athletes at the National Collegiate Athletic Association (NCAA), National Junior Collegiate Athletic Association (NJCAA), and club sport levels. The project will also identify differences in financial support in relation to race. Approximately 120 female athletes (N = 120) will participate in a single 30-minute lab visit. At this visit, body composition (i.e., height, weight, body mass index, and fat percent), blood health indicators (fasted blood glucose and lipids), and resting blood pressure are measured. In addition, three validated questionnaires pertaining to nutrition knowledge (Sports Nutrition Knowledge Questionnaire; SNKQ), dietary intake (Rapid Eating Assessment for Participants; REAP), and quality of life (World Health Organization Quality of Life Brief; WHOQL-B) are gathered. Body composition and blood health indicators will be compared with the results of the self-reported sports nutrition knowledge, dietary intake, and quality of life questionnaires. It is hypothesized that 1) financial and nutrition support and nutrition knowledge will differ between the sport levels, 2) financial and nutrition support and nutrition knowledge will have a positive association with the quality of dietary intake and blood health indicators, 3) financial and nutrition support will differ significantly among racial backgrounds across the various competition levels, and 4) dietary intake will influence blood health indicators and quality of life (a sketch of one such group comparison is given below). The findings from this study could have positive implications for athletic associations' policies on the equity of financial and nutrition support to improve the health and safety of all female athletes across several sport levels.Keywords: athlete, equity, finances, health, resources
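Hypothesis 1 amounts to a between-groups comparison; a minimal Python sketch with a one-way ANOVA is shown below. The REAP-style scores are simulated placeholders, not study data, and the final analysis plan may of course differ.

```python
import numpy as np
from scipy import stats

# Simulated dietary-quality scores by competition level (placeholder values)
rng = np.random.default_rng(0)
ncaa = rng.normal(70, 8, 40)
njcaa = rng.normal(66, 8, 40)
club = rng.normal(63, 8, 40)

# One-way ANOVA: do the three sport levels differ in mean dietary quality?
f_stat, p_value = stats.f_oneway(ncaa, njcaa, club)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```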
Procedia PDF Downloads 1061072 Functional Surfaces and Edges for Cutting and Forming Tools Created Using Directed Energy Deposition
Authors: Michal Brazda, Miroslav Urbanek, Martina Koukolikova
Abstract:
This work focuses on the development of functional surfaces and edges for cutting and forming tools created through Directed Energy Deposition (DED) technology. In the context of the growing challenges of modern engineering, additive technologies, especially DED, present an innovative approach to manufacturing tools for forming and cutting. One of the key features of DED is its ability to precisely and efficiently deposit fully dense metals from powder feedstock, enabling the creation of complex geometries and optimized designs. It is gradually becoming an increasingly attractive choice for tool production due to its ability to achieve high precision while minimizing waste and material costs. Tools created using DED technology gain significant durability through the utilization of high-performance materials such as nickel alloys and tool steels. For high-temperature applications, the Nimonic 80A alloy is applied, while for cold applications, M2 tool steel is used. The addition of ceramic materials, such as tungsten carbide, can significantly increase the tool's resistance. The introduction of functionally graded materials is a significant contribution, opening up new possibilities for gradual changes in the mechanical properties of the tool and optimizing its performance in different sections according to specific requirements. This work provides an overview of individual applications and their utilization in industry. Microstructural analyses have been conducted, providing detailed insights into the structure of individual components, alongside examinations of the mechanical properties and tool life. These analyses offer a deeper understanding of the efficiency and reliability of the created tools, which is a key element for successful development in the field of cutting and forming tools. The production of functional surfaces and edges using DED technology can result in financial savings, as the entire tool does not have to be manufactured from expensive special alloys. The tool can be made from common steel, onto which a functional surface of special materials is applied. DED also allows for tool repairs after wear, eliminating the need to produce a new part, lowering overall costs and reducing the environmental footprint. Overall, the combination of DED technology, functionally graded materials, and verified technologies sets a new standard for the innovative and efficient development of cutting and forming tools in the modern industrial environment.Keywords: additive manufacturing, directed energy deposition, DED, laser, cutting tools, forming tools, steel, nickel alloy
Procedia PDF Downloads 501071 Innovating Electronics Engineering for Smart Materials Marketing
Authors: Muhammad Awais Kiani
Abstract:
The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.Keywords: electronics engineering, smart materials, marketing, power management
Procedia PDF Downloads 591070 Wood Energy, Trees outside Forests and Agroforestry Wood Harvesting and Conversion Residues Preparing and Storing
Authors: Adeiza Matthew, Oluwadamilola Abubakar
Abstract:
Wood energy, also known as wood fuel, is a renewable energy source derived from woody biomass: organic matter harvested from forests, woodlands, and other lands. Woody biomass includes trees, branches, twigs, and other woody debris that can be used as fuel. Wood energy can be classified based on its sources, such as trees outside forests, residues from wood harvesting and conversion, and energy plantations. Several policy frameworks support the use of wood energy, including participatory forest management and agroforestry. These policies aim to promote the sustainable use of woody biomass as a source of energy while also protecting forests and wildlife habitats. There are several options for using wood as a fuel, including central heating systems, pellet-based systems, wood chip-based systems, log boilers, fireplaces, and stoves. Each of these options has its own benefits and drawbacks, and the most appropriate option will depend on factors such as the availability of woody biomass, the heating needs of the household or facility, and the local climate. In order to use wood as a fuel, it must be harvested and stored properly. Hardwood or softwood can be used as fuel, and the heating value of firewood depends on the tree species and the moisture content (a simple approximation is sketched below). Proper harvesting and storage of wood can help to minimize environmental impacts and improve wildlife habitats. The use of wood energy has several environmental impacts, including the release of greenhouse gases during combustion and the potential for air pollution from combustion by-products. However, wood energy can also have positive environmental impacts, such as the sequestration of carbon in trees and the reduction of reliance on fossil fuels. Regulation and legislation of wood energy vary by country and region, with various standards and certification systems in place at the national and international levels to promote sustainable practices, and there is an ongoing debate about the potential use of wood energy in renewable energy technologies. Wood energy can be used to generate electricity, heat, and transportation fuels. Woody biomass is abundant and widely available, making it a potentially significant source of energy for many countries. The use of wood energy can create local economic and employment opportunities, particularly in rural areas, and can reduce reliance on fossil fuels and greenhouse gas emissions. Properly managed forests can provide a sustained supply of woody biomass for energy, helping to reduce the risk of deforestation and habitat loss. Wood energy can be produced using a variety of technologies, including direct combustion, co-firing with fossil fuels, and the production of biofuels, and its environmental impacts can be minimized through best practices in harvesting, transportation, and processing. Wood energy thus has the potential to play a significant role in the transition to a low-carbon economy and the achievement of climate change mitigation goals.Keywords: biomass, timber, charcoal, firewood
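The moisture dependence of the heating value mentioned above is commonly approximated by discounting the dry-basis calorific value for dilution and for the latent heat needed to evaporate the water. The Python sketch below uses this standard approximation; the 18.5 MJ/kg dry-basis figure is an assumed, species-dependent typical value, not a number from this abstract.

```python
def net_heating_value(ncv_dry_mj_per_kg: float, moisture_fraction: float) -> float:
    """Approximate net calorific value of firewood as received [MJ/kg].

    The dry-basis value is diluted by the moisture mass and reduced by the
    latent heat (~2.44 MJ/kg of water) required to evaporate that moisture.
    """
    return ncv_dry_mj_per_kg * (1.0 - moisture_fraction) - 2.44 * moisture_fraction

# Assumed dry-basis NCV of ~18.5 MJ/kg; moisture levels for green to seasoned wood
for m in (0.50, 0.30, 0.15):
    print(f"moisture {m:.0%}: {net_heating_value(18.5, m):.1f} MJ/kg")
```

This is why proper storage matters: seasoning wood from 50% to 15% moisture roughly doubles the usable energy per kilogram as fired.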
Procedia PDF Downloads 1001069 Telepsychiatry for Asian Americans
Authors: Jami Wang, Brian Kao, Davin Agustines
Abstract:
COVID-19 highlighted the active discrimination against the Asian American population, easily seen through media, social tension, and increased crimes against this specific population. It is well known that long-term racism can also have a large impact on both emotional and psychological well-being. However, the healthcare disparity during this time also revealed how the Asian American community lacked the research data, political support, and medical infrastructure afforded to other populations. During a time when Asian Americans fear for their safety amid declining mental health, telepsychiatry is particularly promising. COVID-19 demonstrated how well psychiatry could integrate with telemedicine, with psychiatry being the second most utilized specialty in telemedicine visits. However, the Asian American community did not utilize telepsychiatry resources as much as other groups. Because of this, we wanted to understand why the patient population whose mental health was affected most by COVID-19 did not seek out care. To do this, we decided to study the top telepsychiatry platforms. The current top telepsychiatry companies in the United States include Teladoc and BetterHelp. The Teladoc mental health sector offers only 4 languages (English, Spanish, French, and Danish), none of them an Asian language. In a similar manner, Teladoc's top competitor in the telepsychiatry space, BetterHelp, lists a total of only 3 Asian languages (Mandarin, Japanese, and Malaysian), a short list considering that over 20 languages are available. The shortage of available physicians who speak multiple languages is concerning, as it can make it difficult for the Asian American community to relate to providers. Mental health resources that cater to their likely cultural needs are limited, further exacerbating the structural racism and institutional barriers to appropriate care. It is important to note that these companies do provide interpreters to comply with federal nondiscrimination and language assistance law. However, interactions through an interpreter are not only more time-consuming but also less personal than talking directly with a physician. Psychiatry is the field that emphasizes interpersonal relationships; the trust between a physician and the patient is critical in developing rapport, understanding the clinical picture, and treating the patient appropriately. The language barrier creates an additional barrier between physician and patient. Because Asian Americans are one of the fastest-growing patient populations, these telehealth companies have much to gain by catering to the Asian American market. Without providing adequate access to bilingual and bicultural physicians, the current system will only further exacerbate the growing disparity. The healthcare community and telehealth companies need to recognize that the Asian American population is severely underserved in mental health and has much to gain from telepsychiatry. The lack of language support is one of many reasons for the disparity Asian Americans face in the mental health space.Keywords: telemedicine, psychiatry, Asian American, disparity
Procedia PDF Downloads 1051068 The Dynamics of a Droplet Spreading on a Steel Surface
Authors: Evgeniya Orlova, Dmitriy Feoktistov, Geniy Kuznetsov
Abstract:
Spreading of a droplet over a solid substrate is a key phenomenon observed in the following engineering applications: thin film coating, oil extraction, inkjet printing, and spray cooling of heated surfaces. Droplet cooling systems are known to be more effective than film or rivulet cooling systems. This is due to the greater evaporation surface area of droplets compared with a film of the same mass and wetting surface, the greater surface area of droplets being connected with the curvature of the interface. The location of the droplets on the cooling surface influences the heat transfer conditions. A close distance between the droplets provides intensive heat removal, but there is a possibility of their coalescence into a liquid film. A long distance leads to overheating of local areas of the cooling surface and the occurrence of thermal stresses. The location of droplets can be controlled by changing the roughness, structure, and chemical composition of the surface; thus, control of spreading can be implemented. The most important characteristic of the spreading of droplets on solid surfaces is the dynamic contact angle, which is a function of the contact line speed or capillary number. However, there is currently no universal equation that describes the relationship between these parameters (one standard hydrodynamic model is sketched below for reference). This paper presents the results of experimental studies of water droplet spreading on metal substrates with different surface roughness. The effect of the droplet growth rate and the surface roughness on spreading characteristics was studied at low capillary numbers. The shadow method was implemented using high-speed video cameras recording up to 10,000 frames per second. The droplet profile was analyzed by Axisymmetric Drop Shape Analysis techniques. According to the change of the dynamic contact angle and the contact line speed, three sequential spreading stages were observed: a rapid increase in the dynamic contact angle; a monotonous decrease in the contact angle and the contact line speed; and the formation of the equilibrium contact angle at a constant contact line. At a low droplet growth rate, the dynamic contact angle of the droplet spreading on the surfaces with the maximum roughness is found to increase throughout the spreading time. This is due to the fact that the friction force on such surfaces is significantly greater than the inertia force, and the contact line is pinned on the microasperities of the relief. At a high droplet growth rate, the contact angle decreases during the second stage even on the surfaces with the maximum roughness, as in this case the liquid does not fill the microcavities, and the droplet moves over an “air cushion”, i.e., the interface is a liquid/gas/solid system. At such growth rates, pulsation of the liquid flow was also detected, and the droplet oscillates during spreading. Thus, the obtained results allow the conclusion that spreading can be controlled by varying the surface roughness and the droplet growth rate. The research findings may also be used for analyzing heat transfer in rivulet and drop cooling systems of high-energy equipment.Keywords: contact line speed, droplet growth rate, dynamic contact angle, shadow system, spreading
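For reference, one standard hydrodynamic description of the dynamic contact angle is the Cox–Voinov law, θ_d³ = θ_e³ + 9 Ca ln(L/λ). The Python sketch below evaluates it at a low capillary number typical of these experiments; the equilibrium angle, speed, and logarithmic factor are illustrative assumptions, and, as noted above, no universal equation exists.

```python
import math

def capillary_number(viscosity, contact_line_speed, surface_tension):
    """Ca = mu * U / sigma (dimensionless)."""
    return viscosity * contact_line_speed / surface_tension

def cox_voinov_angle(theta_eq_deg, ca, ln_ratio=10.0):
    """Dynamic contact angle [deg] from theta_d^3 = theta_e^3 + 9*Ca*ln(L/lambda).

    ln_ratio is the (assumed) logarithm of the macroscopic-to-microscopic
    length-scale ratio; angles are converted to radians for the cube law.
    """
    theta_eq = math.radians(theta_eq_deg)
    return math.degrees((theta_eq ** 3 + 9.0 * ca * ln_ratio) ** (1.0 / 3.0))

# Water at room temperature spreading slowly (low Ca regime)
ca = capillary_number(1.0e-3, 1.0e-3, 0.072)  # mu [Pa*s], U [m/s], sigma [N/m]
print(f"Ca = {ca:.2e}, dynamic angle ~ {cox_voinov_angle(60.0, ca):.1f} deg")
```

At such low Ca the viscous correction is tiny, which is consistent with roughness and pinning, rather than hydrodynamics, dominating the observed behavior.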
Procedia PDF Downloads 3301067 Experimental Uniaxial Tensile Characterization of One-Dimensional Nickel Nanowires
Authors: Ram Mohan, Mahendran Samykano, Shyam Aravamudhan
Abstract:
Metallic nanowires with sub-micron and hundreds-of-nanometers diameters have a diversity of applications in nano/micro-electromechanical systems (NEMS/MEMS). Characterizing the mechanical properties of such sub-micron and nano-scale metallic nanowires is tedious and requires sophisticated, careful experimentation within high-powered microscopy systems (scanning electron microscope (SEM), atomic force microscope (AFM)). Nanoscale devices are also needed for placing the nanowires and loading them under the intended conditions, and obtaining load–deflection data during deformation within the high-powered microscopy environment poses significant challenges. Even picking the grown nanowires and placing them correctly within a nanoscale loading device is not an easy task. Experimental mechanical characterizations of such nanowires are still very limited. Various techniques at different levels of fidelity, resolution, and induced error have been attempted by materials science and nanomaterials researchers. The methods for determining the load and deflection within the nanoscale devices also pose a significant problem. The state of the art is thus still in its infancy. All these factors result in the wide differences in the characterization curves and the reported properties in the current literature. In this paper, we discuss and present our experimental method, results, and discussion of uniaxial tensile loading and the development of the subsequent stress–strain characteristic curves for nickel nanowires. Nickel nanowires in the diameter range of 220–270 nm were obtained in our laboratory via electrodeposition, a solution-based template method followed in our present work for growing 1-D nickel nanowires. Process variables, such as the presence and intensity of a magnetic field and varying electrical current density during the electrodeposition process, were found to influence the morphological and physical characteristics, including the crystal orientation and size of the grown nanowires. To further understand the correlation and influence of the electrodeposition process variables and the associated structural features of our grown nickel nanowires on their mechanical properties, careful experiments within a scanning electron microscope (SEM) were conducted. Details of the uniaxial tensile characterization, testing methodology, nanoscale testing device, load–deflection characteristics, microscopy images of failure progression, and the subsequent stress–strain curves are discussed and presented (a simple load-to-stress conversion is sketched below).Keywords: uniaxial tensile characterization, nanowires, electrodeposition, stress-strain, nickel
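The conversion from a measured load–deflection pair to an engineering stress–strain point is simple; the Python sketch below shows it for a wire in the reported diameter range. The load, elongation, and gauge length are illustrative assumptions, since the actual in-SEM device calibration is not reproduced here.

```python
import math

def stress_strain(load_uN, elongation_nm, diameter_nm, gauge_length_um):
    """Engineering stress [GPa] and strain [-] from one load-deflection point."""
    area_m2 = math.pi * (diameter_nm * 1e-9 / 2.0) ** 2  # circular cross-section
    stress_gpa = (load_uN * 1e-6) / area_m2 / 1e9
    strain = (elongation_nm * 1e-9) / (gauge_length_um * 1e-6)
    return stress_gpa, strain

# Assumed point: 50 uN load, 80 nm elongation, 250 nm wire, 5 um gauge length
s, e = stress_strain(50.0, 80.0, 250.0, 5.0)
print(f"stress = {s:.2f} GPa, strain = {e:.3f}")
```

Note how strongly the stress scales with diameter: the same load on a 220 nm wire instead of a 270 nm wire raises the computed stress by about 50%, which is one reason accurate diameter measurement matters at this scale.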
Procedia PDF Downloads 4061066 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning
Authors: Saahith M. S., Sivakami R.
Abstract:
In the realm of football analytics, particularly predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, followed by data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis will delve into feature significance using methodologies such as SelectKBest and Recursive Feature Elimination (RFE) to pinpoint the attributes pertinent to predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. Each model's performance will be evaluated using metrics such as Mean Squared Error (MSE) and R-squared to gauge its efficacy in predicting player performance (a minimal modeling sketch follows below). Furthermore, this investigation will encompass a top-player analysis to identify the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution by nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to analyze and forecast player performance effectively. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis
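A minimal version of the described pipeline, feature selection with RFE followed by a Random Forest evaluated by MSE and R-squared, can be sketched in Python with scikit-learn as follows. Synthetic data stands in for the player-attribute dataset, so the numbers are illustrative only.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the player-attribute dataset (e.g., pace, passing, ...)
X, y = make_regression(n_samples=500, n_features=20, n_informative=8,
                       noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Recursive Feature Elimination wrapped around a linear model
selector = RFE(LinearRegression(), n_features_to_select=8).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# One of the candidate models: Random Forest, scored by MSE and R-squared
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train_sel, y_train)
pred = model.predict(X_test_sel)
print(f"MSE = {mean_squared_error(y_test, pred):.1f}, "
      f"R^2 = {r2_score(y_test, pred):.3f}")
```

The same train/test split and metrics can be reused across the Decision Tree, Linear Regression, SVR, and ANN candidates so the comparison stays like-for-like.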
Procedia PDF Downloads 381065 From Battles to Balance and Back: Document Analysis of EU Copyright in the Digital Era
Authors: Anette Alén
Abstract:
Intellectual property (IP) regimes have traditionally been designed to integrate various conflicting elements stemming from private entitlement and the public good. In IP laws and regulations, this design takes the form of specific uses of protected subject-matter without the right-holder’s consent, exhaustion of exclusive rights upon market release, and the like. More recently, the pursuit of ‘balance’ has gained ground in the conceptualization of these conflicting elements, both in IP law and in related policy. This can be seen, for example, in the European Union (EU) copyright regime, where ‘balance’ has become a key element in argumentation, backed up by fundamental rights reasoning. This development also entails an ever-expanding dialogue between the IP regime and the constitutional safeguards for property, free speech, and privacy, among others. This study analyses the concept of ‘balance’ in EU copyright law: the research task is to examine the contents of the concept of ‘balance’ and the way it is operationalized and pursued, thereby producing new knowledge on the role and manifestations of ‘balance’ in recent copyright case law and regulatory instruments in the EU. The study discusses two particular pieces of legislation, the EU Digital Single Market (DSM) Copyright Directive (EU) 2019/790 and the finalized EU Artificial Intelligence (AI) Act, including some of the key preparatory materials, as well as EU Court of Justice (CJEU) case law pertaining to copyright in the digital era. The material is examined by means of document analysis, mapping the ways ‘balance’ is approached and conceptualized in the documents. Similarly, the interaction of fundamental rights as part of the balancing act is analyzed. Doctrinal study of law is also employed in the analysis of legal sources. This study suggests that the pursuit of balance is, for its part, conducive to new battles, largely due to the advancement of digitalization and more recent developments in artificial intelligence. Indeed, the ‘balancing act’ rather presents itself as a way to bypass or even solidify some of the conflicting interests in a complex global digital economy. Such a conceptualization, especially when accompanied by non-critical or strategically driven fundamental rights argumentation, runs counter to the genuine acknowledgment of new types of conflicting interests in the copyright regime. Therefore, a more radical approach, including critical analysis of the normative basis and fundamental rights implications of the concept of ‘balance’, is required to readjust copyright law and regulations for the digital era. Although the study focuses on the EU copyright regime, the results bear wider significance for the digital economy, especially given the platform liability regime in the DSM Directive and the AI Act’s ‘level playing field’ objectives, under which compliance with EU copyright rules is expected of system providers.Keywords: balance, copyright, fundamental rights, platform liability, artificial intelligence
Procedia PDF Downloads 311064 The Use of Image Analysis Techniques to Describe a Cluster Cracks in the Cement Paste with the Addition of Metakaolinite
Authors: Maciej Szeląg, Stanisław Fic
Abstract:
The impact of elevated temperatures on construction materials manifests in changes to their physical and mechanical characteristics. Stresses and thermal deformations that occur inside the volume of the material cause its progressive degradation as the temperature increases. Finally, the reactions and transformations of the multiphase structure of the cementitious composite cause its complete destruction. A particularly dangerous phenomenon is the impact of thermal shock – a sudden high temperature load. Thermal shock leads to a high temperature gradient between the outer surface and the interior of the element in a relatively short time. The result of this process is the formation of cracks and scratches on the material's surface and inside the material. The article describes the use of computer image analysis techniques to identify and assess the structure of the cluster cracks on the surfaces of modified cement pastes caused by thermal shock. Four series of specimens were tested. Two Portland cements were used (CEM I 42.5R and CEM I 52.5R). In addition, two of the series contained metakaolinite as a replacement for 10% of the cement content. Samples in each series were made with three w/b (water/binder) ratios: 0.4, 0.5, and 0.6. Surface cracks in the samples were created by a sudden temperature load at 200°C for 4 hours. Images of the cracked surfaces were obtained by scanning at 1200 DPI; digital processing and measurements were performed using ImageJ v. 1.46r software (a minimal scripted equivalent is sketched below). In order to examine the cracked surface of the cement paste as a system of closed clusters, the theory of dispersed systems was used to describe the structure of the cement paste: water is the dispersing phase and the binder the dispersed phase, which is the initial stage of cement paste structure formation. A cluster itself is considered to be the area on the specimen surface limited by cracks (created by the sudden temperature load) or by the edge of the sample. To describe the structure of the cracks, two stereological parameters were proposed: A̅, the average cluster area, and L̅, the average cluster perimeter. The goal of this study was to compare the investigated stereological parameters with the mechanical properties of the tested specimens. Compressive and tensile strength tests were carried out according to EN standards. The method used in the study allowed the quantitative determination of defects occurring on the surfaces of the examined modified cement pastes. Based on the results, it was found that the nature of the cracks depends mainly on the physical parameters of the cement and the intermolecular interactions in the dispersing environment. Additionally, it was noted that the A̅/L̅ relation of the created clusters can be described by a single function for all tested samples. This fact testifies to the constant geometry of the thermal cracks regardless of the presence of metakaolinite, the type of cement, and the w/b ratio.Keywords: cement paste, cluster cracks, elevated temperature, image analysis, metakaolinite, stereological parameters
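The cluster measurement itself is straightforward to script. The following Python sketch, using scikit-image rather than the study's ImageJ v. 1.46r, labels the regions bounded by cracks and computes A̅ and L̅; the file name, threshold choice, and small-object cutoff are illustrative assumptions.

```python
import numpy as np
from skimage import filters, io, measure, morphology

# A 1200 DPI grayscale scan of a cracked specimen surface (hypothetical file)
img = io.imread("cracked_surface_1200dpi.png", as_gray=True)

# Cracks scan dark, so clusters are the bright regions they enclose
binary = img > filters.threshold_otsu(img)
binary = morphology.remove_small_objects(binary, min_size=64)  # drop speckle

labels = measure.label(binary)          # each enclosed cluster gets a label
props = measure.regionprops(labels)

px_mm = 25.4 / 1200                     # at 1200 DPI, 1 px is ~0.0212 mm
mean_area = np.mean([p.area for p in props]) * px_mm ** 2       # A-bar [mm^2]
mean_perim = np.mean([p.perimeter for p in props]) * px_mm      # L-bar [mm]
print(f"clusters: {len(props)}, A-bar = {mean_area:.3f} mm^2, "
      f"L-bar = {mean_perim:.2f} mm")
```

Plotting A̅ against L̅ for every sample would then reproduce the single-function relation reported above.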
Procedia PDF Downloads 388