Search results for: multiple pins
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4762


652 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in how proteins exert their functions. The elastic network model (ENM), a harmonic potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models, and in recent years many ENM variants have been proposed. Here, we propose a small but effective modification (denoted as the modified mENM) to the multiscale ENM (mENM), in which the least-square fitting of the weights of the Kirchhoff/Hessian matrices is modified, since the original fitting neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) on reproducing dynamical properties for six representative proteins whose molecular dynamics (MD) trajectories are available at http://mmb.pcb.ub.es/MoDEL/. For B-factor prediction, mENM achieves the best performance among the four ENM models. Notably, even with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As to dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter slightly better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for mENM. Thus, pfANM and the modified mANM, especially the former, have an excellent performance in dynamical cross-correlation calculation. Compared with the GNMs (except mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs separated by nearly the largest distances, which may be due to the anisotropy considered in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work helps strengthen the understanding of the elastic network model and provides a valuable guide for researchers utilizing the model to explore protein dynamics.
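
As context for the models compared above, here is a minimal sketch of the standard single-scale GNM pipeline (Kirchhoff matrix from C-alpha contacts, relative B-factors from the diagonal of its pseudo-inverse). The 7 Å cutoff and unit spring constant are typical choices rather than parameters from this work, and the multiscale/modified weighting scheme itself is not reproduced.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0, gamma=1.0):
    """Standard (single-scale) GNM: build the Kirchhoff matrix from C-alpha
    coordinates and predict relative B-factors from the diagonal of its
    pseudo-inverse. This is the baseline the multiscale variants build on."""
    coords = np.asarray(coords, dtype=float)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -gamma * (dists < cutoff).astype(float)   # off-diagonal contacts
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # diagonal = contact degree
    gamma_inv = np.linalg.pinv(kirchhoff)                 # pseudo-inverse drops the zero mode
    msf = np.diag(gamma_inv)                              # mean-square fluctuations per residue
    return (8.0 * np.pi**2 / 3.0) * msf                   # B-factors, up to the kT/gamma scale

# usage (hypothetical file): coords = np.loadtxt("ca_coords.txt")  # (N, 3) C-alpha positions
# b_pred = gnm_bfactors(coords)  # compare to experimental B-factors via Pearson correlation
```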

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 121
651 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities compiled by Hardy, Littlewood, and Polya formed the first significant synthesis in the field; that work presented fundamental ideas, results, and techniques and has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated through operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results involving Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale Hardy and Copson inequalities in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs can be carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus are used, which allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the proofs use the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and Hölder's inequality.
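
For reference, the classical one-dimensional Hardy inequality that the time-scale versions generalize is stated below; the notation is the standard one and may differ from the paper's.

```latex
% Classical one-dimensional Hardy inequality (continuous form), for p > 1 and f >= 0:
\[
\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x} f(t)\,dt\right)^{p} dx
\;\le\;
\left(\frac{p}{p-1}\right)^{p}\int_{0}^{\infty} f^{p}(x)\,dx ,
\qquad f \ge 0,\; p > 1,
\]
% where the constant (p/(p-1))^p is sharp. Time-scale versions replace the
% integrals by delta (or nabla) integrals over an arbitrary nonempty closed set
% \mathbb{T} \subseteq \mathbb{R} and recover this inequality when \mathbb{T} = \mathbb{R}.
```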

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 95
650 Team Teaching versus Traditional Pedagogical Method

Authors: L. M. H. Mustonen, S. A. Heikkilä

Abstract:

The focus of this paper is to describe team teaching as HAMK's pedagogical method and its impact on teachers' work. Background: Traditionally, teaching has been thought of as a job where one mostly works alone, and more and more teachers feel that their work is getting more stressful. Solutions to these problems have been sought at Häme University of Applied Sciences (hereafter HAMK). HAMK has made a strategic change to move to group-oriented working among teachers: instead of isolated study courses, there are now larger 15-credit study modules. Implementation: As examples of the method, two cases are presented: a technical project module and a summer studies module, which was integrated into the EU development project Energy Efficiency with Precise Control. In autumn 2017, the technical project module will be implemented for the third time. At least three teachers are involved in it, and it is the first module for new students. The main focus is on learning the basic skills of project work. From a communication viewpoint, students learn the basics of written and oral reporting and of video reporting. According to our quality control system, the need for development is evaluated at the end of the module. There are always some differences between implementations, but the basics remain the same. The other case, the summer studies of 2017, is new and part of a larger EU project. For the first time, we took a larger group of first- to third-year students from different study programmes into the summer studies. The students learned professional skills as well as skills from different fields of study, international cooperation, and communication. Benefits and challenges: After three years, it is possible to consider what the changes mean in the everyday work of the teachers and, of course, what they mean for students and the learning process. The perspective is that of HAMK's electrical and automation study programme. At first, the change always means more work: the routines developed over many years and the course material used for years may no longer be valid, teachers teach in modules simultaneously and often with some subjects overlapping, and finding the time to plan the modules together is often difficult. The essential benefit is that the learning outcomes have improved, which can be seen in the feedback given by both the teachers and the students. Conclusions: A new type of working environment is being born. A team of teachers designs a module that matches the objectives and ponders the answers to such questions as: What are the knowledge-based targets of the module? Which pedagogical solutions will achieve the desired results? At what point do multiple teachers instruct the class together? How is the module evaluated? How can the module be developed further for the next implementation? The team discusses openly and finds the solutions. Collegiate responsibility and support are always present. These are strengthening factors of the new communal university teaching culture, and they are also strong sources of pleasure in work.

Keywords: pedagogical development, summer studies, team teaching, well-being at work

Procedia PDF Downloads 109
649 Molecular Detection of mRNA bcr-abl and Circulating Leukemic Stem Cells CD34+ in Patients with Acute Lymphoblastic Leukemia and Chronic Myeloid Leukemia and Its Association with Clinical Parameters

Authors: B. Gonzalez-Yebra, H. Barajas, P. Palomares, M. Hernandez, O. Torres, M. Ayala, A. L. González, G. Vazquez-Ortiz, M. L. Guzman

Abstract:

Leukemia arises through molecular alterations of the normal hematopoietic stem cell (HSC), transforming it into a leukemic stem cell (LSC) with high cell proliferation, self-renewal, and cell differentiation. Chronic myeloid leukemia (CML) originates from an LSC, leading to elevated proliferation of myeloid cells, and acute lymphoblastic leukemia (ALL) originates from an LSC, leading to elevated proliferation of lymphoid cells. In both cases, LSCs can be identified by multicolor flow cytometry using several antibodies. However, to date, LSC levels in peripheral blood (PB) are not well established in ALL and CML patients. On the other hand, the detection of minimal residual disease (MRD) in leukemia is mainly based on the identification of the mRNA bcr-abl gene in CML patients and some other genes in ALL patients; there is not yet a proper biomarker to detect MRD in both types of leukemia. The objective of this study was to determine mRNA bcr-abl and the percentage of LSCs in the peripheral blood of patients with CML and ALL and to identify a possible association between the amount of LSCs in PB and clinical data. We included 19 patients with leukemia in this study. A PB sample was collected from each patient, and leukocytes were obtained by Ficoll gradient. The immunophenotyping for LSC CD34+ was done by flow cytometry with CD33, CD2, CD14, CD16, CD64, HLA-DR, CD13, CD15, CD19, CD10, CD20, CD34, CD38, CD71, CD90, CD117, and CD123 monoclonal antibodies. In addition, to identify the presence of mRNA bcr-abl by RT-PCR, the RNA was isolated using TRIZOL reagent. Molecular results (presence of mRNA bcr-abl and LSC CD34+) and clinical results were analyzed with descriptive statistics, and a multiple regression analysis was performed to determine statistically significant associations. In total, 19 patients (8 with ALL and 11 with CML) were analyzed: 9 with de novo leukemia (ALL = 6 and CML = 3) and 10 under treatment (ALL = 5 and CML = 5). The overall frequency of mRNA bcr-abl was 31% (6/19); it was negative in ALL patients and positive in 80% of CML patients. On the other hand, LSCs were detected in 16/19 leukemia patients (%LSC = 0.02-17.3). De novo patients had a higher percentage of LSCs (0.26 to 17.3%) than patients under treatment (0 to 5.93%). The variables significantly associated with the amount of LSCs were the absence of treatment, the absence of splenomegaly, and a lower number of leukocytes, with a negative association for the clinical variables age, sex, blasts, and mRNA bcr-abl. In conclusion, patients with de novo leukemia had a higher percentage of circulating LSCs than patients under treatment, and this was associated with clinical parameters such as the lack of treatment, the absence of splenomegaly, and a lower number of leukocytes. The mRNA bcr-abl detection was only possible in the series of patients with CML, whereas LSCs could be identified in the peripheral blood of all leukemia patients; we therefore believe that the identification of circulating LSCs may be used as a biomarker for the detection of MRD in leukemia patients.

Keywords: stem cells, leukemia, biomarkers, flow cytometry

Procedia PDF Downloads 356
648 Understanding Everyday Insecurities Emerging from Fragmented Territorial Control in Post-Accord Colombia

Authors: Clara Voyvodic

Abstract:

Transitions from conflict to peace are by no means smooth or linear, particularly from the perspective of those living through them. Over the last few decades, the changing focus in peacebuilding studies has come to appreciate the everyday experience of communities and how that provides a lens through which the relative success or efficacy of these transitions can be understood. In particular, the demobilization of a significant conflict actor is not without consequences, not just for the macro-view of state stabilization and peace, but for the communities who find themselves without a clear authority of territorial control. In Colombia, the demobilization and disarmament of the FARC guerrilla group provided a brief respite to the conflict and a major political win for President Juan Manuel Santos. However, this victory has proven short-lived. Drawing from extensive field research in Colombia within the last year, including interviews with local communities and actors operating in these regions, field observations, and other primary resources, this paper examines the post-accord transitions in Colombia and the everyday security experiences of local communities in regions formerly controlled by the FARC. To do so, the research took a semi-ethnographic approach in the northern region of the department of Antioquia and the coastal area of the border department of Nariño, documenting how individuals within these marginalized communities have come to understand and negotiate their security in the years following the accord and the demobilization of the FARC. This presentation will argue that the removal of the FARC as an informal governance actor opened a space for multiple actors, including the state, to attempt to control the same territory. This shift has had a clear impact on the everyday security experiences of local communities. By exploring the dynamics of local governance and its impact on lived security experiences, this research seeks to demonstrate how distinct patterns of armed group behavior are emerging not only from the vacuum of control left by the FARC but from an increase in state presence that nonetheless remains inconsistent and unpersuasive as a monopoly of force in the region. The increased multiplicity of actors, particularly the state, has meant that the normal (informal) rules by which communities navigate these territories are no longer in play, as the identities, actions, and intentions of the competing groups have become frustratingly opaque. This research provides a timely analysis of how the shifting dynamics of territorial control in a post-accord landscape produce uncertain realities that affect the daily lives of local communities and endanger the long-term prospect of human-centered security.

Keywords: armed actors, conflict transitions, informal governance, post-accord, security experiences

Procedia PDF Downloads 132
647 Social Ties and the Prevalence of Single Chronic Morbidity and Multimorbidity among the Elderly Population in Selected States of India

Authors: Sree Sanyal

Abstract:

Research on ageing often highlights the age-related health dimension more than the psycho-social characteristics of the elderly, which also influence and challenge health outcomes. Multimorbidity is defined as a person having more than one chronic non-communicable disease, and its prevalence increases with ageing. The study aims to evaluate the influence of social ties on the self-reported prevalence of multimorbidity (selected chronic non-communicable diseases) among the elderly population in selected states of India. The data are drawn from Building a Knowledge Base on Population Ageing in India (BKPAI), collected in 2011, covering self-reported chronic non-communicable diseases such as arthritis, heart disease, diabetes, lung disease with asthma, hypertension, cataract, depression, dementia, Alzheimer's disease, and cancer. These diseases were taken together and categorized as 'no disease', 'one disease', and 'multimorbidity'. The predictor variables were demographic and socio-economic characteristics, residence type, and social-ties variables including social support, social engagement, perceived support, connectedness, and the perceived importance of the elderly. Predicted probabilities from multiple logistic regression were used to determine the association between the background characteristics of the elderly and chronic morbidity, including multimorbidity. The findings suggest that 24.35% of the elderly suffer from multimorbidity. With reference to 'no disease', the socio-economic profile most prone to 'one disease' was the female oldest old (80+), belonging to other castes and religions, widowed, with no formal education, who had ever worked in their life, from the second wealth quintile, and from rural Maharashtra. From the social-ties perspective, the elderly who perceive that they are important to the family, whose decision-making status changed after getting older, who prefer to stay with their son and spouse only, and who are satisfied with the communication from their children are less likely to have a single morbidity, and the results are significant. Again, with respect to 'no disease', the female oldest old (80+), of other castes, Christian, widowed, with fewer than five years of education completed, who had ever worked, from the highest wealth quintile, and residing in urban Kerala were more associated with multimorbidity. The elderly who are more socially connected through family visits and public gatherings, who receive support in decision making, and who prefer to spend their later years with their son and spouse only but stay alone show a lower prevalence of multimorbidity. In conclusion, received and perceived social integration and support from the surrounding neighborhood in older age, and awareness of one's own needs in life, facilitate better health and wellbeing of the elderly population in the selected states of India.
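
A minimal sketch of the kind of three-category (no disease / one disease / multimorbidity) multinomial logistic model described above is given below, using scikit-learn. The file name and column names are hypothetical placeholders rather than the actual BKPAI variables, and survey weights are ignored.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical file and variable names; the actual BKPAI codebook will differ.
df = pd.read_csv("bkpai_elderly.csv")
y = df["morbidity_status"]            # categories: no_disease / one_disease / multimorbidity
categorical = ["sex", "caste", "religion", "marital_status", "residence",
               "wealth_quintile", "state", "perceived_importance", "living_arrangement"]
numeric = ["age"]

pre = ColumnTransformer(
    [("cat", OneHotEncoder(drop="first"), categorical)],  # dummy-code categorical predictors
    remainder="passthrough")                              # keep age as-is
model = Pipeline([("pre", pre),
                  ("mnl", LogisticRegression(multi_class="multinomial", max_iter=1000))])
model.fit(df[categorical + numeric], y)

# Predicted probabilities of each morbidity category for every respondent,
# which can then be summarised by background characteristics.
probs = pd.DataFrame(model.predict_proba(df[categorical + numeric]),
                     columns=model.named_steps["mnl"].classes_)
```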

Keywords: morbidity, multi-morbidity, prevalence, social ties

Procedia PDF Downloads 121
646 Video Analytics on Pedagogy Using Big Data

Authors: Jamuna Loganath

Abstract:

Education is the key to the development of any individual's personality. Today's students will be tomorrow's citizens of the global society, and a student's education is the edifice on which his or her future will be built. Schools should therefore provide an all-round development of students so as to foster a healthy society. The behavior and attitude of students in school play an essential role in the success of the education process. Frequent reports of misbehavior such as clowning, harassing classmates, and verbal insults are becoming common in schools today. If this issue is left unattended, students may develop negative attitudes and delinquent behavior may increase, so the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor students' behavior in school, give the necessary feedback, and mentor them to develop a positive attitude and become successful adults. Nevertheless, measuring students' behavior and attitude is extremely challenging. No present technology has proven effective in this measurement process, because the actions, reactions, interactions, and responses of the students are rarely captured as data, owing to their complexity. The purpose of this proposal is to recommend an effective supervising system, after carrying out a feasibility study, by measuring the behavior of the students. This can be achieved by equipping schools with CCTV cameras. CCTV cameras installed in various schools around the world capture the facial expressions and interactions of the students inside and outside their classrooms. The real-time raw video captured from the CCTV can be uploaded to the cloud over a network. The video feeds are distributed across nodes on the same rack or on different racks of the same cluster in Hadoop HDFS. The video feeds are converted into small frames and analyzed using various pattern recognition algorithms and the MapReduce algorithm. The video frames are then compared with a benchmarking database of good behavior. When misbehavior is detected, an alert message can be sent to the counseling department, which helps in mentoring the students. This will help in improving the effectiveness of the education process. As video feeds come from multiple geographical areas (schools from different parts of the world), big data helps in real-time analysis, as it computationally reveals patterns, trends, and associations, especially those relating to human behavior and interactions. It also analyzes data that cannot be analyzed by traditional software applications such as RDBMSs and OODBMSs, and it has proven successful in handling human reactions with ease. Therefore, big data could certainly play a vital role in handling this issue, and the effectiveness of the education process can be enhanced with the help of video analytics using the latest big data technology.
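
A minimal single-node sketch of the frame-extraction and benchmark-comparison step described above is given below, using OpenCV. The file names and threshold are hypothetical, and the Hadoop/HDFS distribution, MapReduce aggregation, and the actual pattern-recognition models are not reproduced here; a simple frame-difference score stands in for them.

```python
import cv2
import numpy as np

def frame_difference_alerts(video_path, benchmark_path, every_n=30, threshold=40.0):
    """Extract every n-th frame from a CCTV clip and flag frames whose mean
    absolute difference from a benchmark ('expected behaviour') frame exceeds
    a threshold. A stand-in for the pattern-recognition stage of the pipeline."""
    benchmark = cv2.imread(benchmark_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(video_path)
    alerts, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            gray = cv2.resize(gray, (benchmark.shape[1], benchmark.shape[0]))
            score = float(np.mean(cv2.absdiff(gray, benchmark)))
            if score > threshold:
                alerts.append((idx, score))   # frame index plus deviation score
        idx += 1
    cap.release()
    return alerts

# alerts = frame_difference_alerts("classroom_feed.mp4", "benchmark_frame.png")
# In the proposed pipeline, these per-frame records would be written to HDFS and
# aggregated by a MapReduce job before notifying the counseling department.
```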

Keywords: big data, cloud, CCTV, education process

Procedia PDF Downloads 240
645 The Role of Community Beliefs and Practices on the Spread of Ebola in Uganda, September 2022

Authors: Helen Nelly Naiga, Jane Frances Zalwango, Saudah N. Kizito, Brian Agaba, Brenda N Simbwa, Maria Goretti Zalwango, Richard Migisha, Benon Kwesiga, Daniel Kadobera, Alex Ario Riolexus, Sarah Paige, Julie R. Harris

Abstract:

Background: Traditional community beliefs and practices can facilitate the spread of Ebola virus during outbreaks. On September 20, 2022, Uganda declared a Sudan Virus Disease (SVD) outbreak after a case was confirmed in Mubende District. During September–November 2022, the outbreak spread to eight additional districts. We investigated the role of community beliefs and practices in the spread of SUDV in Uganda in 2022. Methods: A qualitative study was conducted in Mubende, Kassanda, and Kyegegwa districts in February 2023. We conducted nine focus group discussions (FGDs) and six key informant interviews (KIIs). FGDs included SVD survivors, household members of SVD patients, traditional healers, religious leaders, and community leaders. Key informants included community, political, and religious leaders, traditional healers, and health workers. We asked about community beliefs and practices to understand if and how they contributed to the spread of SUDV. Interviews were recorded, translated, transcribed, and analyzed thematically. Results: Frequently-reported themes included beliefs that the community deaths, later found to be due to SVD, were the result of witchcraft or poisoning. Key informants reported that SVD patients frequently first consulted traditional healers or spiritual leaders before seeking formal healthcare, and noted that traditional healers treated patients with signs and symptoms of SVD without protective measures. Additional themes included religious leaders conducting laying-on-of-hands prayers for SVD patients and symptomatic contacts, SVD patients and their symptomatic contacts hiding in friends’ homes, and exhumation of SVD patients originally buried in safe and dignified burials, to enable traditional burials. Conclusion: Multiple community beliefs and practices likely promoted SVD outbreak spread during the 2022 outbreak in Uganda. Engaging traditional and spiritual healers early during similar outbreaks through risk communication and community engagement efforts could facilitate outbreak control. Targeted community messaging, including clear biological explanations for clusters of deaths and information on the dangers of exhuming bodies of SVD patients, could similarly facilitate improved control in future outbreaks in Uganda.

Keywords: Ebola, Sudan virus, outbreak, beliefs, traditional

Procedia PDF Downloads 55
644 Effective Service Provision and Multi-Agency Working in Service Providers for Children and Young People with Special Educational Needs and Disabilities: A Mixed Methods Systematic Review

Authors: Natalie Tyldesley-Marshall, Janette Parr, Anna Brown, Yen-Fu Chen, Amy Grove

Abstract:

It is widely recognised in policy and research that the provision of services for children and young people (CYP) with Special Educational Needs and Disabilities (SEND) is enhanced when health and social care and education services collaborate and interact effectively. In the UK, there have been significant changes to policy and provision that support and improve collaboration. However, professionals responsible for implementing these changes face multiple challenges, including a lack of specific implementation guidance or a framework to illustrate how effective multi-agency working could or should work. This systematic review will identify the key components of effective multi-agency working in services for CYP with SEND, and the most effective forms of partnership working in this setting. The review highlights interventions that lead to service improvements and the conditions in the local area that support and encourage success. A protocol was written and registered with PROSPERO (registration CRD42022352194). Searches were conducted on several health, care, education, and applied social science databases from the year 2012 onwards. Citation chaining was undertaken, as well as broader grey literature searching to enrich the findings. Qualitative, quantitative, and mixed methods studies and systematic reviews were included, assessed independently, and critically appraised or assessed for risk of bias using appropriate tools based on study design. Data were extracted in NVivo software and checked by a more experienced researcher. A convergent segregated approach to synthesis and integration was used, in which the quantitative and qualitative data were synthesised independently and then integrated using a joint display integration matrix. Findings demonstrate the key ingredients of effective partnership working in services delivering SEND provision. Interventions deemed effective are described, and lessons learned across interventions are summarised. The results will be of interest to educators and health and social care professionals who provide services to those with SEND. They will also be used to develop policy recommendations for how UK healthcare, social care, and education services for CYP with SEND aged 0-25 can most effectively collaborate and achieve service improvement. The review will also identify any gaps in the literature to recommend areas for future research. Funding for this review was provided by the Department for Education.

Keywords: collaboration, joint commissioning, service delivery, service improvement

Procedia PDF Downloads 107
643 A Comparison between Five Indices of Overweight and Their Association with Myocardial Infarction and Death, 28-Year Follow-Up of 1000 Middle-Aged Swedish Employed Men

Authors: Lennart Dimberg, Lala Joulha Ian

Abstract:

Introduction: Overweight (BMI 25-30) and obesity (BMI 30+) have consistently been associated with cardiovascular (CV) risk and death since the Framingham Heart Study began in 1948, and BMI was included in the original Framingham risk score (FRS). Background: Myocardial infarction (MI) poses a serious threat to the patient's life. In addition to BMI, several other indices of overweight have been presented and argued to replace FRS as more relevant measures of CV risk. These indices include waist circumference (WC), waist/hip ratio (WHR), sagittal abdominal diameter (SAD), and sagittal abdominal diameter to height ratio (SADHtR). Specific research question: This study evaluates the interrelationship between the various body measurements (BMI, WC, WHR, SAD, and SADHtR) and determines which measurement is most strongly associated with MI and death. Methods: In 1993, 1,000 middle-aged, Caucasian, randomly selected working men of the Swedish Volvo-Renault cohort were surveyed at a nurse-led health examination with a questionnaire, EKG, laboratory tests, blood pressure, height, weight, waist, and sagittal abdominal diameter measurements. Outcome data on myocardial infarction over 28 years come from Swedeheart (the Swedish national myocardial infarction registry) and the Swedish death registry. The Aalen-Johansen and Kaplan-Meier methods were used to estimate the cumulative incidences of MI and death. Multiple logistic regression analyses were conducted to compare BMI with the other four body measurements. The risk associated with the various measures of obesity was calculated as odds ratios (OR) by quartiles, with accumulated first-time myocardial infarction and death as outcomes. The ORs between the 4th and the 1st quartile of each measure were calculated to estimate the association between the body measurement variables and the cumulative incidence of MI over time. Two-sided P values below 0.05 were considered statistically significant. Unadjusted odds ratios were calculated for the obesity indicators, MI, and death. Adjustments for age, diabetes, SBP, the ratio of total cholesterol/HDL-C, and blue/white collar status were performed. Results: Of the 1,000 men, 959 subjects had full information on the five different body measurements. Of those, 90 participants had a first MI, and 194 died. The study showed a high and significant correlation between the five different body measurements, and all were associated with CVD risk factors. All body measurements were significantly associated with MI, with the highest OR (3.6) seen for SADHtR and WC. After adjustment, all but SADHtR remained significant, with weaker ORs. As for all-cause mortality, WHR (OR = 1.7), SAD (OR = 1.9), and SADHtR (OR = 1.6) were significantly associated, but not WC and BMI. After adjustment, only WHR and SAD remained significantly associated with death, with attenuated ORs.
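
A sketch of the quartile-based odds-ratio comparison described above (4th vs. 1st quartile, optionally adjusted) is given below, using statsmodels. The file and column names are hypothetical placeholders and would need to match the actual cohort data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def quartile_odds_ratio(df, measure, outcome="mi", adjust=()):
    """Odds ratio (4th vs. 1st quartile) for one obesity index, optionally
    adjusted for covariates such as age, diabetes, SBP, TC/HDL-C, collar status."""
    d = df.copy()
    d["q"] = pd.qcut(d[measure], 4, labels=[1, 2, 3, 4])   # quartile assignment
    d = d[d["q"].isin([1, 4])]                              # keep extreme quartiles only
    d["q4"] = (d["q"] == 4).astype(int)                     # indicator: top vs. bottom quartile
    X = sm.add_constant(d[["q4", *adjust]].astype(float))
    fit = sm.Logit(d[outcome].astype(float), X).fit(disp=0)
    or_ = float(np.exp(fit.params["q4"]))
    lo, hi = np.exp(fit.conf_int().loc["q4"])               # 95% confidence limits for the OR
    return or_, lo, hi

# Hypothetical usage; column names are placeholders:
# df = pd.read_csv("volvo_renault_cohort.csv")
# for m in ["bmi", "wc", "whr", "sad", "sad_height_ratio"]:
#     print(m, quartile_odds_ratio(df, m, outcome="mi",
#           adjust=("age", "diabetes", "sbp", "tc_hdl_ratio", "blue_collar")))
```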

Keywords: BMI, death, epidemiology, myocardial infarction, risk factor, sagittal abdominal diameter, sagittal abdominal diameter to height, waist circumference, waist-hip ratio

Procedia PDF Downloads 96
642 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one interaction (a Watson-Crick base pair) between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram and whose edges represent chord intersections. This intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling an LCD in terms of the relations among its chords. This set is composed of the crossing, nesting, and concatenation operators. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modeled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. This LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus and for studying the relations among other invariants.
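
The intersection-graph construction that the classification rests on can be made concrete with a short sketch: chords on the backbone are endpoint pairs, and two chords are adjacent in the intersection graph exactly when their endpoints interleave (crossing), as opposed to being nested or disjoint. The grammar and rewriting rules themselves are not reproduced here.

```python
def intersection_graph(chords):
    """Build the intersection graph of a linear chord diagram.
    `chords` is a list of (left, right) endpoint positions on the backbone;
    chords i and j cross exactly when their endpoints interleave:
    a_i < a_j < b_i < b_j (or vice versa)."""
    chords = [tuple(sorted(c)) for c in chords]
    n = len(chords)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        a1, b1 = chords[i]
        for j in range(i + 1, n):
            a2, b2 = chords[j]
            if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:   # interleaving endpoints = crossing
                adj[i].add(j)
                adj[j].add(i)
    return adj

# Example: chord 0 crosses chord 1, chord 1 crosses chord 2, chords 0 and 2 are disjoint.
print(intersection_graph([(0, 2), (1, 4), (3, 5)]))
# {0: {1}, 1: {0, 2}, 2: {1}}
```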

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 201
641 Determinants of Healthcare Team Effectiveness in Subterranean Settings: A Mixed-Methods Study

Authors: Nasra Idilbi, Jalal Tarabeia, Layalleh Masalha, Heiam Shoufani Kassis, Gizell Green

Abstract:

Background: Healthcare professionals working in underground facilities face unique challenges affecting their physical and mental health and team effectiveness. We aimed to examine how an underground work environment affects the physical and mental health and effectiveness of a multi-professional medical team in a medical center under continuous war threats and the contribution of various demographic and professional characteristics. Methods: A cross-sectional survey was disseminated electronically. The questionnaire assessed team effectiveness, the quality of the work, and the health symptoms reported by the team while working in the underground complex. Results: In total, 270 healthcare workers (mean age 40 years, 75.6% females, 88.4% nurses) completed the questionnaire. Women reported statistically significantly higher mean scores of physical strain, fatigue, and eye irritation associated with the work environment compared to men. Multiple regression analysis revealed that psychological distress, noise, and lighting in the underground compound significantly influenced team effectiveness. The qualitative analysis revealed two key themes: the mental health impact of working in an underground environment and the effects of noise and lighting on staff performance. Nurses reported feelings of suffocation, claustrophobia, and difficulty concentrating due to the enclosed space, with some expressing heightened stress levels that impaired their ability to work effectively and safely. Female staff reported more pronounced symptoms of physical strain, fatigue, and eye irritation. Additionally, the underground complex’s poor noise absorption created a highly disruptive work environment, while inadequate lighting hindered accurate patient assessments, leading to potential errors. These challenges were exacerbated by physical symptoms like headaches and nausea, which further impacted job performance. The findings underscore the significant role of environmental factors in influencing both mental health and operational effectiveness, aligning with quantitative data on the predictors of team performance. Conclusions: The underground work environment is crucial in influencing healthcare team effectiveness, with psychological distress, noise, and lighting as key factors. The study highlights the importance of creating a comfortable work environment to foster team efficiency. The findings provide valuable insights for managers in underground healthcare facilities to optimize team performance and well-being.

Keywords: team effectiveness, underground settings, healthcare, environmental factors, mixed-methods study

Procedia PDF Downloads 2
640 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is refined with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. Next, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs showing the frequency distributions of crystal area and perimeter are created automatically. This methodological process resulted in a high capacity for segmenting graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, on the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater effort in data handling. All in all, the method developed is a considerable time saver of high measurement value, as it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
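
A sketch of the measurement stage downstream of the U-net is given below, assuming a binary segmentation mask is available: connected components are labelled and per-crystal position, area, perimeter, and lateral measures are extracted with scikit-image. The network itself and the per-class standard-deviation refinement are not reproduced, and the pixel size is an assumed calibration value taken from the SEM scale bar.

```python
import pandas as pd
from skimage import measure

def crystal_measurements(mask, pixel_size_nm=1.0):
    """Label a binary segmentation mask (e.g. thresholded U-net output) and
    return per-crystal position and size descriptors, converted to nm via an
    assumed pixel size."""
    labels = measure.label(mask > 0)                  # connected components = crystals
    rows = []
    for r in measure.regionprops(labels):
        minr, minc, maxr, maxc = r.bbox
        rows.append({
            "crystal_id": r.label,
            "centroid_row": r.centroid[0] * pixel_size_nm,
            "centroid_col": r.centroid[1] * pixel_size_nm,
            "area_nm2": r.area * pixel_size_nm ** 2,
            "perimeter_nm": r.perimeter * pixel_size_nm,
            "height_nm": (maxr - minr) * pixel_size_nm,   # lateral measures from the bounding box
            "width_nm": (maxc - minc) * pixel_size_nm,
        })
    return pd.DataFrame(rows)

# df = crystal_measurements(unet_mask, pixel_size_nm=2.5)   # hypothetical mask and calibration
# df["area_nm2"].plot.hist(bins=30)   # frequency distribution by crystal area
```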

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 160
639 The Medical Student Perspective on the Role of Doubt in Medical Education

Authors: Madhavi-Priya Singh, Liam Lowe, Farouk Arnaout, Ludmilla Pillay, Giordan Perez, Luke Mischker, Steve Costa

Abstract:

Introduction: An Emergency Department consultant identified the failure of medical students to complete the task of clerking a patient in its entirety. As six medical students on our first clinical placement, we recognised our own failure and endeavored to examine why this failure was consistent among all medical students that had been given this task, despite our best motivations as adult learners. Aim: Our aim is to understand and investigate the elements which impeded our ability to learn and perform as medical students in the clinical environment, with reference to the prescribed task. We also aim to generate a discussion around the delivery of medical education with potential solutions to these barriers. Methods: Six medical students gathered together to have a comprehensive reflective discussion to identify possible factors leading to the failure of the task. First, we thoroughly analysed the delivery of the instructions with reference to the literature to identify potential flaws. We then examined personal, social, ethical, and cultural factors which may have impacted our ability to complete the task in its entirety. Results: Through collation of our shared experiences, with support from discussion in the field of medical education and ethics, we identified two major areas that impacted our ability to complete the set task. First, we experienced an ethical conflict where we believed the inconvenience and potential harm inflicted on patients did not justify the positive impact the patient interaction would have on our medical learning. Second, we identified a lack of confidence stemming from multiple factors, including the conflict between preclinical and clinical learning, perceptions of perfectionism in the culture of medicine, and the influence of upward social comparison. Discussion: After discussions, we found that the various factors we identified exacerbated the fears and doubts we already had about our own abilities and that of the medical education system. This doubt led us to avoid completing certain aspects of the tasks that were prescribed and further reinforced our vulnerability and perceived incompetence. Exploration of philosophical theories identified the importance of the role of doubt in education. We propose the need for further discussion around incorporating both pedagogic and andragogic teaching styles in clinical medical education and the acceptance of doubt as a driver of our learning. Conclusion: Doubt will continue to permeate our thoughts and actions no matter what. The moral or psychological distress that arises from this is the key motivating factor for our avoidance of tasks. If we accept this doubt and education embraces this doubt, it will no longer linger in the shadows as a negative and restrictive emotion but fuel a brighter dialogue and positive learning experience, ultimately assisting us in achieving our full potential.

Keywords: ethics, medical student, doubt, medical education, faith

Procedia PDF Downloads 107
638 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where it can be used to gather patients' emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said'. Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data, and one of the problems in affective computing is the limited amount of annotated data. The existing labelled emotion datasets are highly subject to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficient) features in a convolutional neural network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets, achieving 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim's cube, which is a three-dimensional projection of emotions. Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions, and the cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by the Emo-CNN are mapped to the cube using a three-component PCA (Principal Component Analysis), which is then used to model human stress. This approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is a first step towards creating a connection between artificial intelligence and the chemistry of human emotions.
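
The general idea of treating MFCCs as image-like input to a small CNN can be sketched as below. The layer sizes are placeholders and not the published Emo-CNN architecture, the file path is hypothetical, and the mapping to the Lovheim cube is only indicated in the closing comment.

```python
import numpy as np
import librosa
import tensorflow as tf

def mfcc_image(path, sr=16000, n_mfcc=40, frames=128):
    """Load an utterance and return a fixed-size MFCC 'image' of shape (n_mfcc, frames, 1)."""
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)   # pad or trim the time axis
    return m[..., np.newaxis].astype("float32")

def build_emotion_cnn(n_classes, n_mfcc=40, frames=128):
    """Placeholder CNN over MFCC maps; not the published Emo-CNN layout."""
    return tf.keras.Sequential([
        tf.keras.layers.Input((n_mfcc, frames, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu", name="embedding"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# After training, the activations of the "embedding" layer could be reduced with a
# three-component PCA (e.g. sklearn.decomposition.PCA(3)) to place utterances in a
# 3-D space analogous to the Lovheim cube, as the abstract describes.
```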

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 154
637 Characterization of Himalayan Phyllite with Reference to Foliation Planes

Authors: Divyanshoo Singh, Hemant Kumar Singh, Kumar Nilankar

Abstract:

Major engineering constructions and foundations (e.g., dams, tunnels, bridges, underground caverns, etc.) in and around the Himalayan region of Uttarakhand are not confined to hard, crystalline rocks but also extend into weak, anisotropic rocks. While constructing in such anisotropic rocks, engineers often encounter geotechnical complications such as structural instability, slope failure, and excessive deformation. These complications arise mainly from inherent anisotropy, such as layering/foliation, preferred mineral orientations, and geo-mechanical anisotropy present within the rocks, and they vary when measured in different directions. Of all the forms of inherent anisotropy present within rocks, major geotechnical complexities arise mainly from an unfavorable orientation of the weak planes (bedding/foliation). The orientation of such weak planes strongly affects the fracture patterns, failure mechanism, and strength of rocks, which motivates an improved understanding of the physico-mechanical behavior of anisotropic rocks with different orientations of weak planes. Therefore, in this study, block samples of phyllite belonging to the Chandpur Group of the Lesser Himalaya were collected from the Srinagar area of Uttarakhand, India, to investigate the effect of foliation angle on the physico-mechanical properties of the rock. The collected block samples were core-drilled to a diameter of 50 mm at different foliation angles β (the angle between the foliation plane and the drilling direction), i.e., 0°, 30°, 60°, and 90°. Before testing, the drilled core samples were oven-dried at 110 °C to achieve uniformity. Physical and mechanical properties such as seismic wave velocity, density, uniaxial compressive strength (UCS), point load strength (PLS), and Brazilian tensile strength (BTS) were determined on the prepared core specimens. The results indicate that the seismic wave velocities (P-wave and S-wave) decrease with increasing β angle: as β increases, the number of foliation planes the wave must pass through increases, causing dissipation of wave energy. The maximum strength for UCS, PLS, and BTS was found at a β angle of 90°. However, the minimum strength for UCS and BTS was found at a β angle of 30°, which differs from PLS, where the minimum strength was found at a β angle of 0°. Furthermore, the failure modes correspond to the strength of the rock, with along-foliation and non-central failures characteristic of low strength values, and multiple fractures and central failures characteristic of high strength values. Thus, this study provides a better understanding of the anisotropic features of phyllite for the purpose of major engineering constructions and foundations within the Himalayan region.

Keywords: anisotropic rocks, foliation angle, physico-mechanical properties, phyllite, Himalayan region

Procedia PDF Downloads 59
636 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport

Authors: Surupa Shaw, Debjyoti Banerjee

Abstract:

Enhancing heat transfer in compact volumes is a challenge when constrained by cost, especially by requirements for minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to the development of chip architectures with increased power consumption. As a consequence, packaging technologies are saddled with the need for higher rates of power dissipation in smaller form factors. The increasing circuit density, the higher heat flux values to be dissipated, and the significant decrease in the size of electronic devices pose thermal management challenges that need to be addressed with a better design of the cooling system. Maximizing the surface area of heat-exchanging surfaces (e.g., extended surfaces or 'fins') enables dissipation of higher heat fluxes. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called 'fractals', i.e., objects with fractional dimensions, unlike regular geometric objects such as spheres or cubes, whose volumes and surface areas scale as integer powers of the length scale. Fractal structures are therefore expected to provide an appropriate technology solution to meet these challenges for enhanced heat transfer in microelectronic devices by maximizing the surface area available to heat-exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of the fractal dimension. The study proposes a model that would enable cost-effective solutions for thermal-fluid transport in energy applications. The objective is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient, as well as pumping power) to variation in fractal dimension. The fractal parameters will be instrumental in establishing the most effective design for optimum cooling of microelectronic devices, which can help establish the minimal pumping power required for enhanced heat transfer during cooling. Results obtained in this study show that the proposed fractal microchannel architectures significantly enhance heat transfer due to the augmentation of surface area in branching networks of varying length scales.
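
As a simple illustration of why branching, fractal-like networks help, the sketch below tallies the wetted surface area and flow volume across the generations of a self-similar bifurcating channel network. The branching ratio, the Murray's-law diameter scaling, and the length scaling are assumed values for illustration, not parameters from this study.

```python
import numpy as np

def branching_network_area(levels, d0=1e-3, l0=1e-2, n_branch=2,
                           d_ratio=2 ** (-1 / 3), l_ratio=0.7):
    """Total wetted surface area and flow volume of a self-similar branching
    channel network. d_ratio = 2^(-1/3) corresponds to Murray's law for a
    bifurcating network; all parameter values here are assumptions."""
    area = volume = 0.0
    for k in range(levels):
        n = n_branch ** k                  # number of channels in generation k
        d = d0 * d_ratio ** k              # channel diameter in generation k
        l = l0 * l_ratio ** k              # channel length in generation k
        area += n * np.pi * d * l          # lateral (wetted) surface area
        volume += n * np.pi * (d / 2) ** 2 * l
    return area, volume

for levels in (1, 3, 5, 7):
    a, v = branching_network_area(levels)
    print(f"{levels} generations: area = {a:.3e} m^2, area/volume = {a / v:.1f} 1/m")
# The area-to-volume ratio climbs with each added generation, which is the mechanism
# exploited here to pack more heat-exchange surface into a fixed package volume;
# the pumping-power penalty would be assessed separately.
```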

Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement

Procedia PDF Downloads 318
635 Inter-Personal and Inter-Organizational Relationships in Supply Chain Integration: A Resource Orchestration Perspective

Authors: Bill Wang, Paul Childerhouse, Yuanfei Kang

Abstract:

Purpose: The research extends resource orchestration theory (ROT) into the supply chain management (SCM) area to investigate dyadic relationships at both the individual and organizational levels in supply chain integration (SCI). We also explore the interaction mechanism between inter-personal relationships (IPRs) and inter-organizational relationships (IORs) during the whole SCI process. Methodology/approach: The research employed an exploratory multiple case study of four New Zealand companies. Data were collected via semi-structured interviews with top, middle, and lower-level managers and operators from different departments of both suppliers and customers, triangulated with company archival data. Findings: The research highlights the important role of both IPRs and IORs throughout the SCI process. Both IPRs and IORs are valuable, inimitable resources, but IORs are formal and external while IPRs are informal and subordinate. In the initial stage of the SCI process, IPRs are key resource antecedents to IOR building, and the three IPR dimensions work differently: personal credibility acts as an icebreaker, strengthening the confidence that forms IORs; personal affection acts as a gatekeeper; and personal communication expedites the IOR-building process. In the maintenance and development stage, IORs and IPRs interact with each other continuously: good interaction between IPRs and IORs can facilitate the SCI process, while poor interaction can damage it. On the other hand, over the life cycle of the SCI process, IPRs can facilitate the formation and development of IORs, while IOR development can cultivate the ties of IPRs. Of the three dimensions of IPRs, personal communication plays a more important role in developing IORs than personal credibility and personal affection. Originality/value: This research contributes to ROT in the supply chain management literature by highlighting the interaction of IPRs and IORs in SCI. The intangible resources and capabilities of the three IPR dimensions need to be orchestrated and nurtured to achieve efficient and effective IORs in SCI. Likewise, IPRs and IORs need to be orchestrated in terms of the breadth, depth, and life cycle of the whole SCI process. Our study provides further insight into the rarely explored inter-personal level of SCI. Managerial implications: Our research provides top management with further evidence of the significant roles of IPRs at different levels when working with trading partners. This highlights the need to actively manage and develop these soft IPR skills as an intangible competitive resource. Further, the research identifies when staff with specific skills and connections should be utilized during the different stages of building and maintaining inter-organizational ties. Most importantly, top management needs to orchestrate and balance the resources of IPRs and IORs.

Keywords: case study, inter-organizational relationships, inter-personal relationships, resource orchestration, supply chain integration

Procedia PDF Downloads 233
634 Investigating the Feasibility of Berry Production in Central Oregon under Protected and Unprotected Culture

Authors: Clare S. Sullivan

Abstract:

The high desert of central Oregon, USA is a challenging growing environment: short growing season (70-100 days); average annual precipitation of 280 mm; drastic swings in diurnal temperatures; possibility of frost any time of year; and sandy soils low in organic matter. Despite strong demand, there is almost no fruit grown in central Oregon due to potential yield loss caused by early and late frosts. Elsewhere in the USA, protected culture (i.e., high tunnels) has been used to extend fruit production seasons and improve yields. In central Oregon, high tunnels are used to grow multiple high-value vegetable crops, and farmers are unlikely to plant a perennial crop in a high tunnel unless proven profitable. In May 2019, two berry trials were established on a farm in Alfalfa, OR, to evaluate raspberry and strawberry yield, season length, and fruit quality in protected (high tunnels) vs. unprotected culture (open field). The main objective was to determine whether high tunnel berry production is a viable enterprise for the region. Each trial was arranged using a split-plot design. The main factor was the production system (high tunnel vs. open field), and the replicated, subplot factor was berry variety. Four day-neutral strawberry varieties and four primocane-bearing raspberry varieties were planted for the study and were managed using organic practices. Berries were harvested once a week early in the season, and twice a week as production increased. Harvested berries were separated into ‘marketable’ and ‘unmarketable’ in order to calculate percent cull. First-year results revealed berry yield and quality differences between varieties and production systems. Strawberry marketable yield and berry fruit size increased significantly in the high tunnel compared to the field; percent yield increase ranged from 7-46% by variety. Evie 2 was the highest yielding strawberry, although berry quality was lower than other berries. Raspberry marketable yield and berry fruit size tended to increase in the high tunnel compared to the field, although variety had a more significant effect. Joan J was the highest yielding raspberry and out-yielded the other varieties by 250% outdoor and 350% indoor. Overall, strawberry and raspberry yields tended to improve in high tunnels as compared to the field, but data from a second year will help determine whether high tunnel investment is worthwhile. It is expected that the production system will have more of an effect on berry yield and season length for second-year plants in 2020.

Keywords: berries, high tunnel, local food, organic

Procedia PDF Downloads 118
633 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than those of a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, the method of modified maximum likelihood is preferred, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least square estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed and, hence, the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, multiple linear regression models with random errors following a non-normal pattern are studied. Through an extensive simulation it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared with the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
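As an illustration only (not the authors' modified maximum likelihood estimator), the sketch below simulates a regression with skew-t errors, generated via the Azzalini construction (a skew-normal variate divided by the square root of a scaled chi-square), and compares ordinary least squares with a heavy-tail-robust M-estimator from statsmodels standing in for a robust alternative. All parameter values are assumptions.

```python
# Illustrative sketch: regression with skew-t errors (Azzalini construction),
# comparing OLS with a robust M-estimator. This is NOT the modified maximum
# likelihood estimator described in the abstract; it only illustrates why
# least squares degrades under skewed, fat-tailed errors.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, beta0, beta1 = 500, 2.0, 1.5
x = rng.uniform(0, 10, n)

# Skew-t errors: skew-normal numerator divided by sqrt(chi2_nu / nu)
alpha, nu = 5.0, 4.0                                   # skewness and degrees of freedom (assumed)
w = stats.skewnorm.rvs(a=alpha, size=n, random_state=rng)
v = stats.chi2.rvs(df=nu, size=n, random_state=rng) / nu
eps = w / np.sqrt(v)

y = beta0 + beta1 * x + eps
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print("OLS estimates:   ", ols.params)
print("Robust estimates:", rlm.params)
```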

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 397
632 A Review of How COVID-19 Has Created an Insider Fraud Pandemic and How to Stop It

Authors: Claire Norman-Maillet

Abstract:

Insider fraud, including its various synonyms such as occupational, employee, or internal fraud, is a major financial crime threat whereby an employee defrauds (or attempts to defraud) their current, prospective, or past employer. 'Employee' covers anyone employed by the company, including contractors, directors, and part-time staff; they may be a solo bad actor or working in collusion with others, whether internal or external. Insider fraud is even more of a concern given the impacts of the Coronavirus pandemic, which has generated multiple opportunities to commit insider fraud. Insider fraud is not necessarily thought of as a significant financial crime threat; the focus of most academics and practitioners has historically been on 'external fraud' against businesses or entities by individuals or groups with no professional ties to them. Without the face-to-face, 'over the shoulder' capability of staff being able to keep an eye on their employees, there is a heightened reliance on trust and transparency, and with this, naturally, comes an increased risk of insider fraud perpetration. The objective of the research is to better understand how companies are impacted by insider fraud, and therefore how to stop it. This research makes an original contribution and stimulates debate within the financial crime field. The financial crime landscape is never static: criminals are always creating new ways to perpetrate financial crime, while new legislation and regulations are implemented to strengthen controls, in addition to what businesses do internally to detect and prevent it. By focusing on insider fraud specifically, the research is more specific and of greater use to those in the field. To achieve the aims of the research, semi-structured interviews were conducted with 22 individuals who either work in financial services and deal with insider fraud or engage with insider fraud in a recruitment or advisory capacity. This enabled the sourcing of information from a wide range of individuals in a setting where they were able to elaborate on their answers. The principal recruitment strategy was engaging with the researcher's network on LinkedIn. The interviews were then transcribed and analysed thematically. The main findings suggest that insider fraud has been ignored owing to a refusal to accept the possibility that colleagues would defraud their employer. Whilst Coronavirus has led to a significant rise in insider fraud, this type of crime has been a major risk to businesses since their inception; however, it has never been given the financial or strategic backing required to be mitigated, until it is too late. Furthermore, Coronavirus should have led to companies tightening their access rights, controls, and policies to mitigate the insider fraud risk, but in most cases this has not happened. The research concludes that insider fraud needs to be given a platform upon which to be recognised as a threat to any company and given the same level of weighting and attention by Executive Committees and Boards as other types of economic crime.

Keywords: fraud, insider fraud, economic crime, coronavirus, Covid-19

Procedia PDF Downloads 68
631 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
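For illustration, a minimal sketch of the two ingredients the abstract combines is shown below: a soft-label distillation loss computed on each client against a reference teacher, and plain federated averaging of the resulting student models. The function names, hyperparameters, and training loop are assumptions for the sake of the example, not the authors' FKD implementation.

```python
# Minimal sketch of federated averaging plus a distillation loss, to illustrate
# the combination described above. Names and defaults are assumptions, not the
# authors' implementation.
import copy
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target KL loss (temperature T) with ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def local_distill(student, teacher, loader, epochs=1, lr=1e-3):
    """One client's local round: distill the frozen teacher into a copy of the student."""
    student = copy.deepcopy(student)
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    teacher.eval()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                t_logits = teacher(x)
            loss = distillation_loss(student(x), t_logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student.state_dict()

def fedavg(state_dicts):
    """Average client state dicts parameter-wise (plain FedAvg with equal weights)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg
```

Only the distilled student weights leave each client in this sketch, which is the privacy-preserving property the abstract emphasizes.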

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 75
630 Freight Forwarders’ Liability: A Need for Revival of Unidroit Draft Convention after Six Decades

Authors: Mojtaba Eshraghi Arani

Abstract:

Freight forwarders, who are known as the architects of transportation, play a vital role in supply chain management. The package of various services they provide has made the legal nature of freight forwarders very controversial, so that they might be qualified in one case as principal or carrier and, on other occasions, as agent of the shipper, as the case may be. They can even be involved in the transportation process as the agent of the shipping line, which makes the situation much more complicated. Courts in all countries have long had trouble distinguishing the 'forwarder as agent' from the 'forwarder as principal' (as is outstanding in the prominent case of Vastfame Camera Ltd v Birkart Globistics Ltd And Others, 2005, Hong Kong). It is not fully known, in the case of a claim against the forwarder, which particular parameter a judge would use among the multiple, and sometimes contradictory, tests for determining the scope of the forwarder's liability. In particular, every country has its own legal parameters for qualifying freight forwarders, completely different from those of other countries, as is the case in France in comparison with Germany and England. The unpredictability of the courts' decisions in this regard has given freight forwarders the opportunity to impose any limitation or exception of liability while pretending to play the role of a principal, consequently causing the cargo interests to incur ever-increasing damage. The transportation industry needs to remove such uncertainty by unifying the national laws governing freight forwarders' liability. Long ago, in 1967, the International Institute for the Unification of Private Law (UNIDROIT) prepared a draft convention called the 'Draft Convention on Contract of Agency for Forwarding Agents Relating to International Carriage of Goods' (hereinafter the 'UNIDROIT draft convention'). The UNIDROIT draft convention provided a clear and certain framework for the liability of the freight forwarder in each capacity, as agent or carrier, but it failed to become a convention and was eventually consigned to oblivion. Today, nearly six decades later, the necessity of such a convention can be felt clearly. However, one might reason that the same grounds, in particular the resistance by the forwarders' association, FIATA, still exist, and thus it is not logical to revive a forgotten draft convention after such a long period of time. It is argued in this article that the main reason for resisting the UNIDROIT draft convention in the past was the pending effort to develop the 1980 United Nations Convention on International Multimodal Transport of Goods. However, the latter convention failed to enter into force in due time, with no new accession since 1996, as a result of which the UNIDROIT draft convention must be strongly revived and immediately submitted to the relevant diplomatic conference. A qualitative method based on the interpretation of the collected data has been used in this manuscript; the sources of the data are international conventions and case law.

Keywords: freight forwarder, revival, agent, principal, unidroit, draft convention

Procedia PDF Downloads 74
629 Cloud Based Supply Chain Traceability

Authors: Kedar J. Mahadeshwar

Abstract:

Concept introduction: This paper discusses how an innovative, cloud-based, analytics-enabled solution could address a major industry challenge that is approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing rapidly. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification, and serialization, phasing in starting January 1, 2015 for manufacturers, repackagers, wholesalers, and pharmacies/clinics. Similarly, pressures are building up in Europe, China, and many other countries that would require absolute traceability of every drug and device end to end. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can be the leader in coming up with a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains a big concern, as per the UAE pharma and healthcare report, Q1 2015. The distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, such systems are far from being able to trace a lot and serial number beyond the enterprise and make this information easily available in real time. Solution: The solution described here involves a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution covering millions of distribution transactions that capture the lots of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? An opportunity exists, given the huge investment made in Dubai Healthcare City, to use technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators/companies to run and host such a cloud-based solution and become a global hub of traceability.
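For illustration, the sketch below shows the kind of serialized traceability record such a cloud platform might store for each change of custody, plus a helper that reconstructs the end-to-end chain for one unit. The field names are assumptions loosely modeled on DSCSA-style transaction data, not an actual product or regulatory schema.

```python
# Illustrative data model for one supply-chain traceability event.
# Field names are assumptions, not a real DSCSA or vendor schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceEvent:
    gtin: str            # product identifier
    serial_number: str   # unit-level serial
    lot_number: str
    expiry: str
    from_party: str      # e.g. manufacturer, wholesaler
    to_party: str        # e.g. distributor, pharmacy, hospital
    event_type: str      # "ship", "receive", "dispense", ...
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def trace_unit(events, serial_number):
    """Reconstruct the end-to-end chain of custody for one serialized unit."""
    chain = [e for e in events if e.serial_number == serial_number]
    return sorted(chain, key=lambda e: e.timestamp)
```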

Keywords: cloud, pharmaceutical, supply chain, tracking

Procedia PDF Downloads 527
628 Combination Therapies Targeting Apoptosis Pathways in Pediatric Acute Myeloid Leukemia (AML)

Authors: Ahlam Ali, Katrina Lappin, Jaine Blayney, Ken Mills

Abstract:

Leukaemia is the most frequently occurring type of paediatric cancer (30% of cases). Of these, approximately 80% are acute lymphoblastic leukaemia (ALL), with acute myeloid leukaemia (AML) cases making up the remaining 20% alongside other leukaemias. Unfortunately, children with AML do not have a promising prognosis, with only 60% surviving 5 years or longer. The need for age-specific therapies for AML patients has been highlighted recently, as paediatric AML cases have a different mutational landscape compared with AML diagnosed in adult patients. Drug repurposing is a recognized strategy in drug discovery and development whereby an already approved drug is used for diseases other than those originally indicated. We aim to identify novel combination therapies with the promise of providing alternative, more effective and less toxic induction therapy options. Our in-silico analysis highlighted 'cell death and survival' as an aberrant, potentially targetable pathway in paediatric AML patients. On this basis, 83 apoptosis-inducing compounds were screened. A preliminary single-agent screen was also performed to eliminate potentially toxic chemicals; the drugs were then constructed into a pooled library with 10 drugs per well over 160 wells, giving 45 possible pairs and 120 triples in each well. Seven cell lines were used during this study to represent the clonality of AML in paediatric patients (Kasumi-1, CMK, CMS, MV11-14, PL21, THP1, MOLM-13). Cytotoxicity was assessed up to 72 hours using CellTox™ Green reagent. Fluorescence readings were normalized to a DMSO control. A Z-score was assigned to each well based on the mean and standard deviation of all the data. Combinations with a Z-score <2 were eliminated, and the remaining wells were taken forward for further analysis. A well was considered 'successful' if each drug individually demonstrated a Z-score <2 while the combination exhibited a Z-score >2. Each of the ten compounds in one well (well 155) had minimal or no effect on cell viability as a single agent; however, a combination of two or more of the compounds resulted in a substantial increase in cell death, so the ten compounds were de-convoluted to identify possible synergistic pair/triple combinations. The screen identified two possible 'novel' drug pairings, with the BCL2 inhibitor ABT-737 combined with either the CDK inhibitor Purvalanol A or the AKT/PI3K inhibitor LY294002 (ABT-737 100 nM + Purvalanol A 1 µM; ABT-737 100 nM + LY294002 2 µM). Three possible triple combinations were identified (LY2409881 + Akti-1/2 + Purvalanol A, SU9516 + Akti-1/2 + Purvalanol A, and ABT-737 + LY2409881 + Purvalanol A), which will be taken forward to examine their efficacy at varying concentrations and dosing schedules across multiple paediatric AML cell lines for optimisation of maximum synergy. We believe that our combination screening approach has potential for future use with a larger cohort of drugs, including FDA-approved compounds and patient material.
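A minimal sketch of the Z-score hit-calling rule described above is shown below, applied to illustrative normalized readings: a combination is flagged when each single agent has Z < 2 but the pair has Z > 2. The readings, padding, and well labels are made up for the example.

```python
# Illustrative Z-score hit calling for the pooled/de-convoluted screen described
# above. Readings are made up; only the rule is illustrated.
import numpy as np

def z_scores(values):
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std(ddof=1)

# DMSO-normalized cell-death signal for two single agents and their pair,
# padded with inert wells to mimic a full plate.
readings = {
    "ABT-737 (100 nM)": 1.05,
    "Purvalanol A (1 uM)": 1.10,
    "ABT-737 + Purvalanol A": 2.60,
}
all_values = list(readings.values()) + [1.0] * 150
z = dict(zip(readings, z_scores(all_values)[: len(readings)]))

singles_inactive = z["ABT-737 (100 nM)"] < 2 and z["Purvalanol A (1 uM)"] < 2
pair_active = z["ABT-737 + Purvalanol A"] > 2
print("candidate synergistic pair:", singles_inactive and pair_active)
```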

Keywords: AML, drug repurposing, ABT-737, apoptosis

Procedia PDF Downloads 203
627 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date, but the acceleration of project execution and the resulting rework can entail significant costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculation related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, even though several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity that occurs when a project change happens. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities. The speed of productivity reduction for 30-job projects is about 25 percent faster than for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity: the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented; there is a higher loss of productivity when the amount of resources is restricted.
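As a toy illustration only (not the authors' simulation model), the sketch below shows one of the reported effects: the same block of change-induced rework, absorbed as overtime by the remaining activities, costs a small project proportionally more productivity than a large one, because each activity carries a larger overload. All parameter values are arbitrary assumptions.

```python
# Toy illustration (not the authors' model) of the project-size effect described
# above: the same rework load spread over fewer activities produces a larger
# productivity drop. Parameter values are arbitrary assumptions.
def final_productivity(n_activities, change_at_fraction=0.25,
                       rework_hours=400.0, base_hours=40.0, decay=0.15):
    remaining = n_activities - int(n_activities * change_at_fraction)
    extra_per_activity = rework_hours / remaining    # overtime share per remaining activity
    overload = extra_per_activity / base_hours       # extra load relative to normal capacity
    return max(0.3, 1.0 - decay * overload)          # productivity drops with the overload

for n in (30, 120):
    print(f"{n}-activity project: productivity after change = {final_productivity(n):.2f}")
```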

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 238
626 The Effect of Group Counseling on the Victimhood Perceptions of Adolescent Who Are the Subject of Peer Victimization and on Their Coping Strategies

Authors: İsmail Seçer, Taştan Seçer

Abstract:

In this study, the effect of group counseling on the victimhood perceptions of primary school 7th and 8th grade students who were determined to be subjects of peer victimization, and on their ways of coping with it, was analyzed. The research model is the Solomon Four-Group Experimental Model. In this model, there are four groups determined by random sampling; two of the groups were used as experimental groups and the other two as control groups. The Solomon model is defined as a true experimental model: in true experimental models, there are multiple groups consisting of subjects with similar characteristics, and the selection of subjects is done by random sampling. For this purpose, 230 students from Kültür Kurumu Primary School in Erzurum were asked to fill in the Adolescent Peer Victim Form. The 100 students with the highest victimization scores, who were determined to be subjects of bullying, were interviewed face to face, informed about the current study, and asked whether they were willing to participate. As a result of these interviews, 60 students were selected to participate in the experimental study, and four groups of 15 students each were created with the simple random sampling method. After the groups had been formed, the experimental and control groups were determined by drawing lots. An 11-session group counseling program, prepared by the researcher on the basis of the literature, was then applied. The purpose of the group counseling was to change participants' ineffective ways of coping with bullying and their victimhood perceptions. Each session was planned to last 75 minutes and was applied as planned. In the control groups, the counseling activities in the primary school counseling curriculum were applied for 11 weeks. As a result of the study, the physical, emotional, and verbal victimhood perceptions of the participants in the experimental groups decreased significantly compared with the pre-experimental situation and with those in the control groups. Moreover, this change in the victimhood perceptions of the experimental groups occurred independently of variables such as gender, age, and academic success. The first finding related to coping strategies is that the scores of participants in the experimental groups for ineffective coping strategies, such as despair and avoidance, decreased significantly compared with the pre-experimental situation and with those in the control groups. The second finding is that the scores of participants in the experimental groups for effective coping strategies, such as seeking help, consulting social support, resistance, and optimism, increased significantly compared with the pre-experimental situation and with those in the control groups. According to the evidence obtained through the study, it can be said that group counseling is an effective approach for changing the victimhood perceptions of individuals who are subjects of bullying and their strategies for coping with it.

Keywords: bullying, perception of victimization, coping strategies, ANCOVA analysis

Procedia PDF Downloads 391
625 Cognition in Crisis: Unravelling the Link Between COVID-19 and Cognitive-Linguistic Impairments

Authors: Celine Davis

Abstract:

The novel coronavirus disease 2019 (COVID-19) is an infectious disease caused by the virus SARS-CoV-2, which has detrimental respiratory, cardiovascular, and neurological effects, impacting over one million lives in the United States. New research has emerged indicating long-term neurologic consequences in those who survive COVID-19 infections, including more than seven million Americans and another 27 million people worldwide. These consequences include attentional deficits, memory impairments, executive function deficits, and aphasia-like symptoms, which fall within the purview of speech-language pathology. The National Health Interview Survey (NHIS) is a comprehensive annual survey conducted by the National Center for Health Statistics (NCHS), a branch of the Centers for Disease Control and Prevention (CDC) in the United States. The NHIS is one of the most significant sources of health-related data in the country and has been conducted since 1957. The longitudinal nature of the survey allows for the analysis of trends in various variables over the years, which can be essential for understanding societal changes and making treatment recommendations. The current study will utilize NHIS data from 2020-2022, which contained interview questions specifically related to COVID-19. Adult cases of individuals between the ages of 18 and 50 diagnosed with COVID-19 in the United States during 2020-2022 will be identified using the NHIS. Multiple regression analysis of self-reported data on confirmed COVID-19 infection status and challenges with concentration, communication, and memory will be performed. Latent class analysis will be utilized to identify subgroups in the population and to indicate whether certain demographic groups have higher susceptibility to cognitive-linguistic deficits associated with COVID-19. Completion of this study will reveal whether there is an association between a confirmed COVID-19 diagnosis and a heightened incidence of cognitive deficits, and the subsequent implications, if any, for activities of daily living. This study is distinct in its aim to use national survey data to explore the relationship between confirmed COVID-19 diagnosis and the prevalence of cognitive-communication deficits, with a secondary focus on resulting activity limitations. To the best of the author's knowledge, this will be the first large-scale epidemiological study investigating the associations between cognitive-linguistic deficits, COVID-19, and implications for activities of daily living in the United States population. These findings will highlight the need for targeted interventions and support services to address the cognitive-communication needs of individuals recovering from COVID-19, thereby enhancing their overall well-being and functional outcomes.
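For illustration, a sketch of the planned regression analysis on simulated survey-style records is shown below: a logistic regression of self-reported cognitive difficulty on confirmed COVID-19 status, adjusted for age and sex. The variable names and simulated effect sizes are assumptions and do not correspond to actual NHIS field names or results.

```python
# Illustrative analysis sketch on simulated survey-style data. Variable names
# and effect sizes are assumptions and do NOT correspond to actual NHIS fields.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "covid_ever": rng.integers(0, 2, n),   # self-reported confirmed COVID-19 (0/1)
    "age": rng.integers(18, 51, n),
    "female": rng.integers(0, 2, n),
})
# Simulate a modest association between infection and reported cognitive difficulty
logit = -1.5 + 0.6 * df["covid_ever"] + 0.01 * (df["age"] - 18)
df["cog_difficulty"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("cog_difficulty ~ covid_ever + age + female", data=df).fit()
print(model.summary())
print("Odds ratio for COVID-19:", np.exp(model.params["covid_ever"]))
```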

Keywords: cognition, COVID-19, language, limitations, memory, NHIS

Procedia PDF Downloads 53
624 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles

Authors: Antonina A. Shumakova, Sergey A. Khotimchenko

Abstract:

The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs) and, on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this fact is the difference in the adsorption capacity of ENMs, which was further examined in vitro. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the data obtained are in good agreement with the results of in vivo experiments: with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver, while in other organs (kidneys, spleen, testes, and brain) the lead content was lower than in animals of the control group; when studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with an increase in the dose of Al2O3 NPs, whereas for other organs the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead; and with the combined administration of lead and SiO2 NPs in different doses, there was no increase in lead accumulation in any of the studied organs. Based on the data obtained, at least three scenarios of the combined effects of ENMs and chemical contaminants on the body can be assumed: ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes unavailable (or poorly available) for absorption, in which case the toxicity of both the ENMs and the contaminants can be expected to decrease; the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or physiological barriers of the body, in which case ENMs can act as a kind of conductor for contaminants, increasing their penetration into the internal environment of the body and thereby increasing their toxicity; or ENMs and contaminants do not interact with each other at all, so the toxicity of each is determined only by its quantity and does not depend on the quantity of the other component. The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.

Keywords: absorption, cadmium, engineered nanomaterials, lead

Procedia PDF Downloads 87
623 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes the deficiencies of the previous LBM-CA models and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
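A highly simplified one-dimensional sketch of the cellular-automata idea described above is given below: particle counts stored at lattice nodes are redistributed probabilistically to a neighboring node, with the local fluid velocity setting both the move direction and the move probability. This is not the D3Q19/D3Q27 MRT-LES GPU implementation; the lattice, velocity field, and parameters are illustrative assumptions only.

```python
# Highly simplified 1-D sketch of the probabilistic (cellular-automata) particle
# transport step described above. The real model uses a D3Q27 lattice, external
# forces, and CUDA; here we only illustrate velocity-biased probabilistic
# redistribution of node-based particle counts. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def ca_transport_step(counts, velocity, dt=1.0, dx=1.0):
    """Move integer particle counts between neighboring nodes.

    Each particle at node i moves one node in the direction of the local
    velocity with probability |u|*dt/dx (clipped to [0, 1]); otherwise it stays.
    """
    new_counts = np.zeros_like(counts)
    p_move = np.clip(np.abs(velocity) * dt / dx, 0.0, 1.0)
    for i, n in enumerate(counts):
        movers = rng.binomial(n, p_move[i])           # how many particles leave node i
        target = i + (1 if velocity[i] > 0 else -1)
        new_counts[i] += n - movers
        if 0 <= target < len(counts):
            new_counts[target] += movers              # particles leaving the domain are lost (deposited)
    return new_counts

counts = np.zeros(50, dtype=int)
counts[25] = 10_000                                   # inject particles at one node
velocity = np.full(50, 0.3)                           # toy uniform velocity field
for _ in range(20):
    counts = ca_transport_step(counts, velocity)
print(counts)
```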

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 207