Search results for: analytical validation

412 The Effects of English Contractions on the Application of Syntactic Theories

Authors: Wakkai Hosanna Hussaini

Abstract:

The formal structure of the English clause comprises at least two elements – subject and verb – in structural grammar, and at least one element – the predicate – in systemic (functional) and generative grammars. Each element can be represented by a word or a group of words. In modern English, speakers very often merge two words into one with the use of an apostrophe. The two words may represent different clause elements or belong to the same element; in either case, the result of the merger is called a contraction. Although contractions form part of modern English structure, they are considered informal in nature (used more frequently in spoken than in written English), which is why they were initially viewed as evidence of language deterioration. To our knowledge, no formal syntactic theory has yet dealt specifically with contractions, because they deviate from the formal rules of syntax that seek to identify the elements forming an English clause. The inconsistency between the formal rules and a contraction arises when two words representing two elements in a non-contracted form are merged into a single element. The paper therefore presents the various syntactic issues that arise as effects of converting non-contracted to contracted forms. It categorizes English contractions and describes each category according to its syntactic relations (position and relationship) and morphological formation (form and content) as an integral part of modern English structure. This is a position paper, so the methodology is observational, descriptive and explanatory/analytical, based on the existing related literature; the inventory of English contractions contained in books on syntax forms the data from which specific examples are drawn. The conclusion notes that the existing syntactic theories were not originally established to account for English contractions. The paper further exposes the inadequacies of the existing syntactic theories and gives additional reasons for establishing a more comprehensive syntactic theory for analyzing English clause/sentence structure involving contractions. The method used reveals the extent of the inadequacies of applying the three major syntactic theories – structural, systemic (functional) and generative – to English contractions. Although no theory is without limits of scope, the reluctance of the three major theories to recognize English contractions needs to be overcome, given the increasing popularity of their use in modern English. The paper therefore recommends that, as contractions become more common even in formal speech today, a syntactic theory be established to handle their patterns of syntactic relations and morphological formation.

Keywords: application, effects, English contractions, syntactic theories

Procedia PDF Downloads 247
411 Analysis of Advancements in Process Modeling and Reengineering at Fars Regional Electric Company, Iran

Authors: Mohammad Arabi

Abstract:

Business Process Reengineering (BPR) is a systematic approach to fundamentally redesign organizational processes to achieve significant improvements in organizational performance. At Fars Regional Electric Company, implementing BPR is deemed essential to increase productivity, reduce costs, and improve service quality. This article examines how BPR can help enhance the performance of Fars Regional Electric Company. The objective of this research is to evaluate and analyze the advancements in process modeling and reengineering at Fars Regional Electric Company and to provide solutions for improving the productivity and efficiency of organizational processes. This study aims to demonstrate how BPR can be used to improve organizational processes and enhance the overall performance of the company. This research employs both qualitative and quantitative research methods and includes interviews with senior managers and experts at Fars Regional Electric Company. The analytical tools include process modeling software such as Bizagi and ARIS, and statistical analysis software such as SPSS and Minitab. Data analysis was conducted using advanced statistical methods. The results indicate that the use of BPR techniques can lead to a significant reduction in process execution time and overall improvement in quality. Implementing BPR at Fars Regional Electric Company has led to increased productivity, reduced costs, and improved overall performance of the company. This study shows that with proper implementation of BPR and the use of modeling tools, the company can achieve significant improvements in its processes. Recommendations: (1) Continuous Training for Staff: Invest in continuous training of staff to enhance their skills and knowledge in BPR. (2) Use of Advanced Technologies: Utilize modeling and analysis software to improve processes. (3) Implementation of Effective Management Systems: Employ knowledge and information management systems to enhance organizational performance. (4) Continuous Monitoring and Review of Processes: Regularly review and revise processes to ensure ongoing improvements. This article highlights the importance of improving organizational processes at Fars Regional Electric Company and recommends that managers and decision-makers at the company seriously consider reengineering processes and utilizing modeling technologies to achieve developmental goals and continuous improvement.

Keywords: business process reengineering, electric company, Fars province, process modeling advancements

Procedia PDF Downloads 13
410 Characterizing Solid Glass in Bending, Torsion and Tension: High-Temperature Dynamic Mechanical Analysis up to 950 °C

Authors: Matthias Walluch, José Alberto Rodríguez, Christopher Giehl, Gunther Arnold, Daniela Ehgartner

Abstract:

Dynamic mechanical analysis (DMA) is a powerful method to characterize viscoelastic properties and phase transitions for a wide range of materials. It is often used to characterize polymers and their temperature-dependent behavior, including thermal transitions such as the glass transition temperature Tg, via determination of storage and loss moduli in tension (Young’s modulus, E) and shear or torsion (shear modulus, G) or other testing modes. While production and application temperatures for polymers are often limited to several hundred degrees, material properties of glasses usually require characterization at temperatures exceeding 600 °C. This contribution highlights a high-temperature setup for rotational and oscillatory rheometry as well as for DMA in different modes. The implemented standard convection oven enables the characterization of glass in different loading modes at temperatures up to 950 °C. Three-point bending, tension and torsional measurements on different glasses, with E and G moduli as a function of frequency and temperature, are presented. Additional tests include superimposing several frequencies in a single temperature sweep (“multiwave”). This type of test considerably reduces the experiment time and allows evaluation of structural changes of the material and their frequency dependence. Furthermore, DMA in torsion and tension was performed to determine the complex Poisson’s ratio as a function of frequency and temperature within a single test definition. Tests were performed in a frequency range from 0.1 to 10 Hz and at temperatures up to the glass transition. While variations in frequency did not reveal significant changes in the complex Poisson’s ratio of the glass, a monotonic increase of this parameter was observed with increasing temperature. This contribution outlines the possibilities of DMA in bending, tension and torsion over an extended temperature range. It allows the precise mechanical characterization of material behavior from room temperature up to the glass transition and the softening temperature interval. Compared to other thermo-analytical methods, such as Differential Scanning Calorimetry (DSC), where mechanical stress is neglected, the frequency dependence links measurement results (e.g. relaxation times) to real applications.
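
The abstract describes determining the complex Poisson's ratio from DMA in tension (E) and torsion (G). For an isotropic, linear viscoelastic solid the standard relation is ν* = E*/(2G*) − 1, which is presumably the kind of calculation behind such results. A minimal sketch follows; the moduli values are placeholders, not the authors' measurements.

```python
import numpy as np

# Hypothetical storage/loss moduli at one frequency/temperature point (placeholders, not measured data)
E_storage, E_loss = 72.0e9, 0.9e9   # Pa, from DMA in tension
G_storage, G_loss = 29.5e9, 0.4e9   # Pa, from DMA in torsion

E_complex = E_storage + 1j * E_loss
G_complex = G_storage + 1j * G_loss

# Isotropic, linear viscoelastic relation: nu* = E*/(2 G*) - 1
nu_complex = E_complex / (2.0 * G_complex) - 1.0

print(f"complex Poisson's ratio: {nu_complex.real:.3f} {nu_complex.imag:+.4f}j")
```

Evaluating this point by point across the frequency/temperature sweep yields the frequency- and temperature-dependent Poisson's ratio discussed in the abstract.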

Keywords: dynamic mechanical analysis, oscillatory rheometry, Poisson's ratio, solid glass, viscoelasticity

Procedia PDF Downloads 64
409 A Small-Scale Survey on Risk Factors of Musculoskeletal Disorders in Workers of Logistics Companies in Cyprus and on the Early Adoption of Industrial Exoskeletons as Mitigation Measure

Authors: Kyriacos Clerides, Panagiotis Herodotou, Constantina Polycarpou, Evagoras Xydas

Abstract:

Background: Musculoskeletal disorders (MSDs) in the workplace are a very common problem in Europe and are caused by multiple risk factors. In recent years, wearable devices and exoskeletons for the workplace have been trying to address the various risk factors associated with strenuous tasks. The logistics sector is a huge sector that includes warehousing, storage, and transportation; however, the tasks associated with logistics are not well studied in terms of MSD risk. This study looked into the MSDs affecting workers of logistics companies. It compares the prevalence of MSDs among workers and evaluates multiple risk factors that contribute to the development of MSDs. Moreover, it seeks to obtain user feedback on the adoption of exoskeletons in such a work environment. Materials and Methods: The study was conducted among workers in logistics companies in Nicosia, Cyprus, from July to September 2022. A set of standardized questionnaires was used for collecting different types of data. Results: A high proportion of logistics professionals reported MSDs in one or more body regions, the lower back being the most commonly affected area. Working in the same position for long periods, working in awkward postures, and handling excessive loads were found to be the most commonly reported job risk factors contributing to the development of MSDs in this study. A significant number of participants consider the back region to be the one that would benefit most from a wearable exoskeleton device. Half of the participants would like at least a 50% reduction in their daily effort. The most important characteristics for the adoption of exoskeleton devices were found to be how comfortable the device is and its weight. Conclusion: Lower back complaints and awkward postures were the most prominent risks among the logistics professionals assessed in this study. A larger-scale study using quantitative analytical tools may give a more accurate estimate of MSDs, which would pave the way for making more precise recommendations to eliminate the risk factors and thereby prevent MSDs. A follow-up study using exoskeletons in the workplace should be done to assess whether they assist in MSD prevention.

Keywords: musculoskeletal disorders, occupational health, safety, occupational risk, logistic companies, workers, Cyprus, industrial exoskeletons, wearable devices

Procedia PDF Downloads 87
408 Recycling Waste Product for Metal Removal from Water

Authors: Saidur R. Chowdhury, Mamme K. Addai, Ernest K. Yanful

Abstract:

The research was performed to assess the potential of nickel smelter slag, an industrial waste, as an adsorbent for the removal of metals from aqueous solution. An investigation was carried out into arsenic (As), copper (Cu), lead (Pb) and cadmium (Cd) adsorption from aqueous solution. The smelter slag was obtained from Ni ore at the Vale Inco Ni smelter in Sudbury, Ontario, Canada. Batch experimental studies were conducted to evaluate the removal efficiencies of the smelter slag. The slag was characterized by surface analytical techniques and contained different iron oxide and iron silicate bearing compounds. In this study, the effects of pH, contact time, particle size, competition by other ions, slag dose and distribution coefficient were evaluated to determine the optimum adsorption conditions of the slag as an adsorbent for As, Cu, Pb and Cd. The results showed 95-99% removal of As, Cu and Pb, and almost 50-60% removal of Cd, when the batch experiments were conducted at 5-10 mg/L initial metal concentration, 10 g/L slag dose, 10 hours of contact time, 170 rpm shaking speed and 25 °C. The maximum removal of As, Cu and Pb was achieved at pH 5, while the maximum removal of Cd was found at pH values above 7. A column experiment was also conducted to evaluate adsorption depth and service time for metal removal. This study also determined the adsorption capacity, adsorption rate and mass transfer rate. The maximum adsorption capacity was found to be 3.84 mg/g for As, 4 mg/g for Pb, and 3.86 mg/g for Cu. The adsorption capacities of the nickel slag for the four test metals were in the decreasing order Pb > Cu > As > Cd. Modelling of the experimental data with Visual MINTEQ revealed that saturation indices of < 0 were recorded in all cases, suggesting that the metals at these pH values were under-saturated and thus in their aqueous forms. This confirms the absence of precipitation in the removal of these metals at these pH values. The experimental results also showed that Fe and Ni leaching from the slag during the adsorption process was very minimal, ranging from 0.01 to 0.022 mg/L, indicating the slag's potential as an adsorbent in the treatment industry. The study also revealed that the waste product (Ni smelter slag) can be reused about five times before disposal in a landfill or use as a stabilization material. It highlights recycled slags as a potential reactive adsorbent in the field of remediation engineering and explores the benefits of using renewable waste products for the water treatment industry.
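
The removal percentages and adsorption capacities (mg/g) quoted above are conventionally derived from the initial and equilibrium solution concentrations, the solution volume and the adsorbent mass: removal % = (C0 − Ce)/C0 × 100 and qe = (C0 − Ce)·V/m. A minimal sketch of that bookkeeping, with hypothetical numbers rather than the study's raw data:

```python
def batch_adsorption_metrics(c0_mg_L, ce_mg_L, volume_L, mass_g):
    """Standard batch-adsorption metrics (illustrative; not the study's own calculation)."""
    removal_pct = (c0_mg_L - ce_mg_L) / c0_mg_L * 100.0
    qe_mg_g = (c0_mg_L - ce_mg_L) * volume_L / mass_g   # equilibrium uptake, mg adsorbate per g slag
    return removal_pct, qe_mg_g

# Hypothetical run: 10 mg/L initial As, 0.3 mg/L residual, 0.1 L of solution, 1 g slag (10 g/L dose)
removal, qe = batch_adsorption_metrics(10.0, 0.3, 0.1, 1.0)
print(f"removal = {removal:.1f} %, qe = {qe:.2f} mg/g")
```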

Keywords: adsorption, industrial waste, recycling, slag, treatment

Procedia PDF Downloads 130
407 Ethical Issues in AI: Analyzing the Gap Between Theory and Practice - A Case Study of AI and Robotics Researchers

Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet

Abstract:

New major ethical dilemmas are posed by artificial intelligence. This article identifies an existing gap between the ethical questions that AI/robotics researchers grapple with in their research practice and those identified by a literature review. The objective is to understand which ethical dilemmas are identified by, or of concern to, AI researchers in order to compare them with the existing literature. This will make it possible to conduct training and awareness initiatives for AI researchers, encouraging them to consider these questions during the development of AI. Qualitative analyses were conducted based on direct observation of an AI/robotics research team focused on collaborative robotics over several months. Subsequently, semi-structured interviews were conducted with 16 members of the team. The entire process took place during the first semester of 2023. The observations were analyzed using an analytical framework, and the interviews were thematically analyzed using Nvivo software. While the literature identifies three primary ethical concerns regarding AI (transparency, bias, and responsibility), the results firstly demonstrate that AI researchers are primarily concerned with the publication and valorization of their work, with their initial ethical concerns revolving around this matter. Questions arise regarding the extent to which publications should be "marketed" and the usefulness of some publications. Research ethics are a central consideration for these teams. Secondly, another result shows that the researchers studied adopt a consequentialist ethics (though not explicitly formulated as such). They ponder the consequences of their developments in terms of safety (for humans in relation to robots/AI), worker autonomy in relation to the robot, and the role of work in society (can robots take over jobs?). Lastly, the results indicate that the ethical dilemmas highlighted in the literature (responsibility, transparency, bias) do not explicitly appear in AI/robotics research. AI/robotics researchers raise specific and pragmatic ethical questions, concerning publications first and consequentialist considerations afterward. The results demonstrate that these concerns are distant from the existing literature. However, the dilemmas highlighted in the literature also deserve to be explicitly contemplated by researchers. This article proposes that the journals these researchers target should mandate ethical reflection for all presented works. Furthermore, the results suggest offering awareness programs in the form of short educational sessions for researchers.

Keywords: ethics, artificial intelligence, research, robotics

Procedia PDF Downloads 62
406 Constructing Evaluation Indicators for the Supply of Urban-Friendly Shelters from the Perspective of the Needs of the Elderly People in Taiwan

Authors: Chuan-Ming Tung, Tzu-Chiao Yuan

Abstract:

This research aims to construct supply indicators and weights for shelter space from the perspective of the needs of the elderly, by means of a literature review, a systematic compilation of related regulations, and the Analytic Hierarchy Process (AHP) method, with questionnaires on the indicators completed by 16 experts and scholars. The researchers then used 3 schools and 2 activity centers in Banqiao District, New Taipei City, as study cases to evaluate how 'friendly' the supply of shelters is to the needs of elderly people. The supply evaluation indicators of friendly shelters meeting the needs of the elderly comprise "Administrative Operations and Service Needs" and "Residence-related and Living Needs". Under "Administrative Operations and Service Needs" are "Management Operations and Information Provision", "Shelter Space Preparedness and Logistics Support", "Medical Care and Social Support", and "Shelters and Medical Environment", a total of 17 assessment items under four indicators, while under "Residence-related and Living Needs" are "Dietary Needs", "Sleep Needs", "Hygiene and Sanitation Needs", "Accessibility and Convenience Needs", etc., a total of 18 assessment items under four indicators. The results show that "Residence-related and Living Needs" is the most important item at the main level of the supply indicators for shelters friendly to the needs of elderly people (weight 0.5504), followed by "Administrative Operations and Service Needs" (0.4496). The order of importance of the supply indicators of friendly shelters for the needs of elderly people is as follows: "Hygiene and Sanitation Needs" (0.1721), "Dietary Needs" (0.1340), "Medical Care and Social Support" (0.1300), "Sleep Needs" (0.1277), "Accessibility and Convenience Needs" (0.1166), "Basic Environment of Shelters" (0.1145), "Shelter Space Preparedness and Logistics Support" (0.1115) and "Management Operations and Information Provision" (0.0936). In addition, the case evaluation shows that the provision of refuges and shelters, mainly in schools and activity centers, is extremely inadequate for the needs of the elderly. In a comprehensive comparison, the evaluation indicators of refuges and shelters that most need to be improved are "Medical Care and Social Support", "Hygiene and Sanitation Needs", "Sleep Needs", "Dietary Needs", and "Shelter Space Preparedness and Logistics Support".
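
The indicator weights above come from AHP questionnaires. One common way such weights are computed is the principal-eigenvector method on a pairwise comparison matrix, with Saaty's consistency check; the sketch below illustrates that generic calculation on a hypothetical 3x3 matrix, not the study's expert data (the authors' exact aggregation across 16 respondents is not stated).

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix (principal eigenvector)
    plus Saaty's consistency ratio. Generic AHP bookkeeping, for illustration only."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                # normalized priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}.get(n, 1.0)
    return w, ci / ri                              # weights and consistency ratio (CR)

# Hypothetical pairwise comparisons among three sub-indicators (Saaty 1-9 scale)
A = [[1,   3,   2],
     [1/3, 1,   1/2],
     [1/2, 2,   1]]
weights, cr = ahp_weights(A)
print(weights.round(4), round(cr, 3))   # weights sum to 1; CR < 0.1 is conventionally acceptable
```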

Keywords: needs of the elderly people, urban shelters, evaluation indicators/indices, Taiwan

Procedia PDF Downloads 64
405 On the Right to Effective Administrative Justice in the Republic of Macedonia: Challenges and Problems

Authors: Arlinda Memetaj

Abstract:

A sound system of administrative justice represents a vital element of democratic governance. The proper control of public administration consists not only of a sound civil service framework and legislative oversight, but also of the empowerment of the public and the courts to hold public officials accountable for their decision-making through the application of fair administrative procedural rules and the use of appropriate administrative appeals processes and judicial review. The establishment of an effective public administration has been, since the 1990s, among the most 'important and urgent' strategic objectives of the Republic of Macedonia. To this end, the country has so far adopted a large series of legislative and strategic documents related to all aspects of the administrative justice system. The latter is designed to strengthen the legal position of citizens, businesses, civic organizations, and other societal subjects. 'Changes' and 'reforms' in this field have thus been the most frequently used terms in the country for more than 20 years. Several years ago the country established Administrative Courts, while repeatedly amending the Law on the General Administrative Procedure (LGAP). The new LGAP was adopted in 2015 and introduced considerable innovations in this regard. The most recent inputs include the National Public Administration Reform Strategy 2017-2022, one of the key expected results of which is the effective protection of citizens' rights. Nevertheless, a series of interrelated shortcomings remain, such as (to mention only a few) the complex appeal procedure and delays in enforcing court rulings. Against the above background, the paper first describes the Macedonian institutional and legislative framework in this field and then illustrates its shortcomings. It finally claims that the current status quo may be overcome only if there is proper implementation of the administrative courts' decisions and a far stricter international monitoring process. A new approach and strong political commitment from the highest political leadership are thus absolutely needed to ensure the principles of transparency, accountability and merit in public administration. The main method used in this paper is descriptive, analytical and comparative, owing to the very character of the paper itself.

Keywords: administrative justice, administrative procedure, administrative courts/disputes, European Human Rights Court, human rights, monitoring, reform, benefit

Procedia PDF Downloads 137
404 Obtainment of Systems with Efavirenz and Lamellar Double Hydroxide as an Alternative for Solubility Improvement of the Drug

Authors: Danilo A. F. Fontes, Magaly A. M. Lyra, Maria L. C. Moura, Leslie R. M. Ferraz, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim, Giovanna C. R. M. Schver, Ping I. Lee, Severino Alves-Júnior, José L. Soares-Sobrinho, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a first-choice drug in antiretroviral therapy with high efficacy in the treatment of infection by the Human Immunodeficiency Virus, which causes Acquired Immune Deficiency Syndrome (AIDS). EFV has low solubility in water, resulting in a decrease in the dissolution rate and, consequently, in its bioavailability. Among the technological alternatives to increase solubility, Lamellar Double Hydroxides (LDH) have been applied in the development of systems with poorly water-soluble drugs. The use of analytical techniques such as X-Ray Diffraction (XRD), Infrared Spectroscopy (IR) and Differential Scanning Calorimetry (DSC) allowed the elucidation of the drug's interaction with the lamellar compounds. The objective of this work was to develop and characterize binary systems of EFV and LDH in order to increase the solubility of the drug. The LDH-CaAl was synthesized by co-precipitation from salt solutions of calcium nitrate and aluminum nitrate in basic medium. The EFV-LDH systems and their physical mixtures (PM) were obtained at different concentrations (5-60% of EFV) using the solvent technique described by Takahashi & Yamaguchi (1991). The characterization of the systems and the PMs was performed by XRD, IR, DSC and dissolution testing under non-sink conditions. The results showed improvements in the solubility of EFV when associated with LDH, due to a possible change in its crystal structure and the formation of an amorphous material. In the DSC results, the endothermic peak at 173 °C, the temperature corresponding to the melting of crystalline EFV, was present in the PMs; for the EFV-LDH systems (with 5, 10 and 30% drug loading), this peak was not observed. The XRD profiles of the PMs showed well-defined peaks for EFV, whereas the XRD profiles of all the systems showed complete attenuation of the characteristic peaks of the crystalline form of EFV. The IR results of the PMs showed the appearance of one band and the overlap of other bands, while the IR results of the systems with 5, 10 and 30% drug loading showed the disappearance of some bands and reduced intensity of a few others. The dissolution test under non-sink conditions showed that the systems with 5, 10 and 30% drug loading promoted a great increase in the solubility of EFV, but the system with 10% drug loading was the only one that could keep a substantial amount of drug in solution at different pHs.

Keywords: Efavirenz, Lamellar Double Hydroxides, Pharmaceutical Technology, Solubility

Procedia PDF Downloads 562
403 From the Classroom to Digital Learning Environments: An Action Research on Pedagogical Practices in Higher Education

Authors: Marie Alexandre, Jean Bernatchez

Abstract:

This paper focuses on the complexity of the transition from face-to-face to distance learning. Our action research aims to support teachers in higher education through the transition from classroom to distance learning, with regard to pedagogical practices that can meet the various needs of students using digital learning environments. In Quebec and elsewhere in the world, the advent of digital education is helping to transform teaching, which is significantly changing the role of teachers. While distance education implies a dissociation of teaching and learning, to a variable degree, in space and time, distance education (DE) is increasingly becoming a preferred option for maintaining the delivery of certain programs and for providing access to quality activities throughout Quebec. Given the impact of teaching practices on educational success, this paper reports on the results of three research objectives: 1) to document teachers' knowledge of teaching in distance education through the design, experimentation and production of a repertoire of the determinants of pedagogical practices in response to students' needs; 2) to explain, according to a gendered logic, the fit between the pedagogical practices implemented in distance learning and the response to the profiles and needs expressed by students using digital learning environments; and 3) to produce a model of a support approach for the process of transition from classroom to distance learning at the college level. A mixed methodology, i.e., a quantitative component (questionnaire survey) and a qualitative component (explanatory interviews and a living lab), was used in cycles that were part of an ongoing validation process. The intervention includes the establishment of a professional collaboration group, training webinars for the participating teachers on the didactic issue of knowledge-teaching in distance education, the didactic use of technologies, and differentiated socialization models of educational success in college education. All of the tools developed will be used by partners in the target environment as well as by teacher educators, students in initial teacher training, practicing teachers, and the general public. The results show that access to training leading to qualifications and commitment to educational success reflect the existing links between the people in the educational community. The relational stakes of presence in distance education take on multiple configurations, and the different dimensions of learning testify to needs and realities that are sometimes distinct depending on the life cycle. This project will be of interest to partners in the targeted field as well as to teacher trainers, students in initial teacher training, practicing college teachers, and university professors. The entire educational community will benefit from digital resources in education. The scientific knowledge resulting from this action research will benefit researchers in the fields of pedagogy, didactics, teacher training and pedagogy in higher education in a digital context.

Keywords: action research, didactics, digital learning environment, distance learning, higher education, pedagogy, technological pedagogical content knowledge

Procedia PDF Downloads 68
402 Stakeholder-Driven Development of a One Health Platform to Prevent Non-Alimentary Zoonoses

Authors: A. F. G. Van Woezik, L. M. A. Braakman-Jansen, O. A. Kulyk, J. E. W. C. Van Gemert-Pijnen

Abstract:

Background: Zoonoses pose a serious threat to public health and economies worldwide, especially as antimicrobial resistance grows and newly emerging zoonoses can cause unpredictable outbreaks. In order to prevent and control emerging and re-emerging zoonoses, collaboration between the veterinary, human health and public health domains is essential. In reality, however, there is a lack of cooperation between these three disciplines, and uncertainties exist about their tasks and responsibilities. The objective of this ongoing research project (ZonMw funded, 2014-2018) is to develop an online education and communication One Health platform, “eZoon”, for the general public and professionals working in the veterinary, human health and public health domains to support the risk communication of non-alimentary zoonoses in the Netherlands. The main focus is on education and communication in times of outbreak as well as in daily non-outbreak situations. Methods: A participatory development approach was used in which stakeholders from the veterinary, human health and public health domains participated. Key stakeholders were identified using business modeling techniques previously used for the design and implementation of antibiotic stewardship interventions, consisting of a literature scan, expert recommendations, and snowball sampling. We used a stakeholder salience approach to rank stakeholders according to their power, legitimacy, and urgency. Semi-structured interviews were conducted with stakeholders (N=20) from all three disciplines to identify current problems in risk communication and stakeholder values for the One Health platform. Interviews were transcribed verbatim and coded inductively by two researchers. Results: The following key values (among others) were identified: (a) the need for the veterinary and human health fields to be more aware of each other; (b) information exchange between veterinary and human health, in particular at a regional level; (c) legal regulations need to match daily practice; (d) professionals and the general public need to be addressed separately using tailored language and information; (e) information needs to be of value to professionals (relevant, important, accurate, and with financial or other important consequences if ignored) in order to be picked up; and (f) the need for accurate information from trustworthy, centrally organised sources to inform the general public. Conclusion: By applying a participatory development approach, we gained insights from multiple perspectives into the main problems of current risk communication strategies in the Netherlands and into stakeholder values. Next, we will continue the iterative development of the One Health platform by presenting key values to stakeholders for validation and ranking, which will guide further development. We will develop a communication platform with a serious game in which professionals at the regional level will be trained in shared decision making in time-critical outbreak situations, a smart Question & Answer (Q&A) system for the general public tailored towards different user profiles, and social media to inform the general public adequately during outbreaks.

Keywords: ehealth, one health, risk communication, stakeholder, zoonosis

Procedia PDF Downloads 266
401 Configuring Resilience and Environmental Sustainability to Achieve Superior Performance under Differing Conditions of Transportation Disruptions

Authors: Henry Ataburo, Dominic Essuman, Emmanuel Kwabena Anin

Abstract:

Recent trends of catastrophic events, such as the Covid-19 pandemic, the Suez Canal blockage, the Russia-Ukraine conflict, the Israel-Hamas conflict, and the climate change crisis, continue to devastate supply chains and the broader society. Prior authors have advocated for a simultaneous pursuit of resilience and sustainability as crucial for navigating these challenges. Nevertheless, the relationship between resilience and sustainability is a rather complex one: resilience and sustainability are considered unrelated, substitutes, or complements. Scholars also suggest that different firms prioritize resilience and sustainability differently for varied strategic reasons. However, we know little about whether, how, and when these choices produce different typologies of firms that explain differences in financial and market performance outcomes. This research draws inferences from the systems configuration approach to organizational fit to contend that a taxonomy of firms may emerge based on how firms configure resilience and environmental sustainability. The study further examines the effects of these taxonomies on financial and market performance under differing transportation disruption conditions. Resilience is operationalized as a firm’s ability to adjust current operations, structure, knowledge, and resources in response to disruptions, whereas environmental sustainability is operationalized as the extent to which a firm deploys resources judiciously and keeps the ecological impact of its operations to the barest minimum. Using primary data from 199 firms in Ghana and cluster analysis as an analytical tool, the study identifies four clusters of firms based on how they prioritize resilience and sustainability: Cluster 1 - "strong, moderate resilience, high sustainability firms," Cluster 2 - "high resilience, high sustainability firms," Cluster 3 - "high resilience, strong, moderate sustainability firms," and Cluster 4 - "weak, moderate resilience, strong, moderate sustainability firms". In addition, ANOVA and regression analysis revealed the following findings: only clusters 1 and 2 were significantly associated with both market and financial performance. Under high transportation disruption conditions, cluster 1 firms excel better in market performance, whereas cluster 2 firms excel better in financial performance. Conversely, under low transportation disruption conditions, cluster 1 firms excel better in financial performance, whereas cluster 2 firms excel better in market performance. The study provides theoretical and empirical evidence of how resilience and environmental sustainability can be configured to achieve specific performance objectives under different disruption conditions.
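
The four-cluster taxonomy above comes from cluster analysis of firm-level resilience and environmental-sustainability scores; the abstract does not state which algorithm was used, so the sketch below uses k-means on standardized scores purely as one plausible way such solutions are produced, on synthetic data rather than the 199-firm sample.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-ins for firm-level resilience and environmental sustainability scores (1-7 scales)
X = np.column_stack([rng.uniform(1, 7, 199), rng.uniform(1, 7, 199)])

Xz = StandardScaler().fit_transform(X)                 # standardize before distance-based clustering
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xz)

# Profile each cluster by its mean resilience / sustainability scores to label the taxonomy
for c in range(4):
    members = labels == c
    print(c, X[members].mean(axis=0).round(2), int(members.sum()))
```

Cluster-wise performance differences would then be tested with ANOVA/regression on the cluster labels, as the abstract describes.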

Keywords: resilience, environmental sustainability, developing economy, transportation disruption

Procedia PDF Downloads 49
400 Performance Evaluation of the CSAN Pronto Point-of-Care Whole Blood Analyzer for Regular Hematological Monitoring During Clozapine Treatment

Authors: Farzana Esmailkassam, Usakorn Kunanuvat, Zahraa Mohammed Ali

Abstract:

Objective: A key barrier in clozapine treatment of treatment-resistant schizophrenia (TRS) is the frequent blood draws required to monitor neutropenia, the main drug side effect. WBC and ANC monitoring must occur throughout treatment, and accurate WBC and ANC counts are necessary for clinical decisions to halt, modify or continue clozapine treatment. The CSAN Pronto point-of-care (POC) analyzer generates white blood cell (WBC) and absolute neutrophil counts (ANC) through image analysis of capillary blood. POC monitoring offers significant advantages over central laboratory testing. This study evaluated the performance of the CSAN Pronto against the Beckman DxH900 hematology laboratory analyzer. Methods: Forty venous samples (EDTA whole blood) with varying concentrations of WBC and ANC, as established on the DxH900 analyzer, were tested in duplicate on three CSAN Pronto analyzers. Additionally, venous and capillary samples were concomitantly collected from 20 volunteers and assessed on the CSAN Pronto and the DxH900 analyzer. The analytical performance evaluation also included precision, using liquid quality controls (QCs) as well as patient samples near the medical decision points, and linearity, using mixes of high and low patient samples to create five concentrations. Results: In the precision study for QCs and whole blood, WBC and ANC showed CVs within the limits established according to manufacturer and laboratory acceptability standards. WBC and ANC were found to be linear across the measurement range with a correlation of 0.99. WBC and ANC from all analyzers correlated well with the DxH900 in venous samples across the tested sample ranges, with a correlation of > 0.95. The mean bias in ANC obtained on the CSAN Pronto versus the DxH900 was 0.07 × 10⁹ cells/L (95% LOA -0.25 to 0.49) for concentrations < 4.0 × 10⁹ cells/L, which include the decision-making cut-offs for continuing clozapine treatment. The mean bias in WBC obtained on the CSAN Pronto versus the DxH900 was 0.34 × 10⁹ cells/L (95% LOA -0.13 to 0.72) for concentrations < 5.0 × 10⁹ cells/L. The mean bias was higher (-11% for ANC, 5% for WBC) at higher concentrations. The comparison between capillary and venous samples showed more variability, with a mean bias of 0.20 × 10⁹ cells/L for the ANC. Conclusions: The CSAN Pronto showed acceptable performance in WBC and ANC measurements from venous and capillary samples and was approved for clinical use. This testing will facilitate treatment decisions and improve clozapine uptake and compliance.
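
The method-comparison statistics quoted (mean bias with 95% limits of agreement) follow the usual Bland-Altman form: bias is the mean of the paired differences and the limits are bias ± 1.96 times their standard deviation. A minimal sketch with made-up paired ANC values, not the study data:

```python
import numpy as np

def bland_altman(test, reference):
    """Mean bias and 95% limits of agreement between two methods (illustrative)."""
    d = np.asarray(test, float) - np.asarray(reference, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ANC results (x10^9 cells/L): CSAN Pronto vs. DxH900
pronto = [1.6, 2.1, 3.0, 1.2, 2.8, 3.9]
dxh900 = [1.5, 2.0, 3.1, 1.1, 2.6, 3.8]
bias, loa = bland_altman(pronto, dxh900)
print(f"bias = {bias:.2f}, 95% LOA = ({loa[0]:.2f}, {loa[1]:.2f})")
```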

Keywords: absolute neutrophil counts, clozapine, point of care, white blood cells

Procedia PDF Downloads 73
399 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, and 80% to 90% of cases are caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatment. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist's. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and the neurosurgeon's correspondence from perioperative clinic reviews. Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. Scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: Composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI, when compared with the radiologist's, had a greater correlation with pain-free outcomes 1 year post-MVD.
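
The odds ratios reported above compare the likelihood of a pain-free outcome at 1 year for patients above versus below the composite-score cut-off, estimated by logistic regression; the OR is the exponential of the fitted coefficient for the dichotomized score. A minimal sketch of that analysis on synthetic data (not the 95-patient cohort), using a hypothetical cut-off indicator:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Synthetic cohort: 1 if the composite score exceeds the cut-off (e.g. Panczykowski > 3), else 0
above_cutoff = rng.integers(0, 2, 95)
# Synthetic outcome: pain-free at 1 year, made more likely when the score is above the cut-off
pain_free = rng.binomial(1, np.where(above_cutoff == 1, 0.75, 0.55))

X = sm.add_constant(above_cutoff.astype(float))
fit = sm.Logit(pain_free, X).fit(disp=0)

or_est = np.exp(fit.params[1])               # odds ratio for "above cut-off"
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the OR scale
print(f"OR = {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {fit.pvalues[1]:.3f}")
```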

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 61
398 BSL-2/BSL-3 Laboratory for Diagnosis of Pathogens in the Colombia-Ecuador Border Region: A Post-COVID Commitment to Public Health

Authors: Anderson Rocha-Buelvas, Jaqueline Mena Huertas, Edith Burbano Rosero, Arsenio Hidalgo Troya, Mauricio Casas Cruz

Abstract:

COVID-19 has been a disruptive pandemic for the public health and economic systems of entire countries, including Colombia. Nariño Department lies in the southwest of the country and draws attention for being on the border with Ecuador, constantly facing a demographic transition that affects infections between the two countries. In Nariño, early routine diagnosis of SARS-CoV-2, which can be handled at BSL-2, has affected the transmission dynamics of COVID-19. However, new emerging and re-emerging viruses with biological flexibility, classified as Risk Group 3 agents, can take advantage of epidemiological opportunities, generating the need to increase clinical diagnostic capacity, mainly in border regions between countries. The overall objective of this project was to assure the quality of the analytical process in the diagnosis of high-biological-risk pathogens in Nariño by building a laboratory that includes biosafety level (BSL)-2 and BSL-3 containment zones. The delimitation of zones was carried out according to the Verification Tool of the National Health Institute of Colombia and following the standard requirements for the competence of testing and calibration laboratories of the International Organization for Standardization. This is achieved by harmonizing methods and equipment for effective and durable diagnosis of the large-scale spread of highly pathogenic microorganisms, employing negative-pressure containment and UV systems, together with a finely controlled electrical system and PCR systems as new diagnostic tools, which increases laboratory capacity. Protection in BSL-3 zones will separate the handling of potentially infectious aerosols within the laboratory from the community and the environment. It will also allow the handling and inactivation of samples with suspected pathogens and the extraction of molecular material from them, enabling research on high-risk pathogens such as SARS-CoV-2, influenza, syncytial virus, and malaria, among others. The diagnosis of these pathogens will be articulated across the spectrum of basic, applied, and translational research, and the laboratory could receive about 60 samples daily. It is expected that this project will be articulated with the health policies of neighboring countries to increase research capacity.

Keywords: medical laboratory science, SARS-CoV-2, public health surveillance, Colombia

Procedia PDF Downloads 69
397 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close to the Upper Limit of Its Growth

Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Woody vegetation at the upper limit of its habitat is a sensitive indicator of the biota's reaction to regional climate changes. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based upon selected data from ground-level measurements and ultra-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models allow probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40 and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed a distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was 88608, distributed across the abovementioned intervals as 16980, 51740, and 19889 trees. The results demonstrate that two processes can be observed over recent decades: first, the intensive forestation of previously barren or lightly wooded fragments of the study area located within the patches of wood, woodland, and sparse stands, and second, expansion into the mountain tundra. The current expansion of the Siberian larch in the region replaced the depopulation process that occurred in the course of the Little Ice Age, from the late 13th century to the end of the 20th century. Using data from field measurements of Siberian larch biometric parameters (including height, diameter at the root collar and at 1.3 meters, and maximum crown projection in two orthogonal directions) and data on tree ages obtained at nine circular test sites, the authors developed an artificial neural network model with two layers of three and two neurons, respectively. The model allows quantitative assessment of a specimen's age based on its height and maximum crown projection. Tree height and crown diameters can be quantitatively assessed using data from aerial photographs and lidar scans. The resulting model can be used to assess the age of all Siberian larch trees. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree boundaries in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.
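
The age model described is a small neural network with two layers of three and two neurons mapping tree height and maximum crown projection to age. A minimal sketch of such a network using scikit-learn follows; the data are synthetic stand-ins, since the actual training set from the nine circular test sites is not reproduced here, and the authors' exact architecture details (activation, scaling) are not stated.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic stand-ins: height (m) and maximum crown projection (m) -> age (years)
height = rng.uniform(0.2, 12.0, 300)
crown = np.clip(0.15 * height + rng.normal(0, 0.2, 300), 0.05, None)
age = np.clip(6.0 * height + 25.0 * crown + rng.normal(0, 5, 300), 1, None)

X = np.column_stack([height, crown])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3, 2), max_iter=5000, random_state=0),  # two layers: 3 and 2 neurons
)
model.fit(X, age)

print(model.predict([[3.5, 0.8]]).round(1))  # estimated age (years) for a hypothetical tree
```

The predicted age can then be binned into the 1-10, 11-40 and over-40-year intervals used in the study.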

Keywords: treeline, dynamic, climate, modeling

Procedia PDF Downloads 43
396 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for every person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, helping them determine the number of beds or doctors they require given the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks are used for predicting diseases and analyzing medical images. Algorithms such as k-means helped group patients, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more efficiently, and work more effectively; it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
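
As a toy illustration of the disease-prediction step mentioned above (a random forest on tabular patient features), here is a minimal sketch on synthetic data; the features, labels and values are invented for illustration only and do not reflect any real dataset used in the project.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
# Synthetic patient features: age, BMI, systolic BP, fasting glucose
X = np.column_stack([
    rng.integers(20, 90, n),
    rng.normal(27, 5, n),
    rng.normal(130, 15, n),
    rng.normal(100, 20, n),
])
# Synthetic disease label, made to depend on age and glucose (illustration only)
risk = 0.03 * X[:, 0] + 0.02 * X[:, 3] - 4.5
y = rng.binomial(1, 1 / (1 + np.exp(-risk)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```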

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 53
395 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.
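
One of the two embedding strategies mentioned, 1-hot encoding, represents a discrete variable that takes one of k values with k binary qubits and a quadratic penalty forcing exactly one of them to be 1. The sketch below builds that penalty as generic QUBO coefficients by expanding P(Σx_i − 1)²; it is a generic construction for illustration, not the QxLib or D-Wave API and not the authors' full formulation.

```python
from itertools import combinations

def one_hot_qubo(var_name, k, penalty=10.0):
    """QUBO terms enforcing sum_i x_i = 1 over the k binaries encoding one discrete variable.
    Expands penalty * (sum_i x_i - 1)^2 using x_i^2 = x_i and drops the constant term."""
    linear = {f"{var_name}_{i}": -penalty for i in range(k)}            # coefficient -P on each x_i
    quadratic = {
        (f"{var_name}_{i}", f"{var_name}_{j}"): 2.0 * penalty           # coefficient +2P on each pair
        for i, j in combinations(range(k), 2)
    }
    return linear, quadratic

# Discrete variable y taking one of 4 values, encoded as binaries y_0..y_3
lin, quad = one_hot_qubo("y", 4)
print(lin)
print(quad)
```

These penalty terms would be added to the QUBO built from the (suitably discretized) MINLP objective and constraints before embedding onto the hardware graph.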

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 506
394 Urban Compactness and Sustainability: Beijing Experience

Authors: Xilu Liu, Ameen Farooq

Abstract:

Beijing has several compact residential housing settings in many of its urban districts. The study in this paper reveals that urban compactness, as a predictor of density, may carry an altogether different meaning in the developing world than in the U.S. with respect to achieving the objectives of urban sustainability. Recent urban design studies in the U.S. argue for compact, mixed-use, higher-density housing to achieve sustainable and energy-efficient living environments. While the concept of urban compactness is widely accepted as an approach in modern architectural and urban design fields, this belief may not carry over directly into all areas within cities of developing countries. Beijing's technology-driven economy, with its rich historic and cultural heritage and a highly speculative real-estate market, extends its urban boundaries into multiple compact urban settings of varying scales and densities. The accelerated pace of migration from the countryside in search of better opportunities has led to unsustainable and uncontrolled build-ups to meet the growing population demand within and outside of the urban center. This unwarranted compactness in certain urban zones has produced an unhealthy physical density with serious environmental and ecological challenges to basic living conditions. In addition, the crowding, traffic congestion, pollution and limited housing surrounding this compactness are a threat to public health. Several residential blocks in close proximity to each other were found to be quite compacted, or ill-planned, due to a lack of proper planning in Beijing. Most of them at first sight appear to be compact and dense, but further analytical study revealed that what appears to be dense is actually not dense enough to make a good case as the cornerstone of sustainability and energy efficiency. This study considered several factors, including floor area ratio (FAR), ground coverage (GSI) and open space ratio (OSR), as indicators in analyzing urban compactness as a predictor of density. The findings suggest that these measures of the density of the residential sites under study were much smaller than expected given their compact adjacencies. Further analysis revealed that several residential housing estates appear to support the notion of density in their compact layout but are in fact compacted due to unregulated planning, marred by a lack of proper urban design standards, policies and guidelines specific to their urban context and condition.
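
The three indicators named above are usually computed from a site's areas: FAR (floor area ratio) = gross floor area / site area, GSI (ground coverage) = building footprint / site area, and OSR (open space ratio) = (1 − GSI) / FAR. A minimal sketch with hypothetical figures for one residential block, not the Beijing survey data:

```python
def density_indicators(gross_floor_area_m2, footprint_m2, site_area_m2):
    """Spacematrix-style density indicators (illustrative definitions)."""
    far = gross_floor_area_m2 / site_area_m2      # built intensity
    gsi = footprint_m2 / site_area_m2             # ground coverage
    osr = (1.0 - gsi) / far                       # open space per unit of floor area
    return far, gsi, osr

# Hypothetical compact block: 48,000 m2 of floor space and a 6,500 m2 footprint on a 20,000 m2 site
far, gsi, osr = density_indicators(48_000, 6_500, 20_000)
print(f"FAR = {far:.2f}, GSI = {gsi:.2f}, OSR = {osr:.2f}")
```

Comparing such figures across the studied blocks is what reveals sites that look compact on the ground but score low on actual density.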

Keywords: Beijing, density, sustainability, urban compactness

Procedia PDF Downloads 405
393 Relationship of Macro-Concepts in Educational Technologies

Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez

Abstract:

This research reflects on and identifies the explanatory variables involved with educational technology and the relationships between them, all encompassed in four macro-concepts: cognitive inequality, economy, food and language. These provide the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space and teachers; all of them, interacting with each other, give rise to what is called educational technology management. These elements contribute very specific knowledge of communication equipment, networks and computing equipment, systems and content repositories. The aim is to establish the importance of understanding the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to remain authentic and to preserve their cultures, languages or dialects, hierarchies and real needs; in short, to respect the customs of the different towns, villages or cities that are to be reached through internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which makes it possible to explain each of the variables that, in our opinion, must be taken into account in order to achieve an optimal incorporation of educational technology in a model that delivers results in the medium term. The idea is that concepts will be progressively integrated into others with greater coverage until reaching macro-concepts of national scope that serve as elements of conciliation in the different federal and international reforms. At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces and teachers, all globally immersed in the macro-concepts of cognitive inequality, economics, food and language. One of the major contributions of this article is to express this idea as an algorithm that allows the indicator to be evaluated as objectively as possible, drawing other indicators from international reference entities such as the OECD in the area of education systems, so that they are not influenced by particular political or interest-group pressures. This work opens the way for relating the entities involved, conceptual, procedural and human, in order to clearly identify the convergence of their impact on the problem of education and how that relationship can contribute to improvement; it also shows the possibility of reaching a comprehensive education reform for all.

Keywords: relationships macro-concepts, cognitive inequality, economics, alimentation and language

Procedia PDF Downloads 186
392 Investigation on Behaviour of Reinforced Concrete Beam-Column Joints Retrofitted with CFRP

Authors: Ehsan Mohseni

Abstract:

The aim of this thesis is to provide numerical analyses of reinforced concrete beam-column joints with/without CFRP (Carbon Fiber Reinforced Polymer) in order to achieve a better understanding of the behaviour of strengthened beam-column joints. A comprehensive literature survey prior to this study revealed that published studies are limited to a handful only; the results are inconclusive and some are even contradictory. Therefore, in order to improve on this situation, a numerical study was designed and performed as presented in this thesis. For the numerical study, the dimensions, end supports, and characteristics of the beam and column models were the same as those chosen in a previous experimental investigation in which ten beam-column joints were tested to failure. Finite element analysis is a useful tool in cases where analytical methods are not capable of solving the problem due to the complexities associated with it. The cyclic behaviour of FRP-strengthened reinforced concrete beam-column joints is such a case. The interaction of steel (longitudinal bars and stirrups), concrete and FRP, the yielding of steel bars and stirrups, the cracking of concrete, the redistribution of stresses as some elements unload due to crushing or yielding, and the confinement of concrete due to the presence of FRP are some of the issues that introduce complexity into the problem. Numerical solutions, however, can provide further information about the behaviour in lieu of costly experiments or complex closed-form solutions. This thesis presents the results of a numerical study on beam-column joints subjected to cyclic loads and strengthened with CFRP wraps or strips in a variety of configurations. The analyses are performed with the Abaqus finite element program and are calibrated against the experiments. A range of issues in beam-column joints, including the cracking load, the ultimate load, and the lateral load-displacement curves of the joints, is investigated. The numerical results for different strengthening configurations are compared. Finally, the computed numerical results are compared with those obtained from the experiments. The cracking load, the ultimate load, and the lateral load-displacement curves obtained from the numerical analysis were in very good agreement with the corresponding experimental ones for all joints. The results obtained from the numerical analysis imply that in most cases the method is conservative and can therefore be used in design applications with confidence.
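
The kind of numerical-versus-experimental comparison described above can be summarised in a few lines; the sketch below uses placeholder load-displacement values (not the Abaqus or test results) to show one way of reporting ultimate loads and a normalised curve difference.

```python
# Hedged sketch of a numerical-vs-experimental comparison; all values are
# placeholders, not the thesis data.
import numpy as np

def compare_curves(disp_exp, load_exp, disp_num, load_num):
    # Interpolate the numerical curve onto the experimental displacement points
    # and report both ultimate loads plus a normalised RMS difference.
    load_num_i = np.interp(disp_exp, disp_num, load_num)
    rms = np.sqrt(np.mean((load_num_i - load_exp) ** 2)) / np.max(load_exp)
    return max(load_exp), max(load_num), rms

disp_exp = np.array([0.0, 5.0, 10.0, 20.0, 30.0])   # mm (placeholder)
load_exp = np.array([0.0, 40.0, 65.0, 80.0, 78.0])  # kN (placeholder)
disp_num = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
load_num = np.array([0.0, 38.0, 62.0, 76.0, 75.0])

p_exp, p_num, rms = compare_curves(disp_exp, load_exp, disp_num, load_num)
print(f"ultimate load: exp {p_exp} kN, num {p_num} kN "
      f"({100 * (p_num - p_exp) / p_exp:+.1f}%), normalised RMS {rms:.3f}")
```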

Keywords: numerical analysis, strengthening, CFRP, reinforced concrete joints

Procedia PDF Downloads 329
391 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria

Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene

Abstract:

As the drive towards clean, renewable and sustainable energy generation is gradually reshaped by growing renewable penetration, energy storage has become an attractive solution for utilities looking to reduce transmission and capacity costs. Capacity resources therefore need to be adjusted so that renewable energy storage has the opportunity to substitute for retiring conventional energy systems with higher capacity factors. In the Nigerian scenario, where over 80% of current primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydro power, solar and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast future energy storage capacity ratings and evaluate the load and resource scenario in Nigeria. Using scenario-based International Energy Agency models, the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity. The general takeaway from the current study is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources before a substantial reduction in capacity ratings occurs. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through the expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analysis in the current survey will help to determine whether and when long-duration storage becomes an integral component of the capacity mix expected in Nigeria by 2030.
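
To make the notion of a storage capacity rating concrete, the simplified sketch below shaves a synthetic daily peak with a storage unit of given power and duration and reports the share of its nameplate power that counts as firm capacity. The load profile, unit size and greedy peak-shaving rule are illustrative assumptions, not the IEA-based models or Nigerian system data used in the study.

```python
# Simplified, assumption-laden sketch of a storage "capacity rating":
# how far can a unit of given power and duration lower the daily peak?
import numpy as np

def capacity_rating(load, power_mw, duration_h, step=1.0):
    # Lower the target peak until the energy above it exceeds what the unit can
    # discharge (power_mw * duration_h) or the shave depth exceeds power_mw.
    energy_mwh = power_mw * duration_h
    peak = load.max()
    target = peak
    while True:
        trial = target - step
        shave_energy = np.clip(load - trial, 0.0, None).sum()
        shave_depth = peak - trial
        if shave_energy > energy_mwh or shave_depth > power_mw:
            break
        target = trial
    return (peak - target) / power_mw  # fraction of nameplate that is "firm"

# Synthetic 24-hour load (MW) with an evening peak.
hours = np.arange(24)
load = 3000 + 900 * np.exp(-((hours - 19) ** 2) / 8.0)

for d in (2, 4, 6):
    print(f"{d}h storage, 300 MW: capacity rating ~ {capacity_rating(load, 300, d):.0%}")
```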

Keywords: capacity, energy, power system, storage

Procedia PDF Downloads 19
390 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions

Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri

Abstract:

Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus will be taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus will be illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse and will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention will be dedicated to the structural mark-up, the parts-of-speech annotation, and the tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the best solution to suit the analytical needs of the data. Several aspects will be addressed, with special attention to the tagging of the speakers’ identity, the communicative events, and anthropophagic. Prominence will be given to the annotation of question/answer exchanges to investigate the interlocutors’ choices and how such choices impact communication. Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements will be given using the InterDiplo-Covid19 pilot corpus. The preliminary analysis of the data will highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation to be included via Oxygen.
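
Purely as an illustration of the kind of structural and discursive-pragmatic mark-up described above (the element and attribute names are invented, not the InterDiplo tag set), a question/answer exchange could be encoded along these lines with Python's standard library:

```python
# Illustrative XML mark-up of a question/answer exchange with speaker metadata
# and a discursive tag; this is NOT the InterDiplo scheme, just a sketch.
import xml.etree.ElementTree as ET

exchange = ET.Element("exchange", id="ex01")

q = ET.SubElement(exchange, "turn", type="question", speaker="journalist_01")
ET.SubElement(q, "meta", role="journalist", l1="Italian")
ET.SubElement(q, "utterance", qtype="wh-question").text = (
    "What outcomes do you expect from the summit?")

a = ET.SubElement(exchange, "turn", type="answer", speaker="diplomat_01")
ET.SubElement(a, "meta", role="diplomat", l1="French")
ET.SubElement(a, "utterance", strategy="hedging").text = (
    "Well, it is perhaps too early to say.")

ET.indent(exchange)  # pretty-print (Python 3.9+)
print(ET.tostring(exchange, encoding="unicode"))
```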

Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics

Procedia PDF Downloads 167
389 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach

Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh

Abstract:

Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the context of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60 years. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with a semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods, ensuring that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
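
Calibration quality is judged with the Kling-Gupta Efficiency; the short sketch below computes the standard 2009 formulation on placeholder discharge series (not the Cuneo data):

```python
# Minimal sketch of the Kling-Gupta Efficiency used for calibration;
# the two series below are placeholders, not Cuneo discharge data.
import numpy as np

def kge(sim, obs):
    # KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), with r the linear
    # correlation, alpha the ratio of standard deviations, beta the bias ratio.
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([12.0, 9.5, 7.1, 5.0, 3.8, 3.1, 2.9, 4.4, 8.0, 11.2])  # m^3/s
sim = np.array([11.4, 9.9, 6.5, 4.6, 3.5, 3.3, 2.7, 4.9, 8.6, 10.8])

print(f"KGE = {kge(sim, obs):.3f}")
```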

Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling

Procedia PDF Downloads 25
388 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic

Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova

Abstract:

Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 pathogens during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. Training used a five-fold (outer) nested, five-times-repeated ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as the case group and the non-Covid ARVI dataset as the control group to train and test the machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the support vector machine with a radial basis function kernel. Conclusion: ML models trained on MALDI-MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
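
The resampling scheme, a five-fold outer loop around a five-times-repeated ten-fold inner loop with stratified splits, was run in R's caret; the sketch below is a rough scikit-learn analogue on synthetic data (not the MALDI-MS spectra), using an RBF-kernel SVM as a stand-in for the SVM-R model:

```python
# Rough scikit-learn analogue of the nested, repeated, stratified CV scheme;
# synthetic data and the SVC parameter grid are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)
from sklearn.svm import SVC

# Synthetic stand-in for 179 spectra (108 cases + 71 controls), 200 features.
X, y = make_classification(n_samples=179, n_features=200, n_informative=20,
                           weights=[0.4, 0.6], random_state=0)

inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Inner loop tunes the SVM-R hyperparameters; outer loop estimates performance.
model = GridSearchCV(SVC(kernel="rbf"),
                     param_grid={"C": [1, 10], "gamma": ["scale", 0.01]},
                     cv=inner, scoring="accuracy", n_jobs=-1)

scores = cross_val_score(model, X, y, cv=outer, scoring="accuracy")
print(f"outer-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```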

Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification

Procedia PDF Downloads 88
387 Development of an Instrument Assessing Participants’ Motivation on Assigning Monetary Value to Quality of Life

Authors: Afentoula Mavrodi, Andreas Georgiou, Georgios Tsiotras, Vassilis Aletras

Abstract:

Placing a monetary value on a quality-adjusted life-year (QALY) is of utmost importance in economic evaluation. Identifying the population’s preferences is critical in order to understand some of the reasons driving variations in the assigned monetary value. Yet, evidence on the motives behind the value assigned to a QALY by the general public is limited. Developing an instrument that captures the population’s motives could prove valuable to policy-makers, guiding them in allocating different values to a QALY based on users’ motivations. The aim of this study was to identify the most relevant motives and develop an appropriate instrument to assess them. To design the instrument, we employed: a) the EQ-5D-3L tool to assess participants’ current health status, and b) the Willingness-to-Pay (WTP) approach, within the Contingent Valuation (CV) Method framework, to elicit the monetary value. Advancing the open-ended approach previously adopted to assess solely protest bidders’ motives, a variety of follow-up item-specific statements were designed (deductive approach), aiming to evaluate the motives of both protest bidders and participants willing to pay for the hypothetical treatment under consideration. The initial design of the survey instrument was the outcome of an extensive literature review. This instrument was revised based on 15 semi-structured interviews that took place in September 2018 and a pilot study held over two months (October-November 2018). Individuals with different educational, occupational and economic backgrounds and adequate verbal skills were recruited to complete the semi-structured interviews. The follow-up motivation statements of both protest bidders and those willing to pay were revised and rephrased after the semi-structured interviews. In total, 4 statements for protest bidders and 3 statements for those willing to pay for the treatment were chosen for inclusion in the survey tool. Using the CATI (Computer Assisted Telephone Interview) method, a randomly selected sample of 97 persons living in Thessaloniki, Greece, completed the questionnaire on two occasions over a period of 4 weeks. Based on the pilot study results, a test-retest reliability assessment was performed using the intra-class correlation coefficient (ICC). All statements formulated for protest bidders showed acceptable reliability (ICC values of 0.84 (95% CI: 0.67, 0.92) and above). Similarly, all statements for those willing to pay for the treatment showed high reliability (ICC values of 0.86 (95% CI: 0.78, 0.91) and above). Overall, the instrument designed in this study was reliable with regard to the item-specific statements assessing participants’ motivation. Validation of the instrument will take place in a future study. For a holistic WTP-per-QALY instrument, participants’ motivation must be addressed broadly. The instrument developed in this study captured a variety of motives and provided insight into the method through which the latter are evaluated. Last but not least, it extended motive assessment to all study participants and not only protest bidders.
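
Test-retest reliability of this kind is typically summarised with a two-way intraclass correlation; the sketch below computes ICC(2,1) (Shrout and Fleiss) from first principles on invented two-occasion ratings, not the Thessaloniki pilot data:

```python
# Hedged sketch of a two-way ICC for test-retest agreement, ICC(2,1);
# the ratings below are invented, not the pilot study data.
import numpy as np

def icc_2_1(scores):
    # scores: (n_subjects, k_occasions) matrix of item ratings.
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between occasions
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))         # residual error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
first = rng.integers(1, 6, size=30).astype(float)        # 5-point item, visit 1
second = np.clip(first + rng.normal(0, 0.6, 30), 1, 5)   # noisy repeat, visit 2
print(f"ICC(2,1) = {icc_2_1(np.column_stack([first, second])):.2f}")
```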

Keywords: contingent valuation method, instrument, motives, quality-adjusted life-year, willingness-to-pay

Procedia PDF Downloads 123
386 The Survey of Relationship between Health Literacy and Knowledge of Heart Failure with Rehospitalization in Patients with Heart Failure Admitted to Heart Failure Clinic

Authors: Jaleh Mohammad Aliha, Rezvan Razazi, Nasim Naderi

Abstract:

Introduction: Despite progress in new effective drugs for the treatment of heart failure, the disease is still accompanied by frequent hospitalization, impaired quality of life, early mortality and a significant economic burden. Patients with chronic disease, and consequently patients with heart failure, need knowledge and optimal health literacy to improve their quality of life and minimize the rate of rehospitalization. Considering the importance of knowledge and health literacy in these patients, as well as the contradictory literature, this study was conducted to investigate the relationship between health literacy and knowledge of heart failure with rehospitalization in patients with heart failure admitted to the heart failure clinic of the Rajai Heart Center in 1394. Methods: A cross-sectional design with convenience sampling was used in this study. After obtaining the necessary permissions from the ethics committee and the Shahid Rajai Heart Center, 238 patients who were older than 18 years, had an ejection fraction of 35% or less, were able to read and write, had no psychiatric, neurological or cognitive disorders, and signed the informed consent were recruited. Data collection was performed using a demographic data questionnaire, the short standard health literacy questionnaire 'Short-TOFHLA-16' and Vanderwall's (2005) knowledge of heart failure questionnaire. Reliability was assessed by the internal consistency method, and Cronbach's alpha for both questionnaires was more than 0.7. Data were then analysed in SPSS-20 using descriptive statistics and analytical tests such as the t-test, chi-square and ANOVA. Results: The majority of patients were male (66%), married (80%) and aged between 50 and 70 years (42%). The majority of the studied men and women had good health literacy, and about half of them had adequate knowledge about heart failure. Fisher's exact test showed a statistically significant association between health literacy and knowledge about heart failure; in other words, higher health literacy was associated with more knowledge about their condition. The findings also showed no statistically significant association between health literacy or knowledge about heart failure and the frequency of CCU and emergency admissions. Conclusion: The study results showed that higher health literacy was associated with greater knowledge about heart failure and with patients' understanding of care recommendations and disease outcomes. Therefore, knowledge about heart failure and the factors related to the severity of the disease is an important issue for problem identification, treatment and the reduction of rehospitalization.
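
The association reported above rests on a Fisher's exact test; the sketch below shows the computation on an invented 2x2 table of health literacy against knowledge (the counts are placeholders, not the study data):

```python
# Illustrative Fisher's exact test on a placeholder 2x2 table; the counts
# are invented, not the study data.
from scipy.stats import fisher_exact

#                 adequate knowledge   inadequate knowledge
table = [[70, 25],    # adequate health literacy
         [35, 45]]    # limited health literacy

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```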

Keywords: health literacy, heart failure, knowledge, rehospitalization

Procedia PDF Downloads 383
385 Status Quo Bias: A Paradigm Shift in Policy Making

Authors: Divyansh Goel, Varun Jain

Abstract:

Classical economics works on the principle that people are rational and analytical in their decision making and that their choices fall in line with the most suitable option according to the dominant strategy in a standard game theory model. This model has failed on many occasions to predict the behavior and dealings of rational people, giving proof of other underlying heuristics and cognitive biases at work. This paper probes into the study of these factors, which fall under the umbrella of behavioral economics, and through them explores the solution to a problem which many nations presently face. There has long been a wide disparity between the number of people holding favorable views on organ donation and the actual number of people signing up for it. This paper, in its entirety, is an attempt to shape public policy so as to increase the number of organ donations that take place and close the gap between the people who believe in signing up for organ donation and the ones who actually do. The key assumption here is that in cases of cognitive dissonance, where people hold conflicting views, they tend to go with the default choice. This tendency is a well-documented cognitive bias known as the status quo bias. The research in this project involves an assay of mandated choice models of organ donation with two case studies: the first of the opt-in system of Germany (where people have to explicitly sign up for organ donation) and the second of the opt-out system of Austria (where every citizen is an organ donor from birth and has to explicitly sign up for refusal). Additionally, a detailed analysis of the experiment performed by Eric J. Johnson and Daniel G. Goldstein is presented. Their research, as well as other independent experiments such as that by Tsvetelina Yordanova of the University of Sofia, yields similar results. The conclusion is that the general population by and large has no rigid stand on organ donation and is susceptible to the status quo bias, which in turn can determine whether a large majority of people will consent to organ donation or not. Thus, in our paper, we throw light on how governments can use the status quo bias to drive positive social change by making policies in which everyone is by default marked as an organ donor, which will, in turn, save the lives of people who succumb on organ transplantation waitlists and save the economy countless hours of economic productivity.

Keywords: behavioral economics, game theory, organ donation, status quo bias

Procedia PDF Downloads 287
384 Learning to Teach in Large Classrooms: Training Faculty Members from Milano Bicocca University, from Didactic Transposition to Communication Skills

Authors: E. Nigris, F. Passalacqua

Abstract:

Relating to recent research in the field of faculty development, this paper presents a pilot training programme realized at the University of Milano-Bicocca to improve the teaching skills of faculty members. A total of 57 professors (both full professors and associate professors) were trained during the pilot programme in three editions of a workshop focused on promoting skills for teaching large classes. The study takes into account: 1) the theoretical framework of the programme, which combines the recent tradition of professional development with research on the in-service training of school teachers; 2) the structure and content of the training programme, organized as a 12-hour full-immersion workshop plus individual consultations; 3) the educational specificity of the training programme, which is based on the relation between 'general didactics' (active learning methodologies; didactic communication) and 'disciplinary didactics' (didactic transposition and reconstruction); 4) results about the impact of the training programme, related both to the workshop and to the individual consultations. This study aims to provide insights mainly on two levels of the training programme's impact ('behaviour change' and 'transfer'), and for this reason learning outcomes are evaluated with different instruments: a questionnaire filled out by all 57 participants; 12 in-depth interviews; 3 focus groups; and conversation transcriptions of workshop activities. Data analysis is based on a descriptive qualitative approach and is conducted through thematic analysis of the transcripts, using analytical categories derived principally from didactic transposition theory. The results show that the training programme effectively developed three major skills regarding different stages of the 'didactic transposition' process: a) content selection, i.e. a more accurate selection and reduction of the 'scholarly knowledge', conforming to the first stage of the didactic transposition process; b) the consideration of students' prior knowledge and misconceptions within the lesson design, in order to connect the 'scholarly knowledge' effectively to the 'knowledge to be taught' (second stage of the didactic transposition process); c) the way of asking questions and managing discussion in large classrooms, in line with the transformation of the 'knowledge to be taught' into 'taught knowledge' (third stage of the didactic transposition process).

Keywords: didactic communication, didactic transposition, instructional development, teaching large classroom

Procedia PDF Downloads 125
383 Geotechnical Education in the USA: A Comparative Analysis of Academic Schooling vs. Industry Needs in the Area of Earth Retaining Structures

Authors: Anne Lemnitzer, Eric Tavarez

Abstract:

The academic rigor of the geotechnical engineering curriculum shows strong institutional and geographical variation. Geotechnical engineering deals with the most challenging civil engineering material, as opposed to structural engineering, environmental studies, transportation engineering, and water resources. Yet, the technical expectations posed by the practicing professional community do not necessarily consider the challenges inherent in this disparity in academic rigor or in disciplinary differences. To identify the skill shortages among current graduates as well as opportunities to better equip graduate students in specific fields of geotechnical engineering, a two-part survey was developed in collaboration with the Earth Retaining Structures (ERS) Committee of the American Society of Civil Engineers. Earth Retaining Structures are critical components of infrastructure systems and integral to many major engineering projects. Within the geotechnical curriculum, Earth Retaining Structures is taught either as a separate course or as a major subject within a foundation design class. Part 1 of the survey investigated the breadth and depth of the curriculum with respect to ERS by asking faculty across the United States to provide data on their curricular content, integration of practice-oriented course content, student preparation for professional licensing, and the level of technical competency expected of students upon graduation. Part 2 of the survey enables a comparison of the training provided versus the training needed. This second survey addressed practicing geotechnical engineers in all sectors of the profession (e.g., private engineering consulting, governmental agencies, contractors, suppliers/manufacturers) and collected data on expectations with respect to the technical and non-technical skills of engineering graduates entering the professional workforce. The results identified skill shortages in soft skills, critical thinking, analytical and language skills, familiarity with design codes and standards, and communication with various stakeholders. The data will be used to develop educational tools to advance the proficiency and expertise of geotechnical engineering students to meet and exceed the expectations of the profession and to stimulate a lifelong interest in advancing the field of geotechnical engineering.

Keywords: geotechnical engineering, academic training, industry requirements, earth retaining structures

Procedia PDF Downloads 112