Search results for: lexical complexity
392 One or More Building Information Modeling Managers in France: The Confusion of the Kind
Authors: S. Blanchard, D. Beladjine, K. Beddiar
Abstract:
Since 2015, the arrival of BIM in the building sector in France has turned the professional world upside down. Not only have construction practices been impacted, but also usages and the people involved, who have undergone important changes. Thus, the new collaborative mode generated by BIM and the digital model has challenged the supremacy of some construction actors, because the process involves working together while taking into account the needs of other contributors. New BIM tools have emerged, and actors in the act of building must take ownership of them. It is in this context that, under the impetus of a European directive and with the French government's encouragement, new missions and job profiles have appeared. Moreover, concurrent engineering requires that each actor can advance at the same time as the others, according to the information that reaches him and the information he has to transmit. However, the French legal system around public procurement is not organized in this direction, so a substantial evolution must take place to adapt to the methodology. The new missions generated by BIM in France require a good mastery of the tools and the process. To meet the objectives of the BIM approach, it is possible to define a typical job profile around BIM, adapted to the various sectors concerned. The multitude of job offers using the same terms with very different objectives, and the complexity of the proposed missions, motivated our approach. In order to reinforce exchanges with professionals and specialists, we carried out a statistical study to answer this problem. Five topics are discussed around the business area: BIM in the company, the function (business), the software used, and the BIM missions practiced (39 items). About 1,400 professionals were interviewed. These people work in construction companies (micro businesses, SMEs, and groups), engineering offices, or architectural agencies. 77% of respondents have the status of employees.
All participants are qualified in their trade, the majority at level 1. Most have less than a year of experience in BIM, but some have 10 years. The results of our survey help to explain why it is not possible to define a single type of BIM Manager. Indeed, the specificities of the companies are so numerous and complex, and the missions so varied, that there is no single model for the function. On the other hand, it was possible to define three main professions around BIM (Manager, Coordinator, and Modeler) and three main missions for the BIM Manager (deployment of the method, assistance to project management, and management of a project). Keywords: BIM manager, BIM modeler, BIM coordinator, project management
Procedia PDF Downloads 163
391 Incident Management System: An Essential Tool for Oil Spill Response
Authors: Ali Heyder Alatas, D. Xin, L. Nai Ming
Abstract:
An oil spill emergency can vary in size and complexity, subject to factors such as the volume and characteristics of the spilled oil, incident location, impacted sensitivities and resources required. A major incident typically involves numerous stakeholders; these include the responsible party, response organisations, government authorities across multiple jurisdictions, local communities, and a spectrum of technical experts. An incident management team will encounter numerous challenges. Factors such as limited access to the location, adverse weather, poor communication, and a lack of pre-identified resources can impede a response; delays caused by an inefficient response can exacerbate impacts on the wider environment and on socio-economic and cultural resources. It is essential that all parties work on the basis of defined roles, responsibilities and authority, and ensure the availability of sufficient resources. To promote steadfast coordination and overcome the challenges highlighted, an Incident Management System (IMS) offers an essential tool for oil spill response. It provides clarity in command and control, improves communication and coordination, facilitates cooperation between stakeholders, and integrates the resources committed. Following the preceding discussion, a comprehensive review of existing literature serves to illustrate the application of IMS in oil spill response to overcome common challenges faced in a major-scale incident. With a primary audience comprising practitioners in mind, this study will discuss key principles of incident management which enable an effective response, along with pitfalls and challenges, particularly the tension between government and industry; case studies will be used to frame learning and issues consolidated from previous research, and to provide the context to link practice with theory.
It will also feature the industry approach to incident management, which was further crystallized as part of a review by the Joint Industry Project (JIP) established in the wake of the Macondo well control incident. The authors posit that a common IMS which can be adopted across the industry not only enhances response capacity towards a major oil spill incident but is essential to the global preparedness effort. Keywords: command and control, incident management system, oil spill response, response organisation
Procedia PDF Downloads 156
390 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. In case river water levels increase abruptly, especially in winter time, one can expect massive extra load on a textile protective structure in terms of impact as a result of floating debris and even tree trunks. Estimation of this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This fact provides the motivation for the numerical analysis of a fluid-structure interaction application, comprising a flexible slender-shaped membrane and free-surface water flow, where an accelerated heavy flotsam tends to approach the membrane. In this context, the analysis of both the behavior of the flexible membrane and its interaction with the moving flotsam is conducted with the finite-element-based explicit and implicit solvers of Abaqus, available as products of the SIMULIA software suite. On the other hand, a study of how free-surface water flow behaves in response to moving structures has been carried out using the finite volume solver of STAR-CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and the implementation of an effective partitioned strategy in the form of an implicit coupling algorithm make it possible for the partitioned domains to be robustly interconnected. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh criterion in the fluid domain, where the two structures approach each other. This contribution presents the approaches for the establishment of a convergent numerical solution and compares the results with experimental findings. Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam
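The implicit partitioned coupling described above can be illustrated with a minimal one-dimensional sketch: a stand-in "fluid" solver returns an interface load for a given membrane displacement, a stand-in "structure" solver returns a displacement for that load, and the two are iterated to a fixed point with constant under-relaxation. All models and constants below are hypothetical stand-ins, not the Abaqus/STAR-CCM+ setup used in the study.

```python
# Toy implicit partitioned FSI coupling: within each coupling iteration,
# the "fluid" solver returns a load for the current interface displacement
# and the "structure" solver returns a displacement for that load.
# Constant under-relaxation stabilizes the fixed-point iteration.

def fluid_load(displacement):
    # Stand-in fluid solver: load decreases as the membrane deflects away.
    return 100.0 - 40.0 * displacement

def structure_displacement(load):
    # Stand-in structure solver: linear spring, u = F / k with k = 200.
    return load / 200.0

def coupled_solve(omega=0.5, tol=1e-10, max_iter=100):
    u = 0.0  # initial interface displacement
    for i in range(max_iter):
        u_new = structure_displacement(fluid_load(u))
        residual = abs(u_new - u)
        u = u + omega * (u_new - u)  # under-relaxed update
        if residual < tol:
            return u, i + 1
    raise RuntimeError("coupling iteration did not converge")

u, iters = coupled_solve()
print(u, iters)
```

For this linear toy problem the fixed point can be checked by hand (u = 100/240); in the real nonlinear setting the same loop structure applies, but the relaxation factor is typically adapted (e.g. by Aitken's method) rather than held constant.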
Procedia PDF Downloads 389
389 LTE Modelling of a DC Arc Ignition on Cold Electrodes
Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov
Abstract:
The assumption of plasma in local thermal equilibrium (LTE) is commonly used to perform electric arc simulations for industrial applications. This assumption allows modelling the arc using a set of magnetohydrodynamic equations that can be solved with a computational fluid dynamics code. However, the LTE description is only valid in the arc column, whereas in the regions close to the electrodes the plasma deviates from the LTE state. The importance of these near-electrode regions is non-trivial, since they define the energy and current transfer between the arc and the electrodes. Therefore, any accurate modelling of the arc must include a good description of the arc-electrode phenomena. Due to the modelling complexity and computational cost of solving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced at the interface between arc and electrode as boundary conditions. The present work proposes a similar approach to simulate the arc ignition in a free-burning arc configuration following an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and the near-anode regions are used, based on recent publications. The arc-cathode interaction is modeled using a non-linear surface heating approach that accounts for secondary electron emission. On the other hand, the interaction between the arc and the anode is taken into account by means of the heating voltage approach. From the numerical modelling, three main stages can be identified during the arc ignition. Initially, a glow discharge is observed, where the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is on the order of a few hundred volts.
Next, a spot with high temperature forms at the cathode tip, followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc discharge transition. During this stage, the LTE plasma also shows a significant increase in temperature in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts, and both the electrode and plasma temperatures reach the steady solution. The results after some seconds are similar to those presented for thermionic cathodes. Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes
Procedia PDF Downloads 122
388 Modelling for Roof Failure Analysis in an Underground Cave
Authors: M. Belén Prendes-Gero, Celestino González-Nicieza, M. Inmaculada Alvarez-Fernández
Abstract:
Roof collapse remains one of the most frequent problems in mines of all countries. There are many reasons that may cause a roof to collapse, namely the stress activities in the mining process, a lack of vigilance and carelessness, or the complexity of the geological structure and irregular operations. This work results from the analysis of an accident produced in the “Mary” coal exploitation located in northern Spain, in which the roof of a crossing of galleries excavated to exploit the “Morena” layer, 700 m deep, collapsed. The paper collects the work done by the forensic team to determine the causes of the incident, its conclusions and its recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and of the support used (metal frames and shotcrete). With this information, different hypotheses of failure were simulated to find the one that best fits reality. For this work, the three-dimensional finite difference software FLAC3D was employed. The results of the study confirmed that the detachment originated as a consequence of sliding in the layer wall, due to the large roof span present at the place of the accident, and was probably triggered by the existence of an insufficient protection pillar. The results allowed establishing corrective measures to avoid future risks: for example, the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports more adequate for these conditions, in which the significant deformations may discourage the use of rigid supports such as shotcrete. Finally, a seismic monitoring grid was proposed as a predictive system.
Its efficiency was tested over the investigation period using three monitoring units, which detected new (although smaller) incidents in other similar areas of the mine. These new incidents show that the use of explosives produces vibrations, which are a new risk factor to analyse in the near future. Keywords: forensic analysis, hypothesis modelling, roof failure, seismic monitoring
Procedia PDF Downloads 115
387 Identification of Bioactive Substances of Opuntia ficus-indica By-Products
Authors: N. Chougui, R. Larbat
Abstract:
The primary economic importance of Opuntia ficus-indica lies in the production of edible fruits. This food transformation generates a large amount of by-products (seeds and peels), in addition to the cladodes produced by the plant. Several studies have shown the richness of these products in bioactive substances such as phenolics, which have potential applications. Indeed, phenolics have been associated with protection against oxidation and with several biological activities relevant to different pathologies. Consequently, there has been a growing interest in identifying natural antioxidants from plants. This study falls within the framework of the industrial exploitation of the by-products of the plant. It aims to investigate the metabolic profile of three by-products (cladodes, peels, seeds) regarding total phenolic content, by a liquid chromatography coupled to mass spectrometry approach (LC-MSn). The by-products were first washed, crushed and stored at negative temperature. The total phenolic compounds were then extracted with an aqueous-ethanolic solvent in order to be quantified and characterized by LC-MS. According to the results obtained, the peel extract was the richest in phenolic compounds (1512.58 mg GAE/100 g DM), followed by the cladode extract (629.23 mg GAE/100 g DM) and finally by the seed extract (88.82 mg GAE/100 g DM), which is mainly used for its oil. The LC-MS analysis revealed a diversity of phenolics in the three extracts and allowed the identification of hydroxybenzoic acids, hydroxycinnamic acids and flavonoids. The highest complexity was observed in the seed phenolic composition; more than twenty compounds were detected that belong to acid esters, among which three feruloyl sucrose isomers. Sixteen compounds belonging to hydroxybenzoic acids, hydroxycinnamic acids and flavonoids were identified in the peel extract, whereas only nine compounds were found in the cladode extract.
It is interesting to highlight that the phenolic composition of the cladode extract was closer to that of the peel extract. However, from a quantitative viewpoint, the peel extract presented the highest amounts. Piscidic and eucomic acids were the two most concentrated molecules, corresponding to 271.3 and 121.6 mg GAE/100 g DM, respectively. The identified compounds are known to have high antioxidant and antiradical potential, with the ability to inhibit lipid peroxidation, and to exhibit a wide range of biological and therapeutic properties. The findings highlight the importance of using Opuntia ficus-indica by-products. Keywords: characterization, LC-MSn analysis, Opuntia ficus-indica, phenolics
Procedia PDF Downloads 229
386 The Regionalism Paradox in the Fight against Human Trafficking: Indonesia and the Limits of Regional Cooperation in ASEAN
Authors: Nur Iman Subono, Meidi Kosandi
Abstract:
This paper examines the role of regional cooperation in the Association of Southeast Asian Nations (ASEAN) in the fight against human trafficking for Indonesia. Many scholars suggest that regional cooperation is necessary for combating human trafficking, given its transnational and organized character as a crime against humanity. ASEAN members have been collectively active in responding to transnational security issues with a series of talks and collaboration agreements since the early 2000s. In 2015, ASEAN agreed on the ASEAN Convention against Trafficking in Persons, Especially Women and Children (ACTIP), which requires each member to collaborate in information sharing and to provide effective safeguards and protection of victims. Yet the frequency of human trafficking crime remains high and tended to increase in Indonesia in 2017-2018. The objective of this paper is to examine the effectiveness and success of the implementation of ACTIP in the fight against human trafficking in Indonesia. Based on two years of research (2017-2018) in the three provinces with the largest numbers of victims in Indonesia, this paper shows the tendency of the crime to persist despite the implementation of regional and national anti-trafficking policies. The research was conducted through archive study, literature study, discourse analysis, and in-depth interviews with local government officials, police, prosecutors, victims, and traffickers. This paper argues that the relative success of ASEAN in establishing the convention at high-level meetings has not been followed by success in its implementation in society. Three main factors have contributed to the ineffectiveness of the agreements: (1) ASEAN's institutional arrangement as a collection of sovereign states instead of a supranational organization with binding authority; (2) the lack of commitment of ASEAN's sovereign member-states to the agreements; and (3) the complexity and variety of the nature of the crime in each member-state.
In effect, these factors have contributed to generating the regionalism paradox in ASEAN, whereby states tend to revert to national policies instead of seeking regional collective solutions. Keywords: human trafficking, transnational security, regionalism, anti-trafficking policy
Procedia PDF Downloads 159
385 Economic Factors Affecting Greenfield Petroleum Refinery and Petrochemical Projects in Africa
Authors: Daniel Muwooya
Abstract:
This paper analyses economic factors that have affected the competitiveness of petroleum refinery and petrochemical projects in sub-Saharan Africa in the past and continue to plague greenfield projects today. Traditional factors like plant sizing and complexity, low capacity utilization, a changing regulatory environment, and tighter product specifications have been important in the past. Additional factors include the development of excess refinery capacity in Asia and the growth of renewable sources of energy, especially for transportation. These factors create both challenges and opportunities for the development of greenfield refineries and petrochemical projects in areas of increased demand growth and new low-cost crude oil production, like sub-Saharan Africa. This paper evaluates the strategies available to project developers and host countries to address contemporary issues of energy transition and the apparent reduction of funds available for greenfield oil and gas projects. The paper also evaluates the structuring of greenfield refinery and petrochemical projects for limited-recourse project finance bankability. The methodology of this paper includes: analysis of current industry data, conference proceedings, academic papers, and academic books on the subjects of petroleum refinery economics, refinery financing, refinery operations, and project finance, generally and specifically in the oil and gas industry; evaluation of expert opinions from journal articles and from working papers of international bodies like the World Bank and the International Energy Agency; and experience from playing an active role in the development and financing of a US$ 10 billion greenfield oil development project in Uganda. The paper also applies discounted cash flow modelling to illustrate the circumstances of an inland greenfield refinery project in Uganda.
Greenfield refinery and petrochemical projects are still necessary in sub-Saharan Africa to, among other aspirations, support the transition from traditional sources of energy like biomass to such modern forms as liquefied petroleum gas. Project developers and host governments will be required to structure projects that support global climate change goals without occasioning undue delays to project execution. Keywords: financing, refinery and petrochemical economics, Africa, project finance
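The discounted cash flow modelling mentioned above reduces to a short calculation once the cash flow profile is assumed. The sketch below uses purely illustrative figures (capital cost, steady-state net cash flow, project life), not data from the Ugandan project.

```python
# Minimal discounted cash flow sketch for a greenfield refinery project.
# All figures are illustrative assumptions, not data from the Uganda project.

def npv(rate, cashflows):
    # cashflows[0] occurs at time 0 (e.g. the capital outlay, negative).
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

capex = -4000.0           # year-0 investment, US$ million (assumed)
annual_net_cash = 550.0   # steady-state net cash flow, US$ million/yr (assumed)
life_years = 25

cashflows = [capex] + [annual_net_cash] * life_years
print(round(npv(0.10, cashflows), 1))  # NPV at a 10% discount rate
```

With these assumed numbers the project is NPV-positive at 10% but NPV-negative at 20%, which is exactly the sensitivity to the cost of capital that makes limited-recourse bankability so dependent on project structuring.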
Procedia PDF Downloads 59
384 Application of a Theoretical Framework as a Context for a Travel Behavior Change Policy Intervention
Authors: F. Moghtaderi, M. Burke, J. Troelsen
Abstract:
There has been a significant decline in active travel and a massive increase in the use of car-dependent travel modes in many countries during the past two decades. This increased use of motorized travel modes is accompanied by evident risks to people's physical and mental health, ranging from overweight and obesity to increasing air pollution. In response to these rising concerns, local councils and other interested organizations around the world have introduced a variety of initiatives to reduce the dominance of cars in daily journeys. However, the nature of these kinds of interventions, which relate to human behavior, creates many complexities. People's travel behavior, and changing this behavior, has two different aspects. One is people's attitudes and perceptions toward sustainable and healthy modes of travel and toward motorized travel modes (especially private car use). The other relates to people's behavior change processes. There is no comprehensive model to guide policy interventions and increase their likelihood of success. A comprehensive theoretical framework is required to facilitate and guide the processes of data collection and analysis and to achieve the best possible guidelines for policy makers. Given this gap in travel behavior change research, this paper attempts to identify and suggest a multidimensional framework to facilitate planning interventions. A structured mixed-method approach is suggested to expand the scope and improve the analytic power of the results, given the complexity of human behavior. In order to understand people's attitudes, a theory with a focus on attitudes towards a particular travel behavior was needed. The literature around the theory of planned behavior (TPB) was the most useful, and the TPB has been proven to be a good predictor of behavior change.
The other aspect of the research relates to people's decision-making processes, in order to derive guidelines for further interventions. Therefore, a theory was needed to facilitate and direct the interventions' design. The concept of the transtheoretical model of behavior change (TTM) was used to reach a set of useful guidelines for further interventions aimed at increasing active travel and sustainable modes of travel. Consequently, a combination of these two theories (TTM and TPB) is presented as an appropriate concept for identifying and designing travel behavior change interventions. Keywords: behavior change theories, theoretical framework, travel behavior change interventions, urban research
Procedia PDF Downloads 373
383 Thermal Evaluation of Printed Circuit Board Design Options and Voids in Solder Interface by a Simulation Tool
Authors: B. Arzhanov, A. Correia, P. Delgado, J. Meireles
Abstract:
Quad Flat No-Lead (QFN) packages have become very popular for tuners, converters and audio amplifiers, among other applications needing efficient power dissipation in small footprints. Since semiconductor junction temperature (TJ) is a critical parameter for product quality, and to ensure that die temperature does not exceed the maximum allowable TJ, a thermal analysis conducted in an early development phase is essential to avoid a repeated re-design process with huge losses in cost and time. A simulation tool capable of estimating the die temperature of components with QFN packages was developed. It allows establishing a non-empirical way to define an acceptance criterion for the amount of voids in the solder interface between the exposed pad and the Printed Circuit Board (PCB), to be applied during the industrialization process, and to evaluate the impact of PCB design parameters. Targeting PCB layout designers as end users of the application, a user-friendly graphical interface (GUI) was implemented, allowing the user to introduce design parameters in a convenient and secure way while hiding all the complexity of the finite element simulation process. This cost-effective tool makes the simulation process transparent and provides useful outputs within acceptable time, which can be adopted by PCB designers, preventing potential risks during the design stage and making the product economically efficient by not oversizing it. This article gathers relevant information related to the design and implementation of the developed tool, presenting a parametric study conducted with it. The simulation tool was experimentally validated using a Thermal-Test-Chip (TTC) in an open-cavity QFN, in order to measure junction temperature (TJ) directly on the die under controlled and known conditions. The paper provides a short overview of standard thermal solutions and their impact in exposed-pad packages (i.e. QFN), accurately describes the methods and techniques that the system designer should use to achieve optimum thermal performance, and demonstrates the effect of system-level constraints on the thermal performance of the design. Keywords: QFN packages, exposed pads, junction temperature, thermal management and measurements
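As a rough illustration of the kind of estimate such a tool produces, a first-order resistor-network calculation of junction temperature can include a solder-interface term that grows with void fraction. The thermal resistance values and the simple void model below are assumptions chosen for illustration, not the validated finite element model described in the paper.

```python
# First-order junction temperature estimate for an exposed-pad (QFN) package.
# TJ = TA + P * (theta_junction_to_board + theta_solder + theta_board_to_ambient).
# The theta values and the void model are illustrative assumptions only.

def junction_temperature(power_w, t_ambient_c, theta_jb, theta_ba, void_fraction=0.0):
    # Model the solder-interface resistance as inversely proportional to the
    # non-voided pad area: voids shrink the effective conduction path.
    theta_solder = 2.0 / (1.0 - void_fraction)  # K/W, assumed 2 K/W baseline
    theta_total = theta_jb + theta_solder + theta_ba
    return t_ambient_c + power_w * theta_total

tj_no_voids = junction_temperature(2.0, 25.0, theta_jb=5.0, theta_ba=20.0)
tj_30pct = junction_temperature(2.0, 25.0, theta_jb=5.0, theta_ba=20.0,
                                void_fraction=0.3)
print(tj_no_voids, tj_30pct)
```

Even this crude model shows why a void acceptance criterion matters: the penalty of a given void fraction scales with dissipated power, so a limit that is harmless at 0.5 W can push TJ past its maximum at 2 W.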
Procedia PDF Downloads 256
382 An Exploratory Factor Analysis Approach to Explore Barriers to Oracy Proficiency among Thai EFL Learners
Authors: Patsawut Sukserm
Abstract:
Oracy proficiency, encompassing both speaking and listening skills, is vital for EFL learners, yet Thai university students often face significant challenges in developing these abilities. This study aims to identify and analyze the barriers that hinder oracy proficiency in EFL learners. To achieve this, a questionnaire was developed based on a comprehensive review of the literature and administered to a large cohort of Thai EFL students. The data were subjected to exploratory factor analysis (EFA) to validate the questionnaire and uncover the underlying factors influencing learners' performance. The results revealed that the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.912 and that Bartlett's test of sphericity was significant at 2345.423 (p < 0.05), confirming the suitability of the data for factor analysis. Five main barriers to oracy proficiency emerged, namely Listening and Comprehension Obstacles (LCO), Accent and Speech Understanding (ASU), Speaking Anxiety and Confidence Issues (SACI), Fluency and Expression Issues (FEI), and Grammar and Conversational Understanding (GCU), with eigenvalues ranging from 1.066 to 12.990 and explaining 60.305% of the variance of the 32 variables. These findings highlight the complexity of the challenges faced by Thai EFL learners and emphasize the need for diverse and authentic listening experiences, a supportive classroom environment, and balanced grammar instruction. They suggest that educators, curriculum developers, and policymakers should implement evidence-based strategies to address these barriers in order to improve Thai EFL learners' oral proficiency and enhance their overall academic and professional success. The study discusses these findings in depth, offering evidence-based strategies for addressing the barriers.
Recommendations include integrating diverse and authentic listening experiences, fostering a supportive classroom environment, and providing targeted instruction in both speaking fluency and grammar. The study's implications extend to educators, curriculum developers, and policymakers, offering practical solutions to enhance learners' oracy proficiency and support their academic and professional development. Keywords: exploratory factor analysis, barriers, oracy proficiency, EFL learners
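The factor-retention arithmetic reported above (eigenvalues above 1, cumulative variance explained over 32 items) can be sketched as follows. The eigenvalue list is illustrative, except that a correlation matrix of 32 standardized variables has trace 32, so the percentage of variance is the retained eigenvalue sum divided by 32.

```python
# Kaiser-criterion sketch: retain factors with eigenvalue > 1 and report the
# cumulative variance explained. With 32 standardized items, the eigenvalues
# of the correlation matrix sum to 32, so % variance = retained sum / 32.
# The eigenvalue list below is illustrative, not the study's actual solution.

def retain_factors(eigenvalues, n_items):
    retained = [ev for ev in eigenvalues if ev > 1.0]
    percent_explained = 100.0 * sum(retained) / n_items
    return len(retained), percent_explained

# First five values bracket the reported range (12.990 down to 1.066);
# the trailing values fall below the Kaiser cutoff and are dropped.
eigenvalues = [12.990, 2.2, 1.7, 1.35, 1.066, 0.92, 0.85]
n_factors, pct = retain_factors(eigenvalues, n_items=32)
print(n_factors, round(pct, 3))
```

In practice the Kaiser rule is usually cross-checked against a scree plot or parallel analysis before fixing the number of factors, since eigenvalues just above 1 (such as the fifth one here) are borderline.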
Procedia PDF Downloads 21
381 Jan's Life-History: Changing Faces of Managerial Masculinities and Consequences for Health
Authors: Susanne Gustafsson
Abstract:
Life-history research is an extraordinarily fruitful method for social analysis, and for gendered health analysis in particular. Its potential is illustrated through a case study drawn from a Swedish project. It reveals an old type of masculinity that faces difficulties when carrying out two sets of demands simultaneously, as a worker/manager and as a father/husband. The paper illuminates the historical transformation of masculinity and its consequences for health. We draw on the idea of the “changing faces of masculinity” to explore the dynamism and complexity of gendered health, using an empirical case for its illustrative ability. Jan, a middle-level manager and father employed in the energy sector in urban Sweden, is the subject of this paper. Jan's story is one of 32 semi-structured interviews included in an extended study focusing on well-being at work. The results reveal a face of masculinity conceived of in middle-level management as tacitly linked to the neoliberal doctrine. Over a couple of decades, the idea of “flexibility” was turned into a valuable characteristic that everyone was supposed to strive for. This resulted in increased workloads. Quite a few employees, and managers in particular, find themselves working both day and night, which may explain why not having enough time to spend with children and family members is a recurring theme in the data. Can this way of working be linked to masculinity and health? The first author's research has revealed that the use of gender in health science is not sufficiently or critically questioned. This lack of critical questioning is a serious problem, especially since ways of doing gender affect health. We suggest that gender reproduction and gender transformation are interconnected, regardless of how they affect health. They are recognized as two sides of the same phenomenon, and minor movements in one direction or the other become crucial for understanding their relation to health.
More or less at the same time as Jan's masculinity was reproduced in response to workplace practices, Jan's family position was transformed, not totally, but by a degree or two, and these degrees became significant for the family's health and well-being. By moving back and forth between varied events in Jan's biographical history and his sociohistorical life span, it becomes possible to show that, in a time of gender transformations, power relations can be renegotiated, with consequences for health. Keywords: changing faces of masculinity, gendered health, life-history research method, subverter
Procedia PDF Downloads 110
380 Partial Least Square Regression for High-Dimensional and Highly Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification. Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data
Procedia PDF Downloads 49379 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe
Authors: Ahmad Haidar
Abstract:
Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. 
In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation. Keywords: artificial intelligence, unemployment, macroeconomic analysis, european labor market
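The panel-data design described above can be sketched with a minimal within (fixed-effects) estimator applied by hand to a simulated country-by-year panel. The AI adoption index, effect size, and noise level below are invented for illustration and are not the study's data or model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical panel: 10 "countries" over 20 years.
# ai = AI adoption index, u = unemployment rate. True within effect: -0.3.
n_countries, n_years = 10, 20
country_fe = rng.normal(5.0, 1.0, size=n_countries)   # country fixed effects
ai = rng.uniform(0, 10, size=(n_countries, n_years))
u = country_fe[:, None] - 0.3 * ai + rng.normal(0, 0.2, size=(n_countries, n_years))

# Within (fixed-effects) estimator: demean each country's series, then OLS.
ai_dm = ai - ai.mean(axis=1, keepdims=True)
u_dm = u - u.mean(axis=1, keepdims=True)
beta = (ai_dm * u_dm).sum() / (ai_dm ** 2).sum()
print(f"estimated within effect of AI adoption on unemployment: {beta:.3f}")
```

Demeaning sweeps out each country's fixed effect, so the estimated slope isolates the within-country relationship the panel model targets.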
Procedia PDF Downloads 77378 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System
Authors: Abisha Isaac Mohanlal
Abstract:
Amidst all suspicions and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized the conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all areas of modern-day legal services. Its impact has been largely felt by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon pass. But as far as the Judiciary is concerned, even in the developed countries, the system has not fully drifted away from the orthodoxy of preferring Natural Intelligence over Artificial Intelligence. Since judicial decision-making involves many unstructured and rather unprecedented situations which have no single correct answer, and looming questions of legal interpretation arise in most cases, discretion and Emotional Intelligence play an unavoidable role. Added to that, there are several ethical, moral and policy issues to be confronted before permitting the intrusion of Artificial Intelligence into the judicial system. As of today, the human judge is the unrivalled master of most of the judicial systems around the globe. Yet, scientists of Artificial Intelligence claim that robot judges can replace human judges irrespective of how daunting the complexity of the issues is and how sophisticated the cognitive competence required is.
They go on to contend that even if the system is too rigid to allow robot judges to substitute human judges in the near future, Artificial Intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that Artificial Intelligence has to overcome in order to successfully invade the human-dominated judicial sphere, and critically evaluating the potential differences it would make in the system of justice delivery, the author tries to argue that penetration of Artificial Intelligence into the Judiciary could surely be enhancive and reparative, if not fully transformative. Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery
Procedia PDF Downloads 224377 Comparison of Monte Carlo Simulations and Experimental Results for the Measurement of Complex DNA Damage Induced by Ionizing Radiations of Different Quality
Authors: Ifigeneia V. Mavragani, Zacharenia Nikitaki, George Kalantzis, George Iliakis, Alexandros G. Georgakilas
Abstract:
Complex DNA damage, consisting of a combination of DNA lesions such as Double Strand Breaks (DSBs) and non-DSB base lesions occurring in a small volume, is considered one of the most important biological endpoints regarding ionizing radiation (IR) exposure. Strong theoretical (Monte Carlo simulations) and experimental evidence suggests an increment of the complexity of DNA damage, and therefore repair resistance, with increasing linear energy transfer (LET). Experimental detection of complex (clustered) DNA damage is often associated with technical deficiencies limiting its measurement, especially in cellular or tissue systems. Our groups have recently made significant improvements towards the identification of key parameters relating to the efficient detection of complex DSBs and non-DSBs in human cellular systems exposed to IR of varying quality (γ-, X-rays 0.3-1 keV/μm, α-particles 116 keV/μm and 36Ar ions 270 keV/μm). The induction and processing of DSB and non-DSB-oxidative clusters were measured using adaptations of immunofluorescence (γH2AX or 53BP1 foci staining as DSB probes and human repair enzymes OGG1 or APE1 as probes for oxidized purines and abasic sites, respectively). In the current study, Relative Biological Effectiveness (RBE) values for DSB and non-DSB induction have been measured in different human normal (FEP18-11-T1) and cancerous cell lines (MCF7, HepG2, A549, MO59K/J). The experimental results are compared to simulation data obtained using a validated microdosimetric fast Monte Carlo DNA Damage Simulation code (MCDS). Moreover, this simulation approach is implemented in two realistic clinical cases, i.e. prostate cancer treatment using X-rays generated by a linear accelerator and a pediatric osteosarcoma case using a 200.6 MeV proton pencil beam. RBE values for complex DNA damage induction are calculated for the tumor areas.
These results reveal a disparity between theory and experiment and underline the necessity for implementing highly precise and more efficient experimental and simulation approaches. Keywords: complex DNA damage, DNA damage simulation, protons, radiotherapy
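RBE for damage induction is conventionally taken as the ratio of damage yield per unit dose to that of a reference radiation (here γ-rays). A minimal sketch with purely hypothetical yield values, not the study's measurements:

```python
# RBE for damage induction: ratio of damage yields per unit dose relative
# to a reference radiation (gamma rays). Yields below are illustrative
# placeholders, not the study's data.
yields_per_gy = {          # hypothetical DSB foci per cell per Gy
    "gamma (reference)": 25.0,
    "alpha (116 keV/um)": 60.0,
    "Ar ions (270 keV/um)": 75.0,
}
ref = yields_per_gy["gamma (reference)"]
rbe = {quality: y / ref for quality, y in yields_per_gy.items()}
for quality, val in rbe.items():
    print(f"RBE({quality}) = {val:.2f}")
```

By construction the reference radiation has RBE = 1, and higher-LET qualities with larger damage yields per Gy come out with RBE > 1, mirroring the trend of increasing damage complexity with LET.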
Procedia PDF Downloads 325376 Differentiated Instruction for All Learners: Strategies for Full Inclusion
Authors: Susan Dodd
Abstract:
This presentation details the methodology for teachers to identify and support a population of students who have historically been overlooked with regard to their educational needs. The twice exceptional (2e) student is a learner who is considered gifted and also has a learning disability, as defined by the Individuals with Disabilities Education Act (IDEA). Many of these students remain underserved throughout their educational careers because their exceptionalities may mask each other, resulting in a special population of students who are not achieving to their fullest potential. There are three common scenarios that may make the identification of a 2e student challenging. First, the student may have been identified as gifted, and her disability may go unnoticed. She could also be considered an under-achiever, or she may be able to compensate for her disability until the schoolwork becomes more challenging. In the second scenario, the student may be identified as having a learning disability and is only receiving remedial services where his giftedness will not be highlighted. His overall IQ scores may be misleading because they were impacted by his learning disability. In the third scenario, the student is able to compensate for her disability well enough to maintain average scores, and she goes undetected as both gifted and learning disabled. Research in the area identifies the complexity involved in identifying 2e students, and how multiple forms of assessment are required. It is important for teachers to be aware of the common characteristics exhibited by many 2e students, so these learners can be identified and appropriately served. Once 2e students have been identified, teachers are then challenged to meet the varying needs of these exceptional learners. Strength-based teaching entails simultaneously providing gifted instruction as well as individualized accommodations for those students.
Research in this field has yielded strategies that have proven helpful for teaching 2e students, as well as other students who may be struggling academically. Differentiated instruction, while necessary in all classrooms, is especially important for 2e students, as is encouragement for academic success. Teachers who take the time to really know their students will have a better understanding of each student’s strengths and areas for growth, and can therefore tailor instruction to extend their intellectual capacities for optimal achievement. Teachers should also understand that some learning activities can prove very frustrating to students, and these activities can be modified based on individual student needs. Because 2e students can often become discouraged by their learning challenges, it is especially important for teachers to assist students in recognizing their own strengths and maintaining motivation for learning. Although research on the needs of 2e students has spanned two decades, this population remains underserved in many educational institutions. Teacher awareness of the identification of and the support strategies for 2e students is critical for their success. Keywords: gifted, learning disability, special needs, twice exceptional
Procedia PDF Downloads 179375 Improving Functionality of a Radiotherapy Department through Systematic Periodic Clinical Audits
Authors: Kamal Kaushik, Trisha, Dandapni, Sambit Nanda, A. Mukherjee, S. Pradhan
Abstract:
INTRODUCTION: As the complexity of radiotherapy practice and processes increases, there is a need to assure quality control to a greater extent. At present, no international literature is available with regard to the optimal quality control indicators for radiotherapy; moreover, few clinical audits have been conducted in the field of radiotherapy. The primary aim is to improve the processes that directly impact clinical outcomes for patients in terms of patient safety and quality of care. PROCEDURE: A team of an Oncologist, a Medical Physicist and a Radiation Therapist was formed for weekly clinical audits of patients undergoing radiotherapy. The stages audited include pre-planning, simulation, planning, daily QA, and implementation and execution (with image guidance). Errors in all parts of the chain were evaluated and recorded for the development of further departmental protocols for radiotherapy. EVALUATION: The errors at various stages of the radiotherapy chain were evaluated and recorded for comparison before and after starting the clinical audits in the department of radiotherapy. The data were also evaluated to find the stage in which the most errors were recorded. The clinical audits were used to structure standard protocols (in the form of checklists) in the department of Radiotherapy, which may further reduce the occurrence of clinical errors in the chain of radiotherapy. RESULTS: The aim of this study is to compare the number of errors in different parts of the RT chain in two groups (A: before audit; B: after audit). Group A: 94 pts. (48 males, 46 females), total no. of errors in RT chain: 19 (9 needed resimulation). Group B: 94 pts. (61 males, 33 females), total no. of errors in RT chain: 8 (4 needed resimulation). CONCLUSION: After systematic periodic clinical audits, the percentage of errors in the radiotherapy process was reduced by more than 50% within 2 months.
There is a great need for improving quality control in radiotherapy, and the role of clinical audits can only grow. Although clinical audits are time-consuming and complex undertakings, the benefits in terms of identifying and rectifying errors in quality control procedures are potentially enormous. Radiotherapy is a chain of processes, and there is always a probability of an error occurring in any part of the chain, which may then propagate through the chain up to the execution of treatment. Structuring departmental protocols and policies helps in reducing, if not completely eradicating, the occurrence of such incidents. Keywords: audit, clinical, radiotherapy, improving functionality
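The reported counts translate directly into error rates; a quick sketch checking the stated greater-than-50% relative reduction:

```python
# Error rates before and after introducing weekly clinical audits,
# using the patient and error counts reported in the abstract.
group_a = {"patients": 94, "errors": 19, "resimulations": 9}   # before audits
group_b = {"patients": 94, "errors": 8, "resimulations": 4}    # after audits

rate_a = group_a["errors"] / group_a["patients"]
rate_b = group_b["errors"] / group_b["patients"]
reduction = 1 - rate_b / rate_a
print(f"error rate before: {rate_a:.1%}, after: {rate_b:.1%}, "
      f"relative reduction: {reduction:.0%}")
```

With equal group sizes the relative reduction simplifies to 1 − 8/19 ≈ 58%, consistent with the conclusion that errors fell by more than half.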
Procedia PDF Downloads 88374 Impact of Mixing Parameters on Homogenization of Borax Solution and Nucleation Rate in Dual Radial Impeller Crystallizer
Authors: A. Kaćunić, M. Ćosić, N. Kuzmanić
Abstract:
Interaction between mixing and crystallization is often ignored despite the fact that it affects almost every aspect of the operation, including nucleation, growth, and maintenance of the crystal slurry. This is especially pronounced in multiple impeller systems, where flow complexity is increased. By choosing proper mixing parameters, which closely depends on knowledge of the hydrodynamics in a mixing vessel, the process of batch cooling crystallization may be considerably improved. The values that render useful information when making this choice are mixing time and power consumption. The predominant motivation for this work was to investigate the extent to which a radial dual impeller configuration influences mixing time, power consumption and, consequently, the values of metastable zone width and nucleation rate. In this research, crystallization of borax was conducted in a 15 dm³ baffled batch cooling crystallizer with an aspect ratio (H/T) of 1.3. Mixing was performed using two straight blade turbines (4-SBT) mounted on the same shaft that generated radial fluid flow. Experiments were conducted at different values of the N/NJS ratio (impeller speed/minimum impeller speed for complete suspension), D/T ratio (impeller diameter/crystallizer diameter), c/D ratio (lower impeller off-bottom clearance/impeller diameter), and s/D ratio (spacing between impellers/impeller diameter). Mother liquor was saturated at 30°C and was cooled at the rate of 6°C/h. Its concentration was monitored in line by a Na-ion-selective electrode. From the values of supersaturation that were monitored continuously over process time, it was possible to determine the metastable zone width and subsequently the nucleation rate using Mersmann’s nucleation criterion. For all applied dual impeller configurations, the mixing time was determined by a potentiometric method using a pulse technique, while the power consumption was determined using a torque meter produced by Himmelstein & Co.
Results obtained in this investigation show that the dual impeller configuration significantly influences the values of mixing time and power consumption, as well as the metastable zone width and nucleation rate. Special attention should be paid to impeller spacing, considering the flow interaction that can be more or less pronounced depending on the spacing value. Keywords: dual impeller crystallizer, mixing time, power consumption, metastable zone width, nucleation rate
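Power consumption for such turbines is commonly estimated in the turbulent regime from the power-number correlation P = Np·ρ·N³·D⁵. The sketch below uses assumed values (density, impeller diameter, speed, and a typical literature power number for a straight-blade turbine) for illustration only, not the torque-meter measurements reported here:

```python
# Impeller power draw in the turbulent regime via the power-number
# correlation P = Np * rho * N^3 * D^5. All numbers are illustrative.
def power_draw(power_number, rho, n_rps, d_m):
    """Power consumption (W) for one impeller in the turbulent regime."""
    return power_number * rho * n_rps ** 3 * d_m ** 5

rho = 1350.0      # borax solution density, kg/m^3 (assumed)
d = 0.08          # impeller diameter, m (assumed)
n = 5.0           # impeller speed, rev/s (assumed)
np_sbt = 2.5      # power number for a straight-blade turbine (typical value)

p_single = power_draw(np_sbt, rho, n, d)
# With large impeller spacing, the draws of two impellers are roughly additive:
p_dual = 2 * p_single
print(f"single impeller: {p_single:.2f} W, dual (additive estimate): {p_dual:.2f} W")
```

The additive dual-impeller estimate is itself spacing-dependent in practice: at small s/D the flow fields interact and the combined draw falls below twice the single-impeller value, which is one reason spacing matters in the study's configuration.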
Procedia PDF Downloads 296373 Saving the Decolonized Subject from Neglected Tropical Diseases: Public Health Campaign and Household-Centred Sanitation in Colonial West Africa, 1900-1960
Authors: Adebisi David Alade
Abstract:
In pre-colonial West Africa, the deadliness of the climate vis-à-vis malaria and other tropical diseases to Europeans turned the region into the “white man’s grave.” Thus, immediately after the partition of Africa in 1885, the mission civilisatrice (civilizing mission) and mise en valeur (economic development) not only became a pretext for the establishment of colonial rule; from a medical point of view, the control and possible eradication of disease in the continent also emerged as one of the first concerns of the European colonizers. Though geared toward making Africa exploitable, historical evidence suggests that some colonial Water, Sanitation and Hygiene (WASH) policies and projects reduced certain tropical diseases in some West African communities. Exploring some of these disease control interventions by way of historical revisionism, this paper challenges the orthodox interpretation of colonial sanitation and public health measures in West Africa. This paper critiques the deployment of race and class as analytical tools for the study of colonial WASH projects, an exercise which often reduces the complexity and ambiguity of colonialism to the binary of colonizer and colonized. Since West Africa presently ranks high among regions with Neglected Tropical Diseases (NTDs), it is imperative to decentre colonial racism and economic exploitation in African history in order to give room for Africans to see themselves in other ways. Far from resolving the problem of NTDs by fiat in the region, this study seeks to highlight important blind spots in African colonial history in an attempt to prevent post-colonial African leaders from throwing the baby out with the bath water.
As scholars researching colonial sanitation and public health in the continent rarely examine its complex meaning and content, this paper submits that the outright demonization of colonial rule across space and time continues to build an ideological wall between the present and the past, which not only inhibits fruitful borrowing from the colonial administration of West Africa but also prevents a wider understanding of the challenges of WASH policies and projects in most West African states. Keywords: colonial rule, disease control, neglected tropical diseases, WASH
Procedia PDF Downloads 187372 Exercise and Geriatric Depression: a Scoping Review of the Research Evidence
Authors: Samira Mehrabi
Abstract:
Geriatric depression is a common late-life mental health disorder that increases morbidity and mortality. It has been shown that exercise is effective in alleviating symptoms of geriatric depression. However, inconsistencies across studies and the lack of an optimal dose-response of exercise for improving geriatric depression have made it challenging to draw solid conclusions on the effectiveness of exercise in late-life depression. Purpose: To further investigate the moderators of the effectiveness of exercise on geriatric depression across the current body of evidence. Methods: Based on the Arksey and O’Malley framework, an extensive search strategy was performed by exploring PubMed, Scopus, Sport Discus, PsycInfo, ERIC, and IBSS without limitations in the time frame. Eight systematic reviews with empirical results that evaluated the effect of exercise on depression among people aged ≥ 60 years were identified, and their individual studies were screened for inclusion. One additional study was found through hand searching of reference lists. After full-text screening and applying inclusion and exclusion criteria, 21 studies were retained for inclusion. Results: The review revealed high variability in characteristics of the exercise interventions and outcome measures. Sample characteristics, nature of comparators, main outcome assessment, and baseline severity of depression also varied notably. Mind-body and aerobic exercises were found to significantly reduce geriatric depression. However, results on the relationship between resistance training and improvements in geriatric depression were inconsistent, and results on the intensity-related antidepressant effects of exercise interventions were mixed. Extensive use of self-reported questionnaires for the main outcome assessment and the lack of evidence on the relationship between depression severity and observed effects were among the other important highlights of the review.
Conclusion: Several literature gaps were found regarding the potential effect modifiers of exercise and geriatric depression. While acknowledging the complexity of establishing recommendations on the exercise variables and geriatric depression, future studies are required to understand the interplay and threshold effect of exercise for treating geriatric depression. Keywords: exercise, geriatric depression, healthy aging, older adults, physical activity intervention, scoping review
Procedia PDF Downloads 107371 Engineering a Tumor Extracellular Matrix Towards an in vivo Mimicking 3D Tumor Microenvironment
Authors: Anna Cameron, Chunxia Zhao, Haofei Wang, Yun Liu, Guang Ze Yang
Abstract:
Since the first publication in 1775, cancer research has built a comprehensive understanding of how cellular components of the tumor niche promote disease development. However, only within the last decade has research begun to establish the impact of non-cellular components of the niche, particularly the extracellular matrix (ECM). The ECM, a three-dimensional scaffold that sustains the tumor microenvironment, plays a crucial role in disease progression. Cancer cells actively deregulate and remodel the ECM to establish a tumor-promoting environment. Recent work has highlighted the need to further our understanding of the complexity of this cancer-ECM relationship. In vitro models use hydrogels to mimic the ECM, as hydrogel matrices offer biological compatibility and stability needed for long term cell culture. However, natural hydrogels are being used in these models verbatim, without tuning their biophysical characteristics to achieve pathophysiological relevance, thus limiting their broad use within cancer research. The biophysical attributes of these gels dictate cancer cell proliferation, invasion, metastasis, and therapeutic response. Evaluating the three most widely used natural hydrogels, Matrigel, collagen, and agarose gel, the permeability, stiffness, and pore-size of each gel were measured and compared to the in vivo environment. The pore size of all three gels fell between 0.5-6 µm, which coincides with the 0.1-5 µm in vivo pore size found in the literature. However, the stiffness for hydrogels able to support cell culture ranged between 0.05 and 0.3 kPa, which falls outside the range of 0.3-20,000 kPa reported in the literature for an in vivo ECM. Permeability was ~100x greater than in vivo measurements, due in large part to the lack of cellular components which impede permeation. 
These measurements nonetheless prove important when assessing therapeutic particle delivery, as the ECM permeability decreased with increasing particle size, with 100 nm particles exhibiting a fifth of the permeability of 10 nm particles. This work explores ways of adjusting the biophysical characteristics of hydrogels by changing protein concentration, and the trade-off which occurs due to the interdependence of these factors. The global aim of this work is to produce a more pathophysiologically relevant model for each tumor type. Keywords: cancer, extracellular matrix, hydrogel, microfluidic
Procedia PDF Downloads 91370 Mathematical Study of CO₂ Dispersion in Carbonated Water Injection Enhanced Oil Recovery Using Non-Equilibrium 2D Simulator
Authors: Ahmed Abdulrahman, Jalal Foroozesh
Abstract:
CO₂ based enhanced oil recovery (EOR) techniques have gained massive attention from major oil firms since they address the industry's two main concerns: CO₂'s contribution to the greenhouse effect and declining oil production. Carbonated water injection (CWI) is a promising EOR technique that promotes safe and economic CO₂ storage; moreover, it mitigates the pitfalls of CO₂ injection, which include low sweep efficiency, early CO₂ breakthrough, and the risk of CO₂ leakage in fractured formations. One of the main challenges that hinder the wide adoption of this EOR technique is the complexity of accurately modeling the kinetics of CO₂ mass transfer. The mechanisms of CO₂ mass transfer during CWI include the slow and gradual cross-phase CO₂ diffusion from carbonated water (CW) to the oil phase and the CO₂ dispersion (within-phase diffusion and mechanical mixing), which affects the oil physical properties and the spatial spreading of CO₂ inside the reservoir. A 2D non-equilibrium compositional simulator has been developed using a fully implicit finite difference approximation. The material balance term (k) was added to the governing equation to account for the slow cross-phase diffusion of CO₂ from CW to the oil within the grid cell. Also, longitudinal and transverse dispersion coefficients have been added to account for CO₂ spatial distribution inside the oil phase. The CO₂-oil diffusion coefficient was calculated using the Sigmund correlation, while a scale-dependent dispersivity was used to calculate CO₂ mechanical mixing. It was found that the CO₂-oil diffusion mechanism has a minor impact on oil recovery, but it tends to increase the amount of CO₂ stored inside the formation and slightly alters the residual oil properties. On the other hand, the mechanical mixing mechanism has a huge impact on CO₂ spatial spreading (accurate prediction of CO₂ production), and the noticeable change in oil physical properties tends to increase the recovery factor.
A sensitivity analysis has been done to investigate the effect of formation heterogeneity (porosity, permeability) and injection rate; it was found that formation heterogeneity tends to increase CO₂ dispersion coefficients and that a low injection rate should be implemented during CWI. Keywords: CO₂ mass transfer, carbonated water injection, CO₂ dispersion, CO₂ diffusion, cross phase CO₂ diffusion, within phase CO₂ diffusion, CO₂ mechanical mixing, non-equilibrium simulation
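The transport mechanisms described above can be illustrated with a one-dimensional toy version of the governing equation, C_t = D·C_xx − v·C_x, discretized with an explicit upwind finite-difference scheme. The grid, velocity, and dispersion coefficient are illustrative placeholders, not the paper's 2D compositional formulation or reservoir data:

```python
import numpy as np

# 1D advection-dispersion of injected CO2-saturated water as a toy analogue
# of the concentration equation solved (in 2D, compositionally) by the simulator.
nx, dx, dt = 100, 1.0, 0.1
v, disp = 1.0, 0.5          # interstitial velocity and dispersion coefficient
c = np.zeros(nx)
c[0] = 1.0                  # carbonated water injected at the inlet

for _ in range(300):        # march to t = 30
    cx = (c[1:-1] - c[:-2]) / dx                      # upwind advection term
    cxx = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2    # dispersion term
    c[1:-1] += dt * (disp * cxx - v * cx)
    c[0] = 1.0                                        # hold inlet boundary

front = int(np.argmin(np.abs(c - 0.5)))
print(f"C = 0.5 front position after t = 30: x = {front}")
```

The front advects at roughly x = v·t while dispersion smears it out, which is the qualitative behavior behind the abstract's point that mechanical mixing controls the spatial spreading of CO₂. The explicit scheme is stable here because both the CFL number (v·dt/dx = 0.1) and the diffusion number (D·dt/dx² = 0.05) are small.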
Procedia PDF Downloads 176369 Influence of Smoking on Fine And Ultrafine Air Pollution Pm in Their Pulmonary Genetic and Epigenetic Toxicity
Authors: Y. Landkocz, C. Lepers, P.J. Martin, B. Fougère, F. Roy Saint-Georges, A. Verdin, F. Cazier, F. Ledoux, D. Courcot, F. Sichel, P. Gosset, P. Shirali, S. Billet
Abstract:
In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and fine particles as carcinogenic to humans. Causal relationships exist between elevated ambient levels of airborne particles and increases in mortality and morbidity, including pulmonary diseases like lung cancer. However, due to the double complexity of both physicochemical Particulate Matter (PM) properties and tumor mechanistic processes, mechanisms of action remain not fully elucidated. Furthermore, because of several common properties between air pollution PM and tobacco smoke, like the same route of exposure and chemical composition, potential mechanisms of synergy could exist. Therefore, smoking could be an aggravating factor of the particles' toxicity. In order to identify some mechanisms of action of particles according to their size, two samples of PM were collected in the urban-industrial area of Dunkerque: PM0.03-2.5 and PM0.33-2.5. The overall cytotoxicity of the fine particles was determined on human bronchial cells (BEAS-2B). The toxicological study then focused on the metabolic activation of the organic compounds coated onto PM and some genetic and epigenetic changes induced on a co-culture model of BEAS-2B and alveolar macrophages isolated from bronchoalveolar lavages performed in smokers and non-smokers. The results showed (i) the contribution of the ultrafine fraction of atmospheric particles to genotoxic (e.g. DNA double-strand breaks) and epigenetic mechanisms (e.g. promoter methylation) involved in tumor processes, and (ii) the influence of smoking on the cellular response. Three main conclusions can be discussed. First, our results showed the ability of the particles to induce deleterious effects potentially involved in the stages of initiation and promotion of carcinogenesis. The second conclusion is that smoking affects the nature of the induced genotoxic effects.
Finally, the cell model developed in vitro, using bronchial epithelial cells and alveolar macrophages, can take into account quite realistically some of the cell interactions existing in the lung. Keywords: air pollution, fine and ultrafine particles, genotoxic and epigenetic alterations, smoking
Procedia PDF Downloads 347368 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead; inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills, to the best of our knowledge, a niche in the literature of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation.
The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk. Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
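The core of the COS method can be sketched in a few lines: expand the density on a truncated interval in a Fourier-cosine series whose coefficients come directly from the characteristic function. The sketch below recovers a standard normal density as a test case; in the paper's setting the characteristic function would instead be that of the (conditional) portfolio loss, so this illustrates the general technique, not the authors' implementation:

```python
import numpy as np

def cos_density(phi, x, a, b, n_terms=64):
    """Approximate a density on [a, b] from its characteristic function phi
    via the COS (Fourier-cosine) expansion."""
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    # Series coefficients from the characteristic function; the k = 0 term
    # enters the cosine series with weight 1/2.
    f_k = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    f_k[0] *= 0.5
    return np.cos(np.outer(x - a, u)) @ f_k

def phi_normal(u):
    """Characteristic function of the standard normal distribution."""
    return np.exp(-0.5 * u ** 2)

x = np.linspace(-3, 3, 7)
approx = cos_density(phi_normal, x, a=-10.0, b=10.0)
exact = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
print("max abs error:", np.max(np.abs(approx - exact)))
```

Because the coefficients decay with the characteristic function, a few dozen terms already give near machine-precision accuracy here; the CDF, and hence metrics such as Value-at-Risk, follow by integrating the cosine series term by term, which is likewise semi-analytic.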
Procedia PDF Downloads 166
367 Split Health System for Diabetes Care in Urban Area: Experience from an Action Research Project in an Urban Poor Neighborhood in Bengaluru
Authors: T. S. Beerenahally, S. Amruthavalli, C. M. Munegowda, Leelavathi, Nagarathna
Abstract:
Introduction: In most of urban India, the health system is split between different authorities responsible for the health care of the urban population. We believe that, apart from poor awareness and financial barriers to care, there are other health system barriers that affect quality of and access to care for people with diabetes. In this paper, we attempt to identify the health system complexities that determine access to the public health system for diabetes care in KG Halli, a poor urban neighborhood in Bengaluru. KG Halli was the locus of health systems research from 2009 to 2015. Methodology: The data source is the observational field notes written by the research team as part of the urban health action research project (UHARP). The field notes included data from the community and the public primary care center. The data were generated by community health assistants and other research team members during regular home visits and interactions, over four years, with individuals who self-reported as diabetic. Results: During data analysis, it emerged that patients were not keen on utilizing the public primary health center for many reasons. Patients felt that the services provided at the center were not integrated. Medicines were often unavailable, with regular stock-outs during the year, and laboratory services for investigations were limited. Many said that the time given by the providers was not sufficient and that the providers did not listen to them attentively. Power dynamics played a huge role in communication. Only the consultation was free of cost at the public primary care center; patients had to pay for investigations and for most of the medicines. Conclusion: Diabetes is a chronic disease that poses an important emerging public health concern.
Most of the financial burden is borne by the family, as public facilities have failed to provide free care in India. Our study indicated various factors, including individual beliefs, stigma, and financial constraints, that affect compliance with diabetes care. Keywords: diabetes care, disintegrated health system, quality of care, urban health
Procedia PDF Downloads 160
366 Money Laundering and Terror Financing in the Islamic Banking Sector in Bangladesh
Authors: Md. Abdul Kader
Abstract:
Several reports released by Global Financial Integrity (GFI) in recent times have identified Bangladesh as among the countries worst affected by the scourge of money laundering (ML) and terrorist financing (TF). The ML and TF risks associated with conventional finance are generally well identified and understood by the relevant national authorities. There is, however, no common understanding of the ML/TF risks associated with Islamic banking. This paper examines the issues of ML and TF in the Islamic banks of Bangladesh. It also investigates the risk factors in the Islamic banking system of Bangladesh that are favorable to ML and TF and that prevent the government from controlling such issues in the country's Islamic banks. Qualitative research methods were employed, drawing on reports from journals, newspapers, bank publications, and periodicals. In addition, five ex-bankers who had served on the policy-making bodies of three Islamic banks were interviewed. The findings suggest that government policies regarding the Islamic banking system in Bangladesh are not well defined or clear. Shariah law, the guiding principle of Islamic banking, is not well recognized by government policy makers, who have left the responsibility to the governing bodies of the banks. Other challenges identified in the study include the complexity of some Islamic banking products, the different forms of relationship between the banks and their clients, and the inadequate ability and skill, particularly in some jurisdictions, to supervise Islamic finance and evaluate its activities. All these risk factors pave the ground for ML and TF in the Islamic banks of Bangladesh. However, owing to the unconventional nature of this banking and the lack of investigative reporting on Islamic banking, this study could not cover the whole picture of ML/TF in the Islamic banks of Bangladesh.
However, both the qualitative documents and the interviewees confirmed that Islamic banking in Bangladesh can be branded as risky when it comes to money laundering and terror financing. This study recommends that the central bank authorities who supervise Islamic finance and the government policy makers obtain a greater understanding of the specific ML/TF risks that may arise in Islamic banks and develop a proper response. The findings are expected to help Islamic banking management and policymakers develop strong and appropriate policies to enhance transparency, accountability, and efficiency in the banking sector. Regulatory bodies can also draw on the findings to disseminate rules and regulations against money laundering and terror financing. Keywords: money laundering, terror financing, Islamic banking, Bangladesh
Procedia PDF Downloads 94
365 Hybrid Strategies of Crisis Intervention for Sexualized Violence Using Digital Media
Authors: Katharina Kargel, Frederic Vobbe
Abstract:
Sexualized violence against children and adolescents using digital media poses particular challenges for practitioners with a focus on crisis intervention (social work, psychotherapy, law enforcement). The technological delimitation of violence increases the burden on those affected and increases the complexity of interdisciplinary cooperation. Urgently needed recommendations for practical action do not yet exist in Germany. Funded by the Federal Ministry of Education and Research, such recommendations for action are being developed in the HUMAN project together with science and practice. The presentation introduces the participatory approach of the HUMAN project. We discuss the application-oriented, casuistic approach of the project and present its results using the example of concrete case-based recommendations for action. Participants will be presented with prototypical case studies from the project, which will be used to illustrate quality criteria for crisis intervention in cases of sexualized violence using digital media. On the basis of case analyses, focus group interviews, and interviews with victims of violence, we present the six central challenges of sexualized violence with the use of digital media, namely: • diffusion (ambiguities regarding the extent and significance of the violence), • transcendence (space and time independence of the dynamics of violence, omnipresence), • omnipresent anxiety (in light of diffusion and transcendence), • being haunted (repeated confrontation with digital memories of the violence or the perpetrator), • disparity (conflicts of interpretative power between those affected and their social environment), and • simultaneity (of all the other factors). We point out generalizable principles with which these challenges can be dealt with professionally. Dealing professionally with sexualized violence using digital media requires stronger networking of professional actors.
A clear distinction must be made between one's own mission and the mission of the network partners. Those affected by violence must be shown options for crisis intervention within the aid networks. The different competencies and the professional missions of the support services must be made transparent. The necessity of technical means for deleting abuse images beyond criminal prosecution is also discussed. Those affected are stabilized by multimodal strategies, such as a combination of rational emotive therapy, legal support, and technical assistance. Keywords: sexualized violence, intervention, digital media, children and youth
Procedia PDF Downloads 233
364 Investigating the Effect of Orthographic Transparency on Phonological Awareness in Bilingual Children with Dyslexia
Authors: Sruthi Raveendran
Abstract:
Developmental dyslexia, characterized by reading difficulties despite normal intelligence, presents a significant challenge for bilingual children navigating languages with varying degrees of orthographic transparency. This study bridges a critical gap in dyslexia interventions for bilingual populations in India by examining how the consistency and predictability of letter-sound relationships in a writing system (orthographic transparency) influence the ability to understand and manipulate the building blocks of sound in language (phonological processing). The study employed a computerized visual rhyme-judgment task with concurrent EEG (electroencephalogram) recording. The task compared reaction times, accuracy, and event-related potential (ERP) components (N170, N400, and LPC) for rhyming and non-rhyming stimuli in two orthographies: English (an opaque orthography) and Kannada (a transparent orthography). As hypothesized, the results revealed an advantage for the transparent orthography (Kannada) in phonological processing tasks. Children with dyslexia were faster and more accurate when judging rhymes in Kannada than in English. This suggests that a language with consistent letter-sound relationships facilitates processing, especially for tasks that involve manipulating sounds within words, such as rhyming. Furthermore, brain activity showed that less effort was required for processing words in Kannada, as reflected by smaller N170, N400, and LPC amplitudes. These findings highlight the crucial role of orthographic transparency in optimizing reading performance for bilingual children with dyslexia and emphasize the need for intervention strategies that consider the unique linguistic characteristics of each language.
While acknowledging the complexity of factors influencing dyslexia, this research contributes valuable insights into the impact of orthographic transparency on phonological awareness in bilingual children. This knowledge paves the way for developing tailored interventions that promote linguistic inclusivity and optimize literacy outcomes for children with dyslexia. Keywords: developmental dyslexia, phonological awareness, rhyme judgment, orthographic transparency, Kannada, English, N170, N400, LPC
Procedia PDF Downloads 7
363 The Impact of Artificial Intelligence on Digital Factory
Authors: Mona Awad Wanis Gad
Abstract:
The method of factory planning has changed a great deal, particularly when it comes to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Regulations in new areas, shorter life cycles of products and manufacturing technology, as well as a VUCA world (Volatility, Uncertainty, Complexity, and Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning foundation for rebuilding measures and becomes a critical tool. Furthermore, digital building models are increasingly being used in factories to support facility management and manufacturing processes. First, different types of digital factory models are investigated, and their properties and usability for various use cases are analyzed. Within the scope of the investigation, point cloud models, building information models, photogrammetry models, and models enriched with sensor data are examined. It is investigated which digital models permit a simple integration of sensor data and where the differences lie. Then, possible application areas of digital factory models are determined by means of a survey, and the respective digital factory models are assigned to these application areas. Finally, an application case from maintenance is selected and implemented with the help of the most suitable digital factory model. It is shown how a fully digitalized maintenance process can be supported by a digital factory model through the provision of data. Among other functions, the digital factory model is used for indoor navigation, data provision, and the display of sensor data.
In summary, the paper proposes a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is presented and implemented, and the systematic selection of digital factory models with the corresponding application cases is evaluated. Keywords: augmented reality, digital factory model, factory planning, restructuring, photogrammetry, building information modeling, maintenance
Procedia PDF Downloads 37