Search results for: mixed models
223 Shocks and Flows - Employing a Difference-In-Difference Setup to Assess How Conflicts and Other Grievances Affect the Gender and Age Composition of Refugee Flows towards Europe
Authors: Christian Bruss, Simona Gamba, Davide Azzolini, Federico Podestà
Abstract:
In this paper, the authors assess the impact of different political and environmental shocks on the size of asylum-related migration flows to Europe and, as a contribution to the literature, on the age and gender composition of these flows. Conflicting theories predict different outcomes concerning the relationship between political and environmental shocks and the composition of migration flows. Analyzing the relationship between the causes of migration and the composition of migration flows could yield more insights into the mechanisms behind migration decisions. In addition, this research may help better inform the national authorities in charge of receiving these migrants, as women, children, and the elderly require different assistance than young men. To be prepared to offer the correct services, the relevant institutions have to be aware of changes in composition based on the shock in question. The authors analyze the effect of different types of shocks on the number and the gender and age composition of first-time asylum seekers originating from 154 sending countries. Among the political shocks, the authors consider violence between combatants, violence against civilians, infringement of political rights and civil liberties, and state terror. Concerning environmental shocks, natural disasters (such as droughts, floods, and epidemics) have been included. The data on asylum seekers applying to any of the 32 Schengen Area countries between 2008 and 2015 are on a monthly basis.
Data on asylum applications come from Eurostat, data on shocks are retrieved from various sources: georeferenced conflict data come from the Uppsala Conflict Data Program (UCDP), data on natural disasters from the Centre for Research on the Epidemiology of Disasters (CRED), data on civil liberties and political rights from Freedom House, data on state terror from the Political Terror Scale (PTS), GDP and population data from the World Bank, and georeferenced population data from the Socioeconomic Data and Applications Center (SEDAC). The authors adopt a Difference-in-Differences identification strategy, exploiting the different timing of several kinds of shocks across countries. The highly skewed distribution of the dependent variable is taken into account by using count data models. In particular, a Zero Inflated Negative Binomial model is adopted. Preliminary results show that different shocks - such as armed conflict and epidemics - exert weak immediate effects on asylum-related migration flows and almost non-existent effects on the gender and age composition. However, this result is certainly affected by the fact that no time lags have been introduced so far. Finding the correct time lags depends on a great many variables not limited to distance alone. Therefore, finding the appropriate time lags is still a work in progress. Considering the ongoing refugee crisis, this topic is more important than ever. The authors hope that this research contributes to a less emotionally led debate.

Keywords: age, asylum, Europe, forced migration, gender
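The 2x2 difference-in-differences logic underlying the identification strategy can be sketched in a few lines of Python. All numbers below are invented for illustration; the actual study fits a zero-inflated negative binomial model over a country-month panel.

```python
# A minimal sketch of the difference-in-differences logic described above,
# using hypothetical monthly asylum-application counts (all numbers invented).

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Classic 2x2 DiD: change in the treated group minus change in the control group."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical average monthly first-time applications per country:
treated_pre, treated_post = 120.0, 310.0   # countries experiencing a shock
control_pre, control_post = 100.0, 130.0   # countries with no shock

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(effect)  # 160.0
```

Under the parallel-trends assumption, the control-group change nets out common time effects, leaving the shock's contribution to the flow.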
Procedia PDF Downloads 262

222 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption towards energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. To fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector for the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
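As a rough illustration of the unsupervised matching step, a textbook dynamic time warping distance can be implemented as follows. The signatures and the event window below are invented for illustration; the actual pipeline matches richer power, geometrical and statistical features.

```python
# A minimal dynamic time warping (DTW) sketch for matching an event window
# against stored appliance signatures (all values invented for illustration).

def dtw_distance(a, b):
    """DTW distance between two 1-D power sequences (absolute-difference cost)."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow match, insertion, or deletion along the warping path.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A measured event window is assigned to the closest stored signature:
signatures = {"fridge": [100, 100, 5], "kettle": [2000, 2000, 0]}
window = [95, 102, 4]
best = min(signatures, key=lambda k: dtw_distance(window, signatures[k]))
print(best)  # fridge
```

Because DTW tolerates temporal stretching, two occurrences of the same appliance cycle with slightly different durations still map to a small distance, which is what makes it usable without labeled training data.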
221 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network
Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert
Abstract:
The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% compared to 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020-funded E-Lobster project, which demonstrates tools and technologies, software and hardware, for integrating the electricity distribution grid and railway power systems using power electronics technologies (Smart Soft Open Point - sSOP) and local energy storage. In this context, this paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Taking into account the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, Energy Storage Systems (ESSs) have been gaining importance due to emerging applications, especially the electrification of the transportation sector and the grid integration of volatile renewables. The need for storage systems has led to improvements in ESS performance and a significant price decline. This opens a new market in which ESSs can be a reliable and economical solution. One such emerging market for ESSs is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa). However, this would usually happen at unfavourable instants, when the recipient does not need additional power. Thus, the role of the ESS is to enhance the advantages coming from the interconnection of railway power systems and distribution grids by offering an additional energy buffer.
Consequently, the surplus or deficit of energy in, for example, railway power systems need not be immediately transferred to or from the distribution grid but can be stored and used when it is really needed. This assures better energy-management exchange between the railway power systems and distribution grids and leads to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States and stakeholders, and their analysis, indicate some trends, challenges, opportunities and structural changes needed to design the policy measures that provide the appropriate framework for investors. This study will be used as a reference for the discussion in the envisaged workshops with stakeholders (DSOs and transport managers) in the E-Lobster project.

Keywords: light railway, electrical distribution network, electrical energy storage, policy
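The buffering role of the ESS described above can be illustrated with a toy model. The class name, capacity and energy values below are invented for illustration and are not part of the E-Lobster design.

```python
# A toy sketch of the energy-buffering idea: surplus from metro braking is
# stored rather than pushed to the grid at an unfavourable instant, and is
# released later when the railway system needs it (all values invented).

class EnergyBuffer:
    def __init__(self, capacity_kwh):
        self.capacity = capacity_kwh
        self.stored = 0.0

    def absorb(self, surplus_kwh):
        """Store braking surplus; return what could not be accepted (spill)."""
        accepted = min(surplus_kwh, self.capacity - self.stored)
        self.stored += accepted
        return surplus_kwh - accepted

    def supply(self, demand_kwh):
        """Cover demand from storage; return the remainder the grid must supply."""
        delivered = min(demand_kwh, self.stored)
        self.stored -= delivered
        return demand_kwh - delivered

ess = EnergyBuffer(capacity_kwh=50.0)
spill = ess.absorb(30.0)        # metro braking event: 30 kWh surplus
grid_draw = ess.supply(20.0)    # later railway demand: 20 kWh
print(spill, grid_draw, ess.stored)  # 0.0 0.0 10.0
```

In this sketch, the grid neither receives the ill-timed surplus nor supplies the later demand, which is the loss-reduction mechanism the interconnection aims at.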
220 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach
Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi
Abstract:
Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since it minimizes waiting times for passengers at different stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information services to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, several models for predicting bus travel times have been developed recently, but most of them focus on smaller road networks because of their relatively subpar performance on vast, high-density urban networks. This paper develops a deep learning-based architecture using a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network, using heterogeneous bus transit data collected from the GTFS database. Data was gathered from multiple bus routes in Saint Louis, Missouri, over one week. In this study, a Gated Recurrent Unit (GRU) neural network was used to predict the mean vehicle travel times at different hours of the day for multiple stations along multiple routes. The number of historical time steps and the prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and the historical average travel times were taken from the dataset as model input parameters. The station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was used for the temporal inputs.
Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included to make the model more robust. The model's performance was evaluated using the mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model predicted travel times more accurately during peak traffic hours, with a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in high-density urban areas, the model showed its applicability for real-time travel time prediction of public transportation and ensured the high quality of the predictions generated by the model.

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction
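The MAPE metric used to evaluate the predictions can be sketched as follows. The travel-time values are invented for illustration and are not taken from the Saint Louis dataset.

```python
# A minimal sketch of the mean absolute percentage error (MAPE) used above
# to score travel-time predictions (values invented for illustration).

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    ) / len(actual)

# Hypothetical average travel times (minutes) for one route over four hours:
actual = [30.0, 28.0, 35.0, 40.0]
predicted = [33.0, 27.0, 38.0, 36.0]
print(round(mape(actual, predicted), 1))  # 8.0
```

Because each error is normalized by the actual travel time, MAPE lets routes with very different trip lengths be compared on a common percentage scale, which suits a multi-route network.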
219 Beyond Personal Evidence: Using Learning Analytics and Student Feedback to Improve Learning Experiences
Authors: Shawndra Bowers, Allie Brandriet, Betsy Gilbertson
Abstract:
This paper will highlight how Auburn Online's instructional designers leveraged student and faculty data to update and improve online course design and instructional materials. When designing and revising online courses, it can be difficult for faculty to know which strategies are most likely to engage learners and improve educational outcomes in a specific discipline. It can also be difficult to identify which metrics are most useful for understanding and improving teaching, learning, and course design. At Auburn Online, the instructional designers use a suite of data on students' performance, participation, satisfaction, and engagement, as well as faculty perceptions, to inform the sound learning and design principles that guide growth-mindset consultations with faculty. The consultations allow the instructional designer, along with the faculty member, to co-create an actionable course improvement plan. Auburn Online gathers learning analytics from a variety of sources that any instructor or instructional design team may have access to at their own institution. Participation and performance data, such as page views, assignment submissions, and aggregate grade distributions, are collected from the learning management system. Engagement data is pulled from the video hosting platform, which includes unique viewers, views and downloads, minutes delivered, and the average duration for which each video is viewed. Student satisfaction is also gauged through a short survey that is embedded at the end of each instructional module. This survey is included in each course every time it is taught. The survey data is then analyzed by an instructional designer for trends and pain points in order to identify areas that can be modified, such as course content and instructional strategies, to better support student learning.
This analysis, along with the instructional designer's recommendations, is presented in a comprehensive report to instructors in an hour-long consultation in which instructional designers collaborate with the faculty member on how and when to implement improvements. Auburn Online has developed a triage strategy of priority 1 and priority 2 level changes to be implemented in future course iterations. This data-informed decision-making process helps instructors focus on what will work best in their teaching environment while addressing which areas need additional attention. As a student-centered process, it has created improved learning environments for students and has been well received by faculty. It has also proven effective in addressing the need for improvement while removing the feeling that the faculty member's teaching is being personally attacked. The process that Auburn Online uses is laid out, along with the three-tier maintenance and revision guide that will be used over a three-year implementation plan. This information can help others determine which components of the maintenance and revision plan they want to utilize, as well as guide them in creating a similar approach. The data will be used to analyze, revise, and improve courses by providing recommendations and models of good practice, determining and disseminating best practices that demonstrate an impact on student success.

Keywords: data-driven, improvement, online courses, faculty development, analytics, course design
218 Design of Smart Catheter for Vascular Applications Using Optical Fiber Sensor
Authors: Lamiek Abraham, Xinli Du, Yohan Noh, Polin Hsu, Tingting Wu, Tom Logan, Ifan Yen
Abstract:
In the field of minimally invasive surgery, smart medical instruments such as catheters and guidewires are typically operated at a remote distance to gain access to the diseased artery, often negotiating tortuous, complex, and diseased vessels in the process. Three optical fiber sensors, each with a diameter of 1.5 mm and spaced 120° apart, are proposed to be mounted into a catheter-based pump device with a diameter of 10 mm. These sensors are configured to address the challenges surgeons face during insertion through curved major vessels such as the aortic arch, providing information on wall rubbing and shape sensing. This study presents experimental and mathematical models of the optical fiber sensors with 2 degrees of freedom. Two connected eight-gear-shaped tubes were 3D printed from thermoplastic polyurethane (TPU). The optical fiber sensors are mounted inside the first tube, which shields them from external light and serves as a prototype for a catheter. The second tube acts as a flat reflector for the light-intensity-modulation-based optical fiber sensors. The first tube is attached to a linear guide for insertion and withdrawal and can be manually turned 45° by manipulating the tube gear. A 3D hard-material phantom that mimics the anatomical structure of the aortic arch was developed, in which the tests were carried out. During insertion of the sensors into the 3D phantom, datasets are obtained in terms of voltage, distance, and position of the sensors. These datasets reflect the light-intensity-modulation characteristics of the optical fiber sensors with a plane projection of the aortic arch shape. Mathematical modeling of the light intensity was carried out based on the projection plane and the experimental set-up.
The performance of the system was evaluated in terms of its accuracy in navigating through the curvature and reporting the position of the sensors, by investigating 40 single insertions of the sensors into the 3D phantom. The experiments demonstrated that the sensors were effectively steered through the 3D phantom curvature to the desired target references in both degrees of freedom. The performance of the sensors follows the theory of light reflectance: the smaller the radius of curvature, the more of the emitted LED light is reflected and received by the photodiode. The mathematical model results are in good agreement with the experimental results and with the operating principle of light intensity modulation in optical fiber sensors. A prototype catheter made of TPU, with three optical fiber sensors mounted inside, has been developed that is capable of navigating through different radii of curvature with 2 degrees of freedom. The proposed system supports operators with pre-scan data to make maneuvering and bending through curved major vessels easier, more accurate, and safer. The mathematical modelling accurately fits the experimental results.

Keywords: intensity-modulated optical fiber sensor, mathematical model, plane projection, shape sensing
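The light-intensity-modulation principle (more light reaches the photodiode when the reflector is closer) can be sketched with a toy model. The inverse-square form and the constants below are illustrative assumptions, not the authors' calibrated model.

```python
# A toy model of intensity-modulated distance sensing: received intensity
# decreases monotonically with reflector distance (functional form and
# constants are illustrative assumptions, all values invented).

def reflected_intensity(distance_mm, i0=1.0):
    """Received intensity for a flat reflector at the given distance."""
    return i0 / (1.0 + distance_mm) ** 2

near = reflected_intensity(1.0)   # reflector close to the fiber tip
far = reflected_intensity(5.0)    # reflector further away
print(near > far)  # True
```

Because the mapping is monotonic, a calibrated photodiode voltage can be inverted to a distance estimate, which is how the voltage-distance-position datasets described above support shape sensing.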
217 Analytical and Numerical Modeling of Strongly Rotating Rarefied Gas Flows
Authors: S. Pradhan, V. Kumaran
Abstract:
Centrifugal gas separation processes effect separation by utilizing the difference in mole fraction in a high-speed rotating cylinder caused by the difference in molecular mass and, consequently, the centrifugal force density. These have been widely used in isotope separation because chemical separation methods cannot be used to separate isotopes of the same chemical species. More recently, centrifugal separation has also been explored for the separation of gases such as carbon dioxide and methane. The efficiency of separation is critically dependent on the secondary flow generated by temperature gradients at the cylinder wall or by inserts, and it is important to formulate accurate models for this secondary flow. The widely used Onsager model for secondary flow is restricted to very long cylinders where the length is large compared to the diameter, in the limit of high stratification parameter, where the gas is restricted to a thin layer near the wall of the cylinder; it also assumes that there is no mass difference between the two species when calculating the secondary flow. There are two objectives of the present analysis of the rarefied gas flow in a rotating cylinder. The first is to remove the restriction of high stratification parameter, to generalize the solutions to low rotation speeds where the stratification parameter may be O(1), and to apply them to dissimilar gases, considering the difference in molecular mass of the two species. Secondly, we would like to compare the predictions with molecular simulations based on the direct simulation Monte Carlo (DSMC) method for rarefied gas flows, in order to quantify the errors resulting from the approximations at different aspect ratios, Reynolds numbers and stratification parameters.
In this study, we have obtained analytical and numerical solutions for the secondary flows generated at the cylinder's curved surface and at the end-caps due to a linear wall temperature gradient and external gas inflow/outflow at the axis of the cylinder. The effect of sources of mass, momentum and energy within the flow domain is also analyzed. The analytical solutions are compared with the results of DSMC simulations for three types of forcing: a wall temperature gradient, inflow/outflow of gas along the axis, and mass/momentum input due to inserts within the flow. The comparison reveals that the boundary conditions in the simulations and the analysis have to be matched with care. The commonly used diffuse-reflection boundary conditions at solid walls in DSMC simulations result in a non-zero slip velocity as well as a temperature slip (the gas temperature at the wall differs from the wall temperature). These have to be incorporated in the analysis in order to make quantitative predictions. In the case of mass/momentum/energy sources within the flow, it is necessary to ensure that the homogeneous boundary conditions are accurately satisfied in the simulations. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 10%, even when the stratification parameter is as low as 0.707, the Reynolds number is as low as 100, the aspect ratio (length/diameter) of the cylinder is as low as 2, and the secondary flow velocity is as high as 0.2 times the maximum base flow velocity.

Keywords: rotating flows, generalized Onsager and Carrier-Maslen model, DSMC simulations, rarefied gas flow
216 COVID-19: Potential Effects of Nutritional Factors on Inflammation Relief
Authors: Maryam Nazari
Abstract:
COVID-19 is a respiratory disease triggered by the novel coronavirus SARS-CoV-2, which has now reached pandemic status. Acute inflammation and immune cell infiltration into lung injuries result in multi-organ failure. The presence of other non-communicable diseases (NCDs) alongside the systemic inflammation derived from COVID-19 may exacerbate the patient's situation and increase the risk of adverse effects and mortality. This pandemic is a novel situation, and the scientific community is currently looking for vaccines or drugs to treat the pathology. One of the biggest challenges is reducing inflammation without compromising the patient's correct immune response. In this regard, nutritional factors should not be overlooked, not only to avoid the co-occurrence of NCDs with severe infections but also as an adjunctive way to modulate the inflammatory status of patients. Despite the pivotal role of nutrition in modifying the immune response, information about the effects of specific dietary agents is limited owing to the novelty of the COVID-19 disease. From the macronutrient point of view, protein deficiency (in quantity or quality) has negative effects on the number of functional immunoglobulins and on gut-associated lymphoid tissue (GALT). High-biological-value proteins and some amino acids, such as arginine and glutamine, are well known for their ability to augment the immune system. Among lipids, fish oil has the ability to inactivate enveloped viruses, suppress pro-inflammatory prostaglandin production, and block platelet-activating factors and their receptors. In addition, protectin D1, an omega-3 PUFA derivative, is a novel antiviral drug. It therefore seems that these fatty acids could reduce the severity and/or improve the recovery of patients with COVID-19. Carbohydrates with a lower glycemic index, and fibers, are associated with lower levels of inflammatory cytokines (CRP, TNF-α, and IL-6).
Short-chain fatty acids not only exert a direct anti-inflammatory effect but also support an appropriate gut microbiota, which is important in the gastrointestinal issues related to COVID-19. From the micronutrient point of view, vitamins A, C, D and E, iron, magnesium, zinc, selenium and copper play a vital role in the maintenance of immune function. Inadequate status of these nutrients may result in decreased resistance to COVID-19 infection. There are specific bioactive compounds in the diet that interact with the ACE2 receptor, the gateway for SARS and SARS-CoV-2, and thus control the viral infection. In this regard, the potential benefits of probiotics, resveratrol (a polyphenol found in grapes), oleoylethanolamide (derived from oleic acid), and natural peroxisome proliferator-activated receptor γ agonists in foodstuffs (such as curcumin, pomegranate, and hot pepper) are suggested. Yet it should be pointed out that most of these results have been reported in animal models, and further human studies are needed for verification.

Keywords: COVID-19, inflammation, nutrition, dietary agents
215 Applying Napoleoni's 'Shell-State' Concept to Jihadist Organisations' Rise in Mali, Nigeria and Syria/Iraq, 2011-2015
Authors: Francesco Saverio Angiò
Abstract:
The Islamic State of Iraq and the Levant/Syria (ISIL/S), Al-Qaeda in the Islamic Maghreb (AQIM) and the People Committed to the Propagation of the Prophet's Teachings and Jihad, also known as 'Boko Haram' (BH), have fought successfully against the governments of Syria and Iraq, Mali, and Nigeria, respectively. According to Napoleoni, the 'shell-state' concept can explain the economic dimension and the financing model of the ISIL insurgency. However, she argues that AQIM and BH did not properly plan their financing models; consequently, her idea would not be applicable to these groups. Nevertheless, AQIM's and BH's economic performance and their (short-lived) territorialisation suggest that their financing models respond to a well-defined strategy, which they were able to adapt to new circumstances. Therefore, Napoleoni's idea of the 'shell-state' can be applied to all three jihadist armed groups. In the last five years, together with other similar entities, ISIL/S, AQIM and BH have been fighting against governments with insurgent tactics and acts of terrorism, conquering and ruling quasi-states: physical spaces they presented as legitimate territorial entities, thanks to a puritan version of Islamic law. In these territories, they have exploited the traditional local economic networks. In addition, they have contributed to the development of legal and illegal transnational business activities. They have also established a justice system and created an administrative structure to supply services. Napoleoni's 'shell-state' can describe the evolution of ISIL/S, AQIM and BH, which have switched from insurgencies to proto- or quasi-state entities enjoying a significant share of power over territories and populations. Napoleoni first developed and applied the 'shell-state' concept to describe the nature of groups such as the Palestine Liberation Organisation (PLO), before using it to explain the expansion of ISIL.
Her original conceptualisation, however, emphasises the economic dimension of the rise of an insurgency, focusing on the 'business' model and the insurgents' financial management skills, which permit them to turn into an organisation. Yet the idea of groups that use, coordinate and capture territorial economic activities (while encouraging new criminal ones) can also be applied to the administrative, social, infrastructural, legal and military levels of their insurgency, since these contribute to transforming the insurgency to the same extent that the economic dimension does. In addition, in Napoleoni's view, the 'shell-state' prism is valid for understanding the ISIL/S phenomenon because the group has carefully planned its financial steps. Napoleoni affirmed that ISIL/S carries out activities in order to promote its conversion from a group relying on external sponsors to an entity that can penetrate and condition local economies. By contrast, the 'shell-state' could not be applied to AQIM or BH, which act more like smugglers. Nevertheless, despite their failure to control territories as ISIL has been able to do, AQIM and BH have responded strategically to their economic circumstances and have established specific dynamics to ensure a stable flow of funds. Therefore, Napoleoni's theory is applicable.

Keywords: shell-state, jihadist insurgency, proto- or quasi-state entity, economic planning, strategic financing
214 Play, Practice and Perform: The Pathway to Becoming and Belonging as an Engineer
Authors: Rick Evans
Abstract:
Despite over 40 years of research into why women choose not to enroll in or leave undergraduate engineering programs, along with subsequent and serious efforts to attract more women, the proportion of women receiving bachelor's degrees in engineering in the US has remained disappointingly low. We know that, despite their struggles to become more welcoming and inclusive, engineering programs remain gendered, raced and classed. However, our research team has found that women participate and indeed thrive in undergraduate engineering project teams in numbers that far exceed their participation in undergraduate programs. We believe part of the answer lies in the ways that project teams facilitate experiential learning, specifically by providing opportunities for members to play, practice and perform. We employ a multi-case study method and assume a feminist, activist and interpretive perspective. We seek to generate concrete and context-dependent knowledge in order to explore potentially new variables and hypotheses. Our focus is to learn from those select women who are thriving. For this oral or e-poster presentation, we will focus on the results of the second of our semi-structured interviews: the learning journey interview. During this interview, we ask participants to tell us the story/ies of their participation in project teams. Our results suggest these women find joy in the experience of developing and applying engineering expertise. They experience this joy and develop their expertise in the highly patterned progression of play, practice and performance. Play is a purposeful activity in which someone enters an imaginary world, a world not yet real to them. However, this imaginary world is still very much connected to the real world, in this case a particular kind of engineering, in that the ways of engaging are already established, codified and rule-governed. As such, these women are novices motivated to join a community of actors.
Practice, better understood as practices, a count noun, is an embodied, materially interconnected collection of actions organized around the shared understandings of that community of actors. Those shared understandings reveal a social order – a particular field of engineering. No longer novices, these women begin to develop and display their emergent identities as engineers. Performance is activity meant either to demonstrate competence or to enable, and even teach, play and practice to others. As performers, these women participants become models for others. They direct play and practice, contextualizing both within a field of engineering and the specific aims of the project team community. By playing, practicing and performing engineering, women claim their identities as engineers and, equally important, have those identities acknowledged by team members. If we hope to transform our gendered, raced, classed institutions, we need to learn more about women who thrive within those institutions. We need to learn more about their processes of becoming and belonging as engineers. Our research presentation begins with a description of project teams and our multi-case study method. We then offer detailed descriptions of play, practice, and performance using the voices of women in project teams.
Keywords: engineering education, gender, identity, project teams
Procedia PDF Downloads 124
213 Drivers of the Performance of Members of a Social Incubator Considering the Values of Work: A Qualitative Study with Social Entrepreneurs
Authors: Leticia Lengler, Vania Estivalete, Vivian Flores Costa, Tais De Andrade, Lisiane Fellini Faller
Abstract:
Social entrepreneurship has emerged and driven a new development perspective, and as the literature mentions, it is based on innovation and, mainly, on the creation of social value, rather than personal wealth and shareholder value. In this field of study, one focus of discussion refers to the distinct characteristics of the individuals responsible for socially directed initiatives, named social entrepreneurs. To contribute to this perspective, the present study aims to identify the values related to work that guide the performance of social entrepreneurs, members of enterprises that have developed within a social incubator at a federal institution of higher education in Brazil. Each person's value system is present in different facets of their life, manifesting itself in their choices and in the way they conduct relationships with other people in society. The values of work in particular, the focus of this research, play a significant role in organizational studies, since they are considered one of the important guiding principles of the behavior of individuals in the work environment. Regarding the method, a descriptive, qualitative study was carried out. In the data collection, 24 entrepreneurs, members of five different enterprises belonging to the social incubator, were interviewed. The research instrument consisted of three open questions, which could be answered with the support of a "disc of values", an artifact organized to demonstrate the values of work clearly to the respondents. The analysis of the interviews took into account categories defined a priori, based on the model proposed by previous authors who validated these constructs within their research contexts, contemplating the following dimensions: Self-determination and stimulation; Safety; Conformity; Universalism and benevolence; Achievement; and Power. 
It should be noted that, in order to aid the interviewees' understanding, in the "disc of values" used in the research, these dimensions were represented by the objectives that define them, being respectively: Challenge; Financial independence; Commitment; Welfare of others; Personal success; and Power. Preliminary results show that priority is given to the work values related to Self-determination and stimulation, Conformity, and Universalism and benevolence. Such findings point to the importance given by these individuals to independent thinking and acting, as well as to novelty and constant challenge. Still, they demonstrate the appreciation of commitment to their enterprise, the people who make it up and the quality of their work. They also point to the relevance of the possibility of contributing to the greater social good, that is, of the search for the well-being of close people and of society, as is implied in models of social entrepreneurship in the literature. With a lower degree of priority, the values of Safety and Achievement, namely the financial dimension of work and the search for satisfaction and personal success through the use of socially recognized skills, were mentioned with little emphasis by the social entrepreneurs. The Power value was not considered a guiding principle of work by the respondents.
Keywords: qualitative study, social entrepreneur, social incubator, values of work
Procedia PDF Downloads 261
212 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry
Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker
Abstract:
Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows the easy application of process control. The temperature may change unpredictably due to friction in the process; hence the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of twin-screw extrusion (TSE) can overcome these limitations. Additionally, TSE provides a platform for continuous synthesis or manufacturing, as it is an open-ended process, with feedstocks at one end and product at the other. Several materials, including metal-organic frameworks (MOFs), co-crystals and small organic molecules, have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting on product quality. To handle the above-mentioned drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about the quality of the product. Additionally, this is combined with real-time process information in an Advanced Process Control system (PharmaMV, Perceptive Engineering), allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation reaction of barbituric acid and vanillin and the direct amidation of hydroquinone by ammonium acetate to form N-Acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE; nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. 
This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, which relate the percentage conversion to product to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV Real-Time System to carry out automated optimization experiments to maximize the percentage conversion based on a set of process parameters in a design of experiments (DoE) style methodology. With the optimum set of process parameters established, a series of pseudo-random binary sequence (PRBS) process response tests around the optimum was conducted. The resultant dataset was used to build a statistical model and an associated MPC. The controller maximizes product quality whilst ensuring the process remains at the optimum even as disturbances, such as raw material variability, are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for optimization and control of two TSE-based mechanosynthetic processes.
Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control
Procedia PDF Downloads 179
211 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes
Authors: Karolina Wieczorek, Sophie Wiliams
Abstract:
Introduction: Patient data is often collected in Electronic Health Record (EHR) systems for purposes such as providing care as well as reporting data. This information can be re-used to validate data models in clinical trials or in epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from this disease; many letters discuss a diagnostic process, different tests, or whether a patient has a certain disease. The COVID-19 dataset in this study used a natural language processing (NLP) algorithm, which automatically extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information which contain patient data not captured in a structured form, hence the use of named entity recognition (NER) to capture additional information. Methods: Patient data (discharge summary letters) were exported and screened by the algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) Clinical Terms was provided in Excel with corresponding IDs. Two independent medical student researchers were provided with this dictionary of SNOMED terms to refer to when screening the notes. They worked on two separate datasets, called "A" and "B", respectively. Notes were screened to check that the correct term had been picked up by the algorithm and that negated terms were not picked up. Results: Its implementation in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020. 
The dataset has contributed to large, priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating the provenance and quality, including the completeness and accuracy, of the data. The results of the validation of the algorithm were as follows: precision 0.907, recall 0.416, and F-score 0.570. The percentage enhancement with NLP-extracted terms compared to regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher (16.6%, 29.53%, 30.3%, 45.1%) for complications, presenting illness, chronic procedures, and acute procedures, respectively. Conclusions: This automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials to assess potential study exclusion criteria for participants in the development of vaccines.
Keywords: automated, algorithm, NLP, COVID-19
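As a sanity check on the quoted metrics, note that the F-score is the harmonic mean of precision and recall; a short sketch using only the values reported above:

```python
def f_score(precision: float, recall: float) -> float:
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Using the reported precision (0.907) and recall (0.416)
f1 = f_score(0.907, 0.416)  # ≈ 0.570, consistent with the quoted F-score
```

The high precision with low recall pattern reported here means the algorithm rarely flags a wrong term but misses many true mentions.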
Procedia PDF Downloads 102
210 Participatory Action Research with Social Workers: The World Café Method to Share Critical Reflections and Possible Solutions on Working Practices in Migration Contexts
Authors: Ilaria Coppola, Davide Lacqua, Nadia Ranìa
Abstract:
Over the past two decades, migration has gained central importance in the global landscape. Europe hosts the largest number of migrants, totaling 92.9 million people, approximately 37.4 million of whom are regular residents within the European Union's borders. Reception services and different modes of management have received increasing attention precisely because of the complexity of the phenomenon, which necessarily impacts the wider community. Indeed, opening a reception center in an area entails major challenges for that context, for the community that inhabits it, and for the people who use that service. Questioning the strategies needed to offer a functional reception service means listening to the different actors involved, who daily face the difficulties of working in the field. Recognizing the importance of the professional figures who work closely with migrant people, each with their own specific experiences, has led researchers to study and analyze the different types of reception centers and their management. This has led to the development of intervention models and best practices in various countries. However, research from this perspective is still limited, especially in Italy. Against this theoretical framework, this study aims to bring out, through an innovative qualitative tool, the World Café, the work experiences of 29 social workers working in shelters in the Italian context. Most of the participants were female and lived in the northwest regions of Italy. Through this tool, the aim was to bring out and share reflections on the critical issues encountered in working in reception centers, with a view to identifying possible solutions for better management of services. The World Café is a tool used in participatory action research that promotes dialogue among participants through the sharing of reflections and ideas. 
Starting from critical reflections, participants are invited to identify and share possible solutions to provide a more functional service, with benefits for the entire community. Therefore, this research, through the innovative technique of the World Café, aims to promote critical thinking processes that can help participants find solutions that can be introduced into their work contexts or proposed to decision-makers. Specifically, the findings shed light on several issues, including complex bureaucratic procedures, insufficient project planning, and inefficiencies in the services provided to migrants. These concerns collectively contribute to what participants perceive as a disorganized and uncoordinated system. In addition, the study explores potential solutions that promote more efficient networking practices, coordinated project management, and a more positive approach to cultural diversity. The main results obtained will be discussed with a focus on critical reflections and the possible solutions identified.
Keywords: participatory action research, world café method, reception services, migration contexts, social workers, Italy
Procedia PDF Downloads 67
209 Preventative Programs for At-Risk Families of Child Maltreatment: Using Home Visiting and Intergenerational Relationships
Authors: Kristina Gordon
Abstract:
One in three children in the United States is the subject of a maltreatment investigation, and about one in nine children has a substantiated investigation. Home visiting is one of several preventative strategies rooted in an early childhood approach that fosters maternal, infant, and early childhood health, protection, and growth. In the United States, 88% of states report administering home visiting programs or state-designed models. The purpose of this study was to conduct a systematic review of home visiting programs in the United States focused on the prevention of child abuse and neglect. This systematic review included 17 articles, most of which reported optimistic results. Common across studies was program content related to (1) typical child development, (2) parenting education, and (3) child physical health. Although several factors common to home visiting and parenting interventions have been identified, no research has examined the common components of manualized home visiting programs to prevent child maltreatment. Child maltreatment can be addressed with home visiting programs that have evidence-based components and cultural adaptations that increase prevention by assisting families in tackling the risk factors they face. An innovative approach to child maltreatment prevention is bringing together at-risk families with the aging community. This approach was prompted by the fact that existing home visitation programs focus only on improving skill sets and provide only temporary relationships. It can give families the opportunity to build a relationship with an aging individual who can share their wisdom, skills, compassion, love, and guidance, to support families in their well-being and decrease the occurrence of child maltreatment. Families would be identified if they experience any of the risk factors, including parental substance abuse, parental mental illness, domestic violence, and poverty. 
Families would also be identified as at risk if they lack supportive relationships with, for example, grandparents or relatives. Families would be referred by local agencies such as medical clinics, hospitals, and schools that interact with families regularly. Members of the aging community would be recruited at local housing communities and community centers; an aging individual would be identified within the elderly community when there is a need for, or interest in, such a relationship by or for the individual. Cultural considerations would be made when assessing compatibility between the families and aging individuals. The pilot program will consist of a small group of participants to allow manageable results for evaluating the efficacy of the program. The pilot will include pre- and post-surveys to evaluate the impact of the program. From the results, data would be compiled to determine the efficacy of the pilot as well as the adequacy of its design. The pilot would also be evaluated on whether families were referred to Child Protective Services during the pilot, as this relates to the goal of decreasing child maltreatment. Ideally, the findings will display a decrease in child maltreatment and an increase in family well-being for participants.
Keywords: child maltreatment, home visiting, neglect, preventative, abuse
Procedia PDF Downloads 117
208 Women's Entrepreneurship in the MENA Region: GEM Key Learnings
Authors: Fatima Boutaleb
Abstract:
Entrepreneurship proves to be crucial for economic growth and development, since it contributes to job creation and the improvement of overall productivity, thus generating a positive impact on society at various levels. Promoting entrepreneurship therefore stimulates economic diversity that is key to bettering and/or maintaining the standard of living. In fact, recent research suggests that entrepreneurship contributes to development by creating businesses and jobs, stimulating innovation, creating social capital across borders, and channeling political and financial capital. However, different research studies indicate that among the main factors impeding entrepreneurship are politico-economic as well as socio-cultural problems, which weigh most heavily on young people and on women. In the MENA region, discrimination inherent in gender is alarming: only one woman in eight runs her own business, against one man in three. In most countries, young women and young men face problems involving access to finance, inadequate infrastructure, lack of support and, in general, an ecosystem that is rather unfavorable. According to the International Labor Organization, the Middle East and North Africa region has the highest unemployment rate of all regions of the world. Moreover, nearly a quarter of the population under 30 is unemployed, and youth unemployment costs the region more than $40 billion each year. In the current context, the situation in the Middle East and North Africa region is singular, both in terms of demographic trends and of socio-economic issues around the employment of a large and better-trained youth population that is still strongly affected by unemployment and under-employment. According to a study published in 2015 by McKinsey, the world would gain 26% in additional GDP (47% in the MENA region), more than 28 trillion dollars by 2025, if women participated in the economy to the same extent as men. 
Promoting entrepreneurship represents an excellent alternative for countries whose productive fabric fails to integrate the contingent of young people entering the job market each year. The MENA region, presenting entrepreneurial activity rates below those of other regions at comparable levels of development, undoubtedly has leeway here, even though the region displays large national heterogeneity, namely in the priority given to the promotion of entrepreneurship. The objective of this article is therefore to examine the female entrepreneurial vocation in the MENA region, to see to what extent research on the determinants of gender can provide information on trends in emerging entrepreneurial activity, whether driven by necessity or by opportunity, and, on this basis, to submit public policy proposals for the improvement of the mechanisms of inclusion of young women. The objective is not to analyze causality models but rather to identify the entrepreneurial construct specific to the MENA region via the analysis of GEM data from 2017 to 2019 among adults belonging to 10 countries of the MENA region. Notably, the study shows that the inclusion of young women may be enhanced: these disadvantaged segments frequently intend to become entrepreneurs, but they tend not to enact their vocational intentions.
Keywords: economic development, entrepreneurial activity, GEM, gender, informal sector
Procedia PDF Downloads 102
207 The Academic Experience of Vocational Training Teachers
Authors: Andréanne Gagné, Jo Anni Joncas, Éric Tendon
Abstract:
Teaching in vocational training requires an excellent mastery of the trade being taught, but also solid professional skills in pedagogy. Teachers are typically recruited on the basis of their trade expertise, and they do not necessarily have training or experience in pedagogy. To counter this lack, the Ministry of Education (Québec, Canada) requires them to complete a 120-credit university program to obtain their teaching certificate. They must complete this training in addition to their teaching duties. This training was rarely planned for in the teacher's life course, and each teacher approaches it differently: some are enthusiastic, but many feel reluctance, discouragement, and even frustration at the idea of committing to a training program that takes an average of 10 years to complete. However, Quebec is experiencing an unprecedented shortage of teachers, and the perseverance of vocational teachers in their careers requires special attention because of their specific integration conditions. Our research examines the perceptions that vocational teachers in training have of their academic experience in pre-service teaching. It differs from previous research in that it focuses on the influence of the academic experience on the teaching employment experience. The goal is that, by better understanding the university experience of teachers in vocational education, we can identify strategies to support their school experience and their teaching. To do this, the research is based on the theoretical framework of the sociology of experience, which allows us to study the way in which these "teachers-students" give meaning to their university program in articulation with their jobs according to three logics of action. The logic of integration is based on the process of socialization, where action is preceded by the internalization of values, norms, and cultural models associated with the training context. 
The logic of strategy refers to the usefulness of this experience, where the individual constructs a form of rationality according to their objectives, resources, social position, and situational constraints. The logic of subjectivation refers to reflexivity activities aimed at solving problems and making choices. These logics served as a framework for the development of an online questionnaire. Three hundred respondents, newly enrolled in an undergraduate teaching program (bachelor's degree in vocational education), expressed themselves about their academic experience. This paper relates qualitative data (open-ended questions), subjected to an interpretive repertory analysis approach, to the descriptive data (closed-ended questions) that emerged. The results shed light on how the respondents perceive themselves as teachers and students, their perceptions of university training and the support offered, and the place that training occupies in their professional path. Indeed, their professional and academic paths are inextricably linked, and it seems essential to take them into account simultaneously to better meet their needs and foster the development of their expertise in pedagogy. The discussion focuses on the strengths and limitations of university training from the perspective of the logics of action. The results also suggest support strategies that can be implemented to better support the integration and retention of student teachers in vocational education.
Keywords: teacher, vocational training, pre-service training, academic experience
Procedia PDF Downloads 115
206 Developing Thai-UK Double Degree Programmes: An Exploratory Study Identifying Challenges, Competing Interests and Risks
Abstract:
In Thailand, a 4.0 policy has been initiated that is designed to prepare and train an appropriate workforce to support the move to a value-based economy. One aspect of support for this policy is a project to encourage the creation of double degree programmes, specifically between Thai and UK universities. This research into the project, conducted with its key players, explores the factors that can either enable or hinder the development of such programmes, an area that has received little research attention to date. Key findings focus on differences in quality assurance requirements, attitudes to benefits and risks, and committed levels of institutional support, thus providing valuable input into future policy making. The Transnational Education (TNE) Development Project was initiated in 2015 by the British Council, in conjunction with the Office for Higher Education Commission (OHEC), Thailand. The purpose of the project was to facilitate opportunities for Thai universities to partner with UK universities so as to develop double degree programme models. In this arrangement, the student gains both a UK and a Thai qualification, spending time studying in both countries. Twenty-two partnerships were initiated via the project. Utilizing a qualitative approach, data sources included participation in TNE project workshops, peer reviews, and over 20 semi-structured interviews conducted with key informants within the participating UK and Thai universities. Interviews were recorded, transcribed, and analysed for key themes. The research has revealed that the strength of the relationship between the two partner institutions is critical. Successful partnerships are often built on previous personal contact, have senior-level involvement and are strengthened by partnership on different levels, such as research, student exchange, and other forms of mobility. 
The support of the British Council was regarded as a key enabler in developing these types of projects for those universities that had not been involved in TNE previously. The involvement of industry is apparent in programmes that have high scientific content but is not well developed in other subject areas. Factors that hinder the development of partnership programmes include the approval processes and quality requirements of each institution. Significant differences in fee levels between Thai and UK universities present a challenge, and attempts to bridge them require goodwill on the part of the latter that may be difficult to realise. This research indicates the key factors to which attention needs to be given when developing a TNE programme. Early attention to these factors can reduce the likelihood that the partnership will fail to develop. Representatives in both partner universities need to understand their respective processes of development and approval. The research has important practical implications for policy-makers and planners involved with TNE, not only in relation to the specific TNE project but also more widely in relation to the development of TNE programmes in other countries and other subject areas. Future research will focus on assessing the success of the double degree programmes generated by the TNE Development Project from the perspective of universities, policy makers, and industry partners.
Keywords: double-degree, internationalization, partnerships, Thai-UK
Procedia PDF Downloads 103
205 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature, as fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (such as retail, groceries, or online services), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods have varying risk levels associated with fraud. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. 
The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
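A minimal sketch of the isolation-forest component of such a pipeline, using scikit-learn; the two features, their distributions, and the contamination level are illustrative assumptions, not the study's actual data or configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Legitimate transactions: modest amounts, mostly daytime hours (illustrative)
normal = np.column_stack([
    rng.normal(50.0, 15.0, 500),   # transaction amount
    rng.normal(14.0, 3.0, 500),    # hour of day
])

# Hypothetical frauds: large amounts at unusual hours
fraud = np.array([[900.0, 3.0], [1200.0, 2.0]])

# Fit on (mostly) normal data; predict() returns -1 for anomalies, +1 for inliers
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = model.predict(fraud)
```

An autoencoder would be used analogously: train it to reconstruct normal transactions and flag inputs whose reconstruction error exceeds a chosen threshold.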
204 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation
Authors: Matthias Leitner, Gernot Pottlacher
Abstract:
Experimental determination of critical point data like critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions could be diamond anvil devices, two-stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressures would be another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting metals. However, the critical point can also be estimated by extrapolating the liquid phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much as possible of the liquid phase and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine thermal volume expansion, and from that the density of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm.
To increase the accuracy of temperature deduction, spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles that are calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, volume expansion is then converted into density data. The so-obtained liquid density behavior is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid phase density was in a second step utilized as input data for the estimation of niobium’s critical point. The approach used heuristically takes into account the crossover from mean-field to Ising behavior, as well as the non-linearity of the phase diagram’s diameter.
Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion
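The density deduction described above can be sketched in a few lines: with longitudinal expansion inhibited, the volume ratio is the square of the diameter ratio, so ρ(T) = ρ₀ (d₀/d(T))². The expanded diameter used below is illustrative, not a measured value.

```python
# Sketch of the density deduction: with longitudinal expansion inhibited,
# V(T)/V0 = (d(T)/d0)**2, so density follows from the room-temperature
# value. The expanded diameter below is illustrative, not measured data.
RHO_0 = 8570.0  # room-temperature density of niobium, kg/m^3

def density_from_diameter(d, d0):
    """rho(T) = rho0 * (d0 / d(T))**2 for a radially expanding wire."""
    return RHO_0 * (d0 / d) ** 2

# A 0.5 mm wire that expanded to 0.55 mm in diameter:
print(round(density_from_diameter(0.55e-3, 0.5e-3), 1))
```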
203 Neighborhood-Scape as a Methodology for Enhancing Gulf Region Cities' Quality of Life: Case of Doha, Qatar
Authors: Eman AbdelSabour
Abstract:
Sustainability is increasingly considered a critical aspect in shaping the urban environment, and it serves as a basis for innovation in global urban development. Currently, different models and structures affect how the criteria defining a sustainable city are interpreted. There is a collective need to shift urban growth onto a more durable path by presenting different suggestions regarding multi-scale initiatives. The global rise in urbanization has increased the demand and pressure for better urban planning choices and scenarios for more sustainable urban alternatives. The trend towards increasingly sustainable urban development (SUD) has prompted the need for an assessment tool at the urban scale. The neighborhood scale is receiving attention from a growing research community, since it appears to be a pertinent scale through which economic, environmental, and social impacts can be addressed. Although neighborhood design is a comparatively old practice, it was only in the initial years of the 21st century that environmentalists and planners started developing sustainability assessments at the neighborhood level. At this scale, urban reality can be considered beyond the single building, addressing themes too large for one structure while remaining small enough for concrete measures to be analyzed. Neighborhood assessment tools have a crucial role in helping neighborhoods perform sustainably and fulfill objectives through a set of themes and criteria. These devices are also known as neighborhood assessment tools, district assessment tools, and sustainable community rating tools. The primary focus of research has been on the economic and environmental aspects of sustainability, whereas social and cultural issues are rarely addressed. Therefore, this research focuses on Doha, Qatar, and discusses the current urban conditions of its neighborhoods.
The research problem focuses on spatial features in relation to socio-cultural aspects. The study is outlined in three parts. The first section comprises a review of the latest uses of wellbeing assessment methods to enhance the decision process for retrofitting the physical features of neighborhoods. The second section discusses urban settlement development, regulations, and the decision-making process; it includes an analysis of urban development policy with reference to neighborhood development, as well as a historical review of the urban growth of neighborhoods as atoms of the city system in Doha. The last part involves developing quantified indicators of subjective wellbeing through a participatory approach. Additionally, GIS will be utilized as a tool for visualizing the quality of life (QoL) improvements needed in neighborhood areas, as an assessment approach. Envisaging the present QoL situation in Doha's neighborhoods is a first step towards improving it, since neighborhood function involves many day-to-day activities of residents, which makes these areas dynamic.
Keywords: neighborhood, subjective wellbeing, decision support tools, Doha, retrofitting
202 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-frequency data sampled at 1/60 Hz (one reading per minute). The data is simulated with the Load Profile Generator (LPG) software, which had not previously been used for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-frequency data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques
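The kind of unsupervised signature matching described above can be sketched with a minimal dynamic time warping (DTW) implementation: a measured power cycle is compared against stored appliance templates. The templates and the measured signal below are illustrative, not from LPG or REDD.

```python
# Minimal DTW sketch: matching a measured power signature (W, sampled
# once per minute) against appliance templates. Signatures illustrative.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

fridge_template = np.array([0, 120, 120, 120, 0], dtype=float)
kettle_template = np.array([0, 2000, 2000, 0], dtype=float)
measured        = np.array([0, 0, 118, 121, 119, 0], dtype=float)

# The measured cycle should be far closer to the fridge template,
# even though the two series differ in length and alignment.
print(dtw_distance(measured, fridge_template) <
      dtw_distance(measured, kettle_template))
```

DTW's tolerance to stretching along the time axis is what makes it attractive at 1/60 Hz, where the same appliance cycle can span a varying number of samples.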
201 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters
Authors: Jyoti Sahu, Vinay A. Juvekar
Abstract:
Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g., polyethylene glycols or proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model is needed which can accurately predict the activity coefficient of the electrolyte as a function of temperature. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the electrolyte-organic molecule solution at different temperatures. Since the number of parameters is large, inaccuracies can creep in during their estimation, which can affect the reliability of predictions beyond the range in which the parameters were estimated. The cloud point of a solution is related to its free energy through temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the free energy model. If we use a two-pronged procedure, first using the cloud point of the solution to estimate some of the parameters of the thermodynamic model and then determining the rest using osmotic coefficient data, we gain on two counts. First, since fewer parameters are estimated in each of the two steps, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters better capture the temperature dependence, which is crucial when we wish to use the model outside the temperature window within which the parameter estimation is performed. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (isopropanol/ethanol) as the model system. The model of Robinson-Stokes-Glueckauf is modified by incorporating temperature-dependent Flory-Huggins interaction parameters.
The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. the w-p, w-s and p-s terms (w: water, p: polymer/organic molecule, s: salt). These parameters depend on both temperature and concentration. The concentration dependence is expressed as a quadratic expression in the volume fractions of the interacting species, and the temperature dependence is expressed in a parametric form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of the electrolyte-water-organic molecule system is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are estimated through measurement of the cloud point of the solution. The model is then used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperature of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model; the two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with experiment. Thus, this work confirms the importance of CST data in the estimation of the parameters of the thermodynamic model.
Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature
200 Promoting Compassionate Communication in a Multidisciplinary Fellowship: Results from a Pilot Evaluation
Authors: Evonne Kaplan-Liss, Val Lantz-Gefroh
Abstract:
Arts and humanities are often incorporated into medical education to help deepen understanding of the human condition and the ability to communicate from a place of compassion. However, gaps remain in our knowledge of compassionate communication training for postgraduate medical professionals (as opposed to students and residents), of how training opportunities include and impact the artists themselves, and of how train-the-trainer models can support learners in becoming teachers. In this report, the authors present results from a pilot evaluation of the UC San Diego Health: Sanford Compassionate Communication Fellowship, a 60-hour experiential program that uses theater, narrative reflection, poetry, literature, and journalism techniques to train a multidisciplinary cohort of medical professionals and artists in compassionate communication. In the culminating project, fellows design and implement their own projects as teachers of compassionate communication in their respective workplaces. Qualitative methods, including field notes and 30-minute Zoom interviews with each fellow, were used to evaluate the impact of the fellowship. The cohort included both artists (n=2) and physicians representing a range of specialties (n=7), such as occupational medicine, palliative care, and pediatrics. The authors coded the data using thematic analysis for evidence of how the multidisciplinary nature of the fellowship impacted the fellows’ experiences. The findings show that the multidisciplinary cohort contributed to a greater appreciation of compassionate communication in general.
Fellows expressed that the ability to witness how those in different fields approached compassionate communication enhanced their learning and helped them see how compassion can be expressed in various contexts, which was both “exhilarating” and “humbling.” One physician expressed that the fellowship has been “really helpful to broaden my perspective on the value of good communication.” Fellows shared how what they learned in the fellowship translated to increased compassionate communication, not only in their professional roles but in their personal lives as well. A second finding was the development of a supportive community. Because each fellow brought their own experiences and expertise, there was a sense of genuine ability to contribute as well as a desire to learn from others. A “brave space” was created by the fellowship facilitators and the inclusion of arts-based activities: a space that invited vulnerability and welcomed fellows to make their own meaning without prescribing any one answer or right way to approach compassionate communication. This brave space contributed to a strong connection among the fellows and reports of increased well-being, as well as multiple collaborations post-fellowship to carry forward compassionate communication training at their places of work. Results show initial evidence of the value of a multidisciplinary fellowship for promoting compassionate communication for both artists and physicians. The next steps include maintaining the supportive fellowship community and collaborations with a post-fellowship affiliate faculty program; scaling up the fellowship with non-physicians (e.g., nurses and physician assistants); and collecting data from family members, colleagues, and patients to understand how the fellowship may be creating a ripple effect outside of the fellowship through fellows’ compassionate communication.
Keywords: compassionate communication, communication in healthcare, multidisciplinary learning, arts in medicine
199 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is inevitable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to elucidate cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the large size of the data, their complexity, and the need to search for hidden structures in the data, for knowledge mining to discover biological function, and for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options.
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases such as gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data types, such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on joint training on the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
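The SOM portrayal idea can be sketched minimally: a small grid of units is trained so that similar input profiles map to nearby units, separating distinct profile types on the map. The grid size, learning schedule and toy "expression profiles" below are illustrative assumptions, not the authors' configuration.

```python
# Minimal self-organizing map (SOM) sketch on toy omics-like profiles.
# Grid size, schedules and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated 3-dimensional profiles plus noise.
data = np.vstack([
    rng.normal(loc=[1.0, 0.0, 0.0], scale=0.05, size=(50, 3)),
    rng.normal(loc=[0.0, 0.0, 1.0], scale=0.05, size=(50, 3)),
])

grid = 4                                   # 4x4 map of units
W = rng.random((grid * grid, 3))           # unit weight vectors
coords = np.array([(i, j) for i in range(grid) for j in range(grid)])

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)            # decaying learning rate
    sigma = 2.0 * (1 - epoch / 20) + 0.5   # decaying neighborhood radius
    for x in rng.permutation(data):
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))            # neighborhood kernel
        W += lr * h[:, None] * (x - W)                # pull units toward x

# The two profile types should land on different best-matching units.
bmu_a = np.argmin(((W - data[0]) ** 2).sum(axis=1))
bmu_b = np.argmin(((W - data[-1]) ** 2).sum(axis=1))
print(bmu_a != bmu_b)
```

In the portrayal setting, the per-unit activation pattern of each sample would then be rendered as an image, which is what makes the maps readable by visual inspection.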
198 Mapping the State of the Art of European Companies Doing Social Business at the Base of the Economic Pyramid as an Advanced Form of Strategic Corporate Social Responsibility
Authors: Claudio Di Benedetto, Irene Bengo
Abstract:
The objective of this paper is to study how large European companies develop social business (SB) at the base of the economic pyramid (BoP). BoP markets are defined as the four billion people living on an annual income below $3,260 in local purchasing power. Although they are heterogeneous in geographic range, they share some common characteristics: the presence of significant unmet (social) needs, a high level of informal economy, and the so-called ‘poverty penalty’. As a result, most people living at the BoP are excluded from the value created by the global market economy. It is worth noting, however, that the BoP population, with an aggregate purchasing power of around $5 trillion a year, represents a huge opportunity for companies that want to enhance their long-term profitability perspective. We suggest that in this context the development of SB is, for companies, an innovative and promising way to satisfy unmet social needs and to experience new forms of value creation. Indeed, SB can be considered a strategic model for developing CSR programs that fully integrate the social dimension into the business to create economic and social value simultaneously. Although many studies in the literature have examined social business, only a few have explicitly analyzed the phenomenon from a company perspective, and the role of companies in the development of such initiatives remains understudied, with fragmented results. To fill this gap, the paper analyzes the key characteristics of the social business initiatives developed by European companies at the BoP. The study was performed by analyzing 1,475 European companies participating in the United Nations Global Compact, the world’s leading corporate social responsibility program. Through the analysis of corporate websites, the study identifies the companies that actually do SB at the BoP.
For the SB initiatives identified, information was collected according to a framework adapted from an established SB model. Preliminary results show that more than one hundred European companies have already implemented social businesses at the BoP, accounting for 6.5% of the total. This percentage increases to 15% if the focus is on companies with more than 10,440 employees. In terms of geographic distribution, 80% of the companies doing SB at the BoP are located in western and southern Europe. The companies most active in promoting SB belong to the financial sector (20%), the energy sector (17%), and the food and beverage sector (12%). In terms of the social needs addressed, almost 30% of the companies develop SB to provide access to energy and WASH, 25% to reduce local unemployment or promote local entrepreneurship, and 21% to promote the financial inclusion of the poor. In developing SB, companies implement different configurations, ranging from forms of outsourcing to internal development models. The study identifies seven main configurations through which companies develop social business; each configuration presents distinguishing characteristics with respect to the involvement of the company in management, the resources provided, and the benefits achieved. By performing different analyses on the collected data, the paper provides detailed insights into how European companies develop SB at the BoP.
Keywords: base of the economic pyramid, corporate social responsibility, social business, social enterprise
197 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract:
In today's world of medical diagnosis and prediction, machine learning stands out as a powerful tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, fracture locations and types must be identified accurately. Interpreting X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential misidentification. Therefore, a proper approach is needed to accurately localize and classify fractures in real time. The research revealed that the optimal approach must employ appropriate radiographic image processing techniques and object detection algorithms that effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture.
Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the 12 available fracture patterns: avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations, while the classification ensemble model, built on ResNet18 and VGG16, achieved an accuracy of 81.0%, showcasing its ability to categorize the various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating the potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16
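The abstract does not specify the exact ensembling rule, so the sketch below shows one common choice, soft voting: averaging the class probabilities of two classifier heads (stand-ins for the ResNet18 and VGG16 branches) over the 12 fracture types, with the maximum averaged probability serving as a confidence score. The probability vectors are illustrative.

```python
# Soft-voting sketch (an assumed ensembling rule, not necessarily the
# authors'): average two probability vectors over the 12 fracture types.
import numpy as np

FRACTURE_TYPES = [
    "avulsion", "comminuted", "compressed", "dislocation", "greenstick",
    "hairline", "impacted", "intraarticular", "longitudinal", "oblique",
    "pathological", "spiral",
]

def soft_vote(p_resnet, p_vgg):
    """Average the two probability vectors; return (label, confidence)."""
    p = (np.asarray(p_resnet) + np.asarray(p_vgg)) / 2.0
    idx = int(np.argmax(p))
    return FRACTURE_TYPES[idx], float(p[idx])

# One head leans strongly toward "hairline", the other agrees less strongly:
p1 = np.full(12, 0.02); p1[5] = 0.78   # index 5 == "hairline"
p2 = np.full(12, 0.04); p2[5] = 0.56
label, confidence = soft_vote(p1, p2)
print(label, round(confidence, 2))
```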
196 Transport of Inertial Finite-Size Floating Plastic Pollution by Ocean Surface Waves
Authors: Ross Calvert, Colin Whittaker, Alison Raby, Alistair G. L. Borthwick, Ton S. van den Bremer
Abstract:
Large concentrations of plastic have polluted the seas in the last half century, with harmful effects on marine wildlife and potentially on human health. Plastic pollution will have lasting effects because plastic is expected to take hundreds or thousands of years to decay in the ocean. The question arises of how waves transport plastic in the ocean. The predominant wave-induced motion creates elliptical orbits. However, these orbits do not close, resulting in a drift, defined as Stokes drift. If a particle is infinitesimally small and of the same density as water, it will behave exactly as the water does, i.e., as a purely Lagrangian tracer. However, as the particle grows in size or changes density, it will behave differently: the particle will have its own inertia, the fluid will exert drag on the particle because there is relative velocity, and it will rise or sink depending on its density and whether it is at the free surface. Previously, plastic pollution has been treated as purely Lagrangian. However, the steepness of waves in the ocean is small, normally about α = k₀a = 0.1 (where k₀ is the wavenumber and a is the wave amplitude). This means that the mean drift flows are of the order of ten times smaller than the oscillatory velocities (Stokes drift is proportional to steepness squared, whilst the oscillatory velocities are proportional to the steepness). Thus, to determine whether inertia is important, the particle motion must account for the forces of the full motion, oscillatory and mean flow, as well as a dynamic buoyancy term for the free surface. Tracking the motion of a floating inertial particle under wave action requires the fluid velocities, which form the forcing, and the full equations of motion of the particle to be solved, starting from the equation of motion of a sphere in unsteady flow with viscous drag.
Terms can then be added to the equation of motion to better model floating plastic: a dynamic buoyancy term to model a particle floating on the free surface, quadratic drag for larger particles, and a slope-sliding term. Using perturbation methods to order the equation of motion into sequentially solvable parts allows a parametric equation for the transport of inertial finite-size floating particles to be derived. This parametric equation can then be validated using numerical simulations of the equation of motion and flume experiments. This paper presents a parametric equation for the transport of inertial floating finite-size particles by ocean waves. The equation shows an increase in Stokes drift for larger, less dense particles. It has been validated using numerical solutions of the equation of motion and laboratory flume experiments. The difference between the particle transport equation and a purely Lagrangian tracer is illustrated using world maps of the induced transport. This parametric transport equation would allow ocean-scale numerical models to include the inertial effects of floating plastic when predicting or tracing the transport of pollutants.
Keywords: perturbation methods, plastic pollution transport, Stokes drift, wave flume experiments, wave-induced mean flow
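The steepness scaling quoted above can be checked with a few lines. The deep-water Stokes drift formula u_s = ωk a² e^{2kz} is standard (for a purely Lagrangian tracer, before any inertial corrections), while the wave parameters below are illustrative choices, not values from the paper.

```python
# Deep-water Stokes drift vs. orbital velocity: u_s = omega*k*a**2*exp(2kz),
# u_orb ~ omega*a, so their ratio at the surface is the steepness k*a.
# Wave parameters are illustrative.
import math

g = 9.81                    # gravitational acceleration, m/s^2
k = 0.1                     # wavenumber, rad/m (~63 m wavelength)
a = 1.0                     # amplitude, m -> steepness k*a = 0.1
omega = math.sqrt(g * k)    # deep-water dispersion relation

def stokes_drift(z):
    """Lagrangian mean drift at depth z (z <= 0, z = 0 at the surface)."""
    return omega * k * a**2 * math.exp(2 * k * z)

u_orbital = omega * a                  # scale of the oscillatory velocity
ratio = stokes_drift(0.0) / u_orbital  # equals the steepness k*a
print(round(ratio, 3))
```

This reproduces the order-of-magnitude argument in the text: at steepness 0.1, the mean drift is roughly ten times smaller than the oscillatory velocities, and it decays exponentially with depth.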
195 An Exploratory Factor and Cluster Analysis of the Willingness to Pay for Last Mile Delivery
Authors: Maximilian Engelhardt, Stephan Seeck
Abstract:
The COVID-19 pandemic is accelerating the already growing field of e-commerce. The resulting urban freight transport volume leads to traffic and negative environmental impacts. Furthermore, the service level of parcel logistics service providers lags far behind consumer expectations. These challenges can be addressed by radically reorganizing the urban last mile distribution structure: parcels could be consolidated in a micro hub within the inner city and delivered within time windows by cargo bike. This approach leads to a significant improvement in consumer satisfaction with the overall delivery experience. However, it also leads to significantly increased costs per parcel. While a relevant share of online shoppers is willing to pay for such a delivery service, no deeper insights about this target group are available in the literature. Being aware of the importance of knowing target groups for businesses, the aim of this paper is to elaborate the most important factors that determine the willingness to pay for sustainable and service-oriented parcel delivery (factor analysis) and to derive customer segments (cluster analysis). In order to answer these questions, a data set is analyzed using quantitative methods of multivariate statistics. The data set was generated via an online survey in September and October 2020 within the five largest cities in Germany (n = 1,071). It contains socio-demographic, living-related and value-related variables, e.g. age, income, city, living situation and willingness to pay. In a prior work of the authors, the data was analyzed using descriptive and inferential statistical methods, which provided only limited insights regarding the above-mentioned research questions. Analyzing the data in an exploratory way using factor and cluster analysis promises deeper insights into the relevant influencing factors and segments of user behavior for the proposed parcel delivery concept.
The analysis model is built and implemented with the statistical software R. The data analysis is currently being performed and will be completed in December 2021. It is expected that the results will show the most relevant factors determining user behavior regarding sustainable and service-oriented parcel delivery (e.g. age, current service experience, willingness to pay) and give deeper insights into the characteristics of the segments that are more or less willing to pay for a better parcel delivery service. Based on the expected results, relevant implications and conclusions can be derived for startups that are about to change the way parcels are delivered: more customer-oriented through time-window delivery and parcel consolidation, more environmentally friendly through cargo bikes. The results will give detailed insights into their target groups of parcel recipients. Further research can explore alternative revenue models (beyond the parcel recipient) that could compensate for the additional costs, e.g. online shops that increase their service level or municipalities that reduce traffic on their streets. Keywords: customer segmentation, e-commerce, last mile delivery, parcel service, urban logistics, willingness-to-pay
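The segmentation step described above (deriving customer groups from socio-demographic and willingness-to-pay variables) can be sketched with a plain k-means clustering on standardized data. The paper's analysis is done in R on the actual survey; the Python sketch below uses invented synthetic respondents (age, income, willingness to pay per parcel), so all values and the two-segment structure are illustrative assumptions, not the study's results.

```python
import random
import statistics

random.seed(0)
# Synthetic respondents: (age, monthly income in EUR, willingness to pay
# per parcel in EUR). Two invented groups for illustration only.
data = ([(random.gauss(28, 4), random.gauss(2200, 300), random.gauss(2.5, 0.5)) for _ in range(50)] +
        [(random.gauss(55, 6), random.gauss(3500, 400), random.gauss(0.5, 0.3)) for _ in range(50)])

# z-standardize each variable so no single scale (e.g. income) dominates
means = [statistics.mean(col) for col in zip(*data)]
sds = [statistics.stdev(col) for col in zip(*data)]
z = [tuple((v - m) / s for v, m, s in zip(row, means, sds)) for row in data]

def kmeans(points, k, iters=25):
    """Plain k-means: assign each point to the nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    centroids = random.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(col) / len(col) for col in zip(*members))
    return assign

assign = kmeans(z, k=2)
for c in range(2):
    wtp = [row[2] for row, a in zip(data, assign) if a == c]
    if wtp:
        print(f"segment {c}: n={len(wtp)}, mean willingness to pay = {statistics.mean(wtp):.2f} EUR")
```

In practice the number of segments would be chosen from the data (e.g. via an elbow or silhouette criterion) rather than fixed at two as in this sketch.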
Procedia PDF Downloads 108
194 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built at an accelerating pace since the 1960s, aided by rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has reached a degree that administrators, from an operation and maintenance standpoint, are forced to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are expected to be embodied for pavement management as well, so planning methods for these measures are increasingly in demand. Deterioration of layers near the road surface, such as the surface course and binder course, occurs in the early stages of the overall pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired first because inner damage usually becomes significant only after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they should. As expressways today carry serious time-related deterioration stemming from their long service lives, repairing layers deep in pavements, such as the base course and subgrade, must clearly be taken into consideration when planning large-scale maintenance. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, to the authors' knowledge, previous studies have presented statistical methods for predicting deterioration in pavements. 
One describes the deterioration process by estimating a Markov deterioration hazard model, while another illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of layers near the road surface. However, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to surface layers, by estimating a deterioration hazard model using continuous indexes. This model avoids the loss of information that occurs when deflection data are discretized into rating categories, as in the Markov deterioration hazard model, when evaluating degrees of deterioration in roadbeds and subgrades. By portraying continuous indexes, the model can predict deterioration in each layer of a pavement and evaluate it quantitatively. Additionally, since the model can depict the probability distribution of the indexes at an arbitrary point in time and a risk control level can be set arbitrarily, this study is expected to provide information such as life cycle costs to support decisions on where and when to perform maintenance. Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
Procedia PDF Downloads 390
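The core idea of a deterioration hazard model (inferring the rate at which a pavement layer approaches a deterioration threshold from inspection records, some of which are censored) can be illustrated with a minimal sketch. The constant (exponential) hazard form and the synthetic inspection records below are simplifying assumptions for illustration; the authors' model uses continuous deterioration indexes estimated from real FWD data, not this form.

```python
import math

# Illustrative synthetic inspection records: (years observed, 1 if the layer
# reached the deterioration threshold during observation, 0 if censored,
# i.e. still serviceable when last inspected). Values are invented.
records = [(8, 1), (12, 1), (15, 0), (6, 1), (20, 0), (10, 1), (18, 1), (25, 0)]

# For a constant hazard rate lambda with right-censored data, the
# maximum-likelihood estimate is: observed failures / total exposure time.
failures = sum(e for _, e in records)
exposure = sum(t for t, _ in records)
lam = failures / exposure

def survival(t, lam):
    """Probability the layer has not yet crossed the threshold at time t,
    under the exponential model: S(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)

print(f"hazard rate = {lam:.4f} per year, expected life = {1 / lam:.1f} years")
print(f"P(no threshold crossing within 10 years) = {survival(10, lam):.2f}")
```

A survival curve of this kind is what lets an administrator trade off a risk control level against the timing of maintenance: choosing the time at which S(t) falls below an acceptable threshold yields a candidate repair date for each layer.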