Search results for: macroeconomic uncertainty
383 The Domino Principle of Dobbs v Jackson Women’s Health Organization: The Gays Are Next!
Authors: Alan Berman, Mark Brady
Abstract:
The phenomenon of homophobia and transphobia in the United States detrimentally impacts the health, wellbeing, and dignity of school students who identify with the LGBTQ+ community. These negative impacts also compromise the participation of LGBTQ+ individuals in the wider life of educational domains and endanger the potential economic, social and cultural contribution this community can make to American society. The recent 6:3 majority decision of the US Supreme Court in Dobbs v Jackson Women’s Health Organization expressly overruled the 1973 decision in Roe v Wade and the 1992 Planned Parenthood v Casey decision. This study will canvass the bases upon which the court in Dobbs overruled the longstanding precedent established in Roe and Casey. It will examine the potential implications of the result in Dobbs for the LGBTQ+ community. The potential far-reaching consequences of this case are foreshadowed in a concurring opinion by Justice Clarence Thomas, suggesting the Court should revisit all substantive due process cases. This notably includes the Lawrence v Texas case (invalidating sodomy laws criminalizing same-sex relations) and the Obergefell case (upholding same-sex marriage). Finally, the study will examine the likely impact of the uncertainty brought about by the decision in Dobbs for LGBTQ+ students in US educational institutions. The actions of several states post-Dobbs reflect and exacerbate the problems facing LGBTQ+ students and uncover and highlight societal homophobia and transphobia.
Keywords: human rights, LGBT rights, right to personal dignity and autonomy, substantive due process rights
Procedia PDF Downloads 104
382 Comprehensive Validation of High-Performance Liquid Chromatography-Diode Array Detection (HPLC-DAD) for Quantitative Assessment of Caffeic Acid in Phenolic Extracts from Olive Mill Wastewater
Authors: Layla El Gaini, Majdouline Belaqziz, Meriem Outaki, Mariam Minhaj
Abstract:
In this study, we introduce and validate a high-performance liquid chromatography method with diode-array detection (HPLC-DAD) specifically designed for the accurate quantification of caffeic acid in phenolic extracts obtained from olive mill wastewater. The separation of caffeic acid was achieved using an Acclaim Polar Advantage column (5 µm, 250 x 4.6 mm). A multi-step gradient mobile phase was employed, comprising water acidified with phosphoric acid (pH 2.3) and acetonitrile, to ensure optimal separation. Diode-array detection was conducted within the UV–VIS spectrum, spanning a range of 200–800 nm, which facilitated precise analytical results. The method underwent comprehensive validation, addressing several essential analytical parameters, including specificity, repeatability, linearity, the limits of detection and quantification, and measurement uncertainty. The generated linear standard curves displayed high correlation coefficients, underscoring the method's efficacy and consistency. This validated approach is robust and demonstrates exceptional reliability for the analysis of caffeic acid within the intricate matrices of wastewater, offering significant potential for applications in environmental and analytical chemistry.
Keywords: high-performance liquid chromatography (HPLC-DAD), caffeic acid analysis, olive mill wastewater phenolics, analytical method validation
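As an illustration of the linearity and detection-limit checks described in this abstract, the sketch below fits a calibration line and applies the common ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S. The concentrations and peak areas are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical calibration data: caffeic acid standard concentrations (mg/L)
# and the corresponding HPLC-DAD peak areas (arbitrary units).
conc = np.array([1.0, 2.5, 5.0, 10.0, 25.0, 50.0])
area = np.array([12.1, 30.4, 61.0, 122.5, 304.8, 611.2])

# Ordinary least-squares fit of the standard curve: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)

# Correlation coefficient of the calibration line
r = np.corrcoef(conc, area)[0, 1]

# Residual standard deviation is used as sigma in the ICH-style formulas
sigma = residuals.std(ddof=2)
lod = 3.3 * sigma / slope   # limit of detection
loq = 10.0 * sigma / slope  # limit of quantification

print(f"slope={slope:.3f}, intercept={intercept:.3f}, r={r:.4f}")
print(f"LOD={lod:.3f} mg/L, LOQ={loq:.3f} mg/L")
```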
Procedia PDF Downloads 72
381 Agrarian Distress and Out Migration of Youths: Study of a Wet Land Village in Hirakud Command Area, Odisha
Authors: Kishor K. Podh
Abstract:
Agriculture is treated as the backbone of the Indian economy. More than 60 percent of the population depends on it, directly or indirectly, as the economic base for their livelihood. Despite its significant role, sharp declines in public investment and development in agriculture have been witnessed. After independence, the Hirakud Command Area (HCA) became popularly known as the Rice Bowl of the State, owing to its fabulous production, and it provides food to a larger part of the state. After the green revolution and subsequent liberalization, agrarian families became overburdened with loans. They started working as wage labourers in others' fields and in non-farm sectors to overcome this indebtedness. Although production is increasing at present, the youths of this area still migrate to Tamil Nadu, Andhra Pradesh, Maharashtra, Gujarat, etc. for jobs, because agriculture no longer remains a profitable occupation; increasing input costs, the uncertainty of crops, improper pricing, poor marketing, etc. compel the youths to choose alternative occupations. They work in industries (under contractors), as construction workers and in other menial jobs due to a lack of skills and degrees. Kharmunda, a village within the HCA, was selected for convenience, and 100 youth migrants who were present during data collection were purposively selected and interviewed. The study analyses the types of migration, their similarities and differences, and their determining factors in two geographical areas of Western Odisha, i.e., single-crop and double-crop, in relation to agricultural situations.
Keywords: agrarian distress, double crops, Hirakud Command Area, indebtedness, out migration, Western Odisha
Procedia PDF Downloads 336
380 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function
Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos
Abstract:
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to be able to predict what conditions or decisions might happen under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance for its application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. According to the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trends functions, two-parameter Weibull density function
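The sketch below gives one plausible, simplified reading of such a model: an Euler-Maruyama simulation of a diffusion whose drift is driven by the two-parameter Weibull density. The parameter values and the exact drift form are illustrative assumptions, not the specification used by the authors.

```python
import numpy as np

def weibull_pdf(t, shape, scale):
    """Two-parameter Weibull probability density function."""
    z = t / scale
    return (shape / scale) * z**(shape - 1) * np.exp(-z**shape)

def simulate_paths(x0=1.0, shape=1.5, scale=5.0, sigma=0.2,
                   t_max=20.0, n_steps=2000, n_paths=500, seed=0):
    """Euler-Maruyama simulation of a diffusion whose drift term is
    proportional to the Weibull density (illustrative model only)."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    t = np.linspace(dt, t_max, n_steps)
    x = np.full(n_paths, x0)
    paths = np.empty((n_steps, n_paths))
    for i, ti in enumerate(t):
        drift = weibull_pdf(ti, shape, scale) * x      # trend-driving term
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)     # Wiener increments
        x = x + drift * dt + sigma * x * dW            # Euler-Maruyama step
        paths[i] = x
    return t, paths

t, paths = simulate_paths()
print("sample mean at final time:", paths[-1].mean())
```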
Procedia PDF Downloads 309
379 An Investigation of the Relationship between Organizational Culture and Innovation Type: A Mixed Method Study Using the OCAI in a Telecommunication Company in Saudi Arabia
Authors: A. Almubrad, R. Clouse, A. Aljlaoud
Abstract:
Organizational culture (OC) is recognized to have an influence on the propensity of organizations to innovate. It is also presumed that it may impede the innovation process from thriving within the organization. Investigating the role organizational culture plays in enabling or inhibiting innovation merits exploration, in order to identify the organizational cultural attributes necessary to reach innovation goals. This study aims to investigate a preliminary heuristic matching OC attributes to the type of innovation that has the potential to thrive within those attributes. A mixed methods research approach was adopted to achieve the research aims. Accordingly, participants from a national telecom company in Saudi Arabia took the Organizational Culture Assessment Instrument (OCAI). A further sample, selected from the respondents’ pool and holding the role of managing directors, was interviewed in the qualitative phase. Our findings reveal that the market culture type has a tendency to adopt radical innovations to disrupt the market and to preserve its market position. In contrast, we find that the adhocracy culture type tends to adopt incremental innovation, which tends to be more convenient for employees due to its low levels of uncertainty. Our results are an encouraging indication that matching organizational culture attributes to the type of innovation aids innovation management. This study has limitations, as it draws its findings from a limited sample of OC attributes that identify with the adhocracy and market culture types. An extended investigation is merited to explore other types of organizational cultures and their optimal innovation types.
Keywords: incremental innovation, radical innovation, organization culture, market culture, adhocracy culture, OCAI
Procedia PDF Downloads 107
378 Seismic Loss Assessment for Peruvian University Buildings with Simulated Fragility Functions
Authors: Jose Ruiz, Jose Velasquez, Holger Lovon
Abstract:
Peruvian university buildings are critical structures for which very little research on their seismic vulnerability is available. This paper develops a probabilistic methodology that predicts seismic loss for university buildings with simulated fragility functions. Two university buildings located in the city of Cusco were analyzed. Fragility functions were developed considering uncertainty in seismic and structural parameters. The fragility functions were generated with the Latin Hypercube technique, an improved Monte Carlo-based method, which optimizes the sampling of structural parameters and provides at least 100 reliable samples for every level of seismic demand. Concrete compressive strength, maximum concrete strain and yield stress of the reinforcing steel were considered as the key structural parameters. The seismic demand is defined by synthetic records which are compatible with the elastic Peruvian design spectrum. Acceleration records are scaled based on the peak ground acceleration on rigid soil (PGA), which goes from 0.05g to 1.00g. A total of 2000 structural models were considered to account for both structural and seismic variability. These functions represent the overall building behavior because they give rational information regarding damage ratios for defined levels of seismic demand. The university buildings show expected Mean Damage Factors of 8.80% and 19.05%, respectively, for the 0.22g-PGA scenario, which was amplified by the soil type coefficient and resulted in 0.26g PGA. These ratios were computed considering a seismic demand with a 10% probability of exceedance in 50 years, which is a requirement of the Peruvian seismic code. These results show an acceptable seismic performance for both buildings.
Keywords: fragility functions, university buildings, loss assessment, Monte Carlo simulation, Latin hypercube
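A minimal sketch of the Latin Hypercube sampling step described above is given below; the parameter ranges for concrete strength, concrete strain and steel yield stress are assumed for illustration and are not taken from the study.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin Hypercube sample on the unit hypercube."""
    # one stratum per sample in each dimension, then shuffle strata independently
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

rng = np.random.default_rng(42)
n = 100  # at least 100 samples per seismic demand level, as in the study

# Hypothetical parameter ranges (illustrative only):
# concrete compressive strength f'c (MPa), max concrete strain, steel yield stress fy (MPa)
lower = np.array([17.0, 0.0030, 380.0])
upper = np.array([28.0, 0.0045, 480.0])

u = latin_hypercube(n, 3, rng)
samples = lower + u * (upper - lower)   # scale unit samples to the parameter ranges
print(samples[:5])
```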
Procedia PDF Downloads 144
377 Effectiveness of Enhancing Positive Emotion Program of Patients with Lung Cancer
Authors: Pei-Fan Mu
Abstract:
Background: Lung cancer is the most common cancer with the highest mortality rate. Patients with lung cancer under chemotherapy treatment experience life-threatening uncertainty. This study was based on the broaden-and-build theory, using intentionality reflection of the body and internalization of positive prioritization strategies to enhance the positive emotions of patients with lung cancer. Purpose: The purpose of this study was to use a quasi-experimental research design to examine the effectiveness of the enhancing positive emotion program. Method: Data were collected from a medical center in Taiwan. Fifty-four participants with lung cancer were recruited. Thirty participants were in the experimental group receiving the two-week program. The content of the program includes awareness and understanding of the symptom experience, co-existing with illness and establishing self-identity, cognitive-emotion adjustment and establishing a new body schema, and symptom management to reach spiritual well-being. Twenty-four participants were in the control group receiving regular nursing care. Symptom distress, positive emotion, and psychological well-being were measured at baseline, one month later and two months later. Results: The two-week enhancing positive emotion program resulted in a significantly improved positive emotion score for the experimental group compared to the control group. The findings of this study indicated that positive emotion differed significantly between the two groups. There were no differences in symptom distress between the two groups. Discussion: The findings indicated that the enhancing positive emotion program could help patients cope with the life-threatening conditions they face.
Keywords: positive emotion, lung cancer, experimental design, symptom distress
Procedia PDF Downloads 100
376 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy
Authors: Giorgio Visentin, Alexei A. Buchachenko
Abstract:
Recent proposals to use the Yb dimer as an optical clock and as a sensor for non-Newtonian gravity imply knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function consisting of short- and long-range contributions. For the former, systematic ab initio all-electron exact 2-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate the diffuse basis set component with atom- and bond-centered primitives and to reach the complete basis set limit through the n = D, T, Q sequence of correlation-consistent polarized n-zeta basis sets. Similar approaches are applied to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities. Dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow 734 ± 4 cm⁻¹ range, in reasonable agreement with previous ab initio-based estimations. The resulting potentials can be used as the reference for more sophisticated models that go beyond the Born-Oppenheimer approximation and provide the means for their uncertainty estimations. The work is supported by Russian Science Foundation grant # 17-13-01466.
Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer
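The Casimir-Polder integration mentioned above can be illustrated with the sketch below, which evaluates C6 = (3/π) ∫ α(iω)² dω for a single-oscillator (London) model of the dynamic polarizability; the α₀ and ω₁ values are rough placeholders, not the CCSD(3) data of the study.

```python
import numpy as np
from scipy.integrate import quad

# Single-oscillator (London) model of the dynamic polarizability at imaginary
# frequency, alpha(i*omega) = alpha0 / (1 + (omega/omega1)^2), in atomic units.
# alpha0 and omega1 below are placeholders, not the values computed in the paper.
alpha0 = 139.0   # approximate static dipole polarizability of Yb (a.u.)
omega1 = 0.2     # effective excitation energy (a.u.), illustrative

def alpha_iw(w):
    return alpha0 / (1.0 + (w / omega1) ** 2)

# Casimir-Polder integral for the homonuclear C6 dispersion coefficient:
# C6 = (3/pi) * integral_0^inf alpha(i w)^2 dw
integral, _ = quad(lambda w: alpha_iw(w) ** 2, 0.0, np.inf)
c6 = 3.0 / np.pi * integral

# For this simple model the integral has the closed (London) form (3/4)*alpha0^2*omega1
print("C6 (numerical quadrature):", c6)
print("C6 (London closed form)  :", 0.75 * alpha0 ** 2 * omega1)
```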
Procedia PDF Downloads 155
375 Life Locked Up in Immigration Detention: An Exploratory Study of Education in Australian Refugee Prisons
Authors: Carly Hawkins
Abstract:
Forced migration is at unprecedented levels globally, and many countries have implemented harsh policies regarding people seeking asylum. Australia legislates one of the harshest and most controversial responses in the world, sending any asylum seeker arriving by boat to indefinite offshore immigration detention. This includes children, families and unaccompanied minors. Asylum seekers and refugees are detained indefinitely by the Australian government in the Pacific Island countries of Papua New Guinea and Nauru. Global research on the impact of immigration detention has primarily focused on mental health and psychological concerns for both adults and children. Research into Australian immigration detention has largely overlooked the schooling and education of children detained in Nauru, despite refugee children spending more than five years in detention, a significant portion of a child’s life. This research focused on the experience of education for children detained offshore in Nauru from 2013 to 2019. Twenty-one qualitative interviews were conducted with children, parents and service providers between 2021 and 2022. Interviews explored experiences of schooling, power structures, and barriers to and support for education. Findings show that a lack of belonging and a lack of agency negatively affected school engagement. A sense of hopelessness and uncertainty also affected children's motivation to attend school, with many missing school for months or years. The research indicates that Australia’s current policy of offshore detention has been detrimental to children’s educational experiences.
Keywords: asylum seeker, children, education, immigration detention, policy, refugee, school
Procedia PDF Downloads 77
374 Parametric Study on the Development of Earth Pressures Behind Integral Bridge Abutments Under Cyclic Translational Movements
Authors: Lila D. Sigdel, Chin J. Leo, Samanthika Liyanapathirana, Pan Hu, Minghao Lu
Abstract:
Integral bridges are a class of bridges with integral or semi-integral abutments, designed without expansion joints in the bridge deck of the superstructure. Integral bridges are economical alternatives to conventional jointed bridges, with lower maintenance costs and greater durability, thereby improving social and economic stability for the community. Integral bridges have also been proven effective in lowering the overall construction cost compared to conventional bridges. However, there is significant uncertainty related to the design and analysis of integral bridges in response to cyclic thermal movements induced by deck expansion and contraction. The cyclic thermal movements of the abutments increase the lateral earth pressures on the abutment and its foundation, leading to soil settlement and heaving of the backfill soil. Thus, the primary objective of this paper is to investigate the soil-abutment interaction under cyclic translational movement of the abutment. Results from five experiments conducted to simulate different magnitudes of cyclic translational movements of abutments induced by thermal changes are presented, focusing on lateral earth pressure development at the abutment-soil interface. Test results show that the cycle number and the magnitude of cyclic translational movements have significant effects on the escalation of lateral earth pressures. Experimentally observed earth pressure distributions behind the integral abutment were compared with current design approaches, which shows that most current practices underpredict the lateral earth pressure.
Keywords: integral bridge, cyclic thermal movement, lateral earth pressure, soil-structure interaction
Procedia PDF Downloads 114
373 The Term of Intellectual Property and Artificial Intelligence
Authors: Yusuf Turan
Abstract:
The World Intellectual Property Organization defines Intellectual Property Rights as follows: "Intellectual property (IP) refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce." Two points in this definition are important: intellectual property is the result of intellectual activities carried out by one or more PERSONS, and it takes the form of INNOVATION. When the history and development of the relevant definitions are briefly examined, it becomes clear that these two points have remained constant and that Intellectual Property law and rights have been shaped around them. With the expansion of the scope of the term Intellectual Property as a result of the development of technology, especially in the field of artificial intelligence, questions such as "Can Artificial Intelligence be an inventor?" need to be resolved within this expanding scope. In past years, it was ruled in the USA that the artificial intelligence named DABUS did not meet the definition of an "individual" and therefore could not be an inventor. With developing technology, it is obvious that we will encounter such situations much more frequently in the field of intellectual property. While expanding the scope, we must determine the boundaries of how to decide who performs the mental activity or creativity that we regard as indispensable for an inventor. As a result of all these problems and innovative situations, it is clear that not only Intellectual Property Law and Rights but also their definitions need to be updated and improved. Ignoring situations that fall outside the scope of the current term of Intellectual Property is not enough to solve the problem and brings uncertainty. The fact that laws and definitions that have operated on the same theories for years exclude today's innovative technologies from their scope contradicts intellectual property, which is described as a new and innovative field. Today, as poetry, paintings, animations, music and even theatre works are created with artificial intelligence, it must be recognized that the definition of Intellectual Property must be revised.
Keywords: artificial intelligence, innovation, the term of intellectual property, right
Procedia PDF Downloads 72
372 [Keynote Talk] The Practices and Issues of Career Education: Focusing on Career Development Course on Various Problems of Society
Authors: Azusa Katsumata
Abstract:
Several universities in Japan have introduced activities aimed at the mutual enlightenment of a diversity of people in career education. However, several programs emphasize delivering results and practicing the prepared materials as planned. Few programs focus on unexpected failures and setbacks. This way of learning is important in career education, so that classmates can help each other, overcome difficulties, draw out each other’s strengths, and learn from them. Seijo University in Tokyo offered an excursion focusing on various problems of society as a second-year career education course. Students learned about contraception, infertility, homelessness and LGBT issues, and held discussions based on the excursion. This paper aims to study the ‘learning platform’ created by a series of processes such as the excursion, the discussion, and the presentation. In this course, students looked back on their lives and imagined the future in concrete terms, performing tasks in groups. The students encountered a range of values through lectures and conversations, thereby developing feelings of self-efficacy. We conducted a questionnaire to measure career development in the class. From the results of the questionnaire, we can see, in the example of this class, that students respected diversity and understood the importance of uncertainty and discontinuity. While the students developed career awareness, they had not yet encountered such situations in practice and would do so only in the future, when necessary. In this class, students consciously considered social problems but did not develop the practical skills necessary to deal with them. This is appropriate for a single project, but we need to consider how it can be incorporated into future courses. University constitutes only a single period in life-long career formation. Thus, further research may be indicated to determine whether the positive effects of career education at university continue to contribute to individual careers going forward.
Keywords: career education of university, excursion, learning platform, problems of society
Procedia PDF Downloads 263
371 Time Pressure and Its Effect at Tactical Level of Disaster Management
Authors: Agoston Restas
Abstract:
Introduction: When managing disasters, decision makers often face situations in which no advance sign of a drastic change is present, so improvised decision making may be required. The complexity, ambiguity, uncertainty or volatility of the situation can often demand improvisation in decision making. Improvisation can occur at any level of management (strategic, operational and tactical), but at the tactical level the main reason for improvisation is surely time pressure. It is certainly the biggest problem during management. Methods: The author used different tools and methods to achieve his goals; one of them was the study of the relevant literature, another was his own experience as a firefighting manager. Other results come from two surveys referred to here: one was an essay analysis, the second a word association test specially created for the research. Results and discussion: This article proves that, in certain situations, multi-criteria, evaluative decision-making processes simply cannot be used, or only in a limited manner. However, it can be seen that managers, directors or commanders are often in situations that simply cannot be ignored when making decisions that must be made in a short time. The functional background of decisions made in a short time, and their mechanism, which differs from the conventional one, has been studied recently, and this special decision procedure was given the name recognition-primed decision. In the article, the author illustrates the limits of analytical decision-making, presents the general operating mechanism of recognition-primed decision-making, elaborates on its special model relevant to managers at the tactical level, and explores and systemizes the factors that facilitate (catalyze) the processes, with an example involving fire managers.
Keywords: decision making, disaster managers, recognition primed decision, model for making decisions in emergencies
Procedia PDF Downloads 261
370 Development of a Geomechanical Risk Assessment Model for Underground Openings
Authors: Ali Mortazavi
Abstract:
The main objective of this research project is to delve into the multitude of geomechanical risks associated with the various mining methods employed within the underground mining industry. Controlling geotechnical design parameters and operational factors affecting the selection of suitable mining techniques for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of in-situ rock masses, the complex boundary conditions, and the operational complexities associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is always a threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major component of all underground mine design and essentially governs the safety of an underground mine. With regard to the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the in-situ rock mass conditions. The focus of this research is on developing a methodology which enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., rock in-situ properties, design method, and governing boundary conditions such as in-situ stress and groundwater).
Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering
Procedia PDF Downloads 147
369 Water Governance Perspectives on the Urmia Lake Restoration Process: Challenges and Achievements
Authors: Jalil Salimi, Mandana Asadi, Naser Fathi
Abstract:
Urmia Lake (UL) has undergone a significant decline in water levels, resulting in severe environmental, socioeconomic, and health-related challenges. This paper examines the restoration process of UL from a water governance perspective. By applying a water governance model, the study evaluates the process based on six selected principles: stakeholder engagement, transparency and accountability, effectiveness, equitable water use, adaptation capacity, and water usage efficiency. The dominance of structural and physicalist approaches to water governance has led to a weak understanding of social and environmental issues, contributing to social crises. Urgent efforts are required to address the water crisis and reform water governance in the country, making water-related issues a top national priority. The UL restoration process has achieved significant milestones, including stakeholder consensus, scientific and participatory planning, environmental vision, intergenerational justice considerations, improved institutional environment for NGOs, investments in water infrastructure, transparency promotion, environmental effectiveness, and local issue resolutions. However, challenges remain, such as power distribution imbalances, bureaucratic administration, weak conflict resolution mechanisms, financial constraints, accountability issues, limited attention to social concerns, overreliance on structural solutions, legislative shortcomings, program inflexibility, and uncertainty management weaknesses. Addressing these weaknesses and challenges is crucial for the successful restoration and sustainable governance of UL.
Keywords: evaluation, restoration process, Urmia Lake, water governance, water resource management
Procedia PDF Downloads 69
368 Treating On-Demand Bonds as Cash-In-Hand: Analyzing the Use of “Unconscionability” as a Ground for Challenging Claims for Payment under On-Demand Bonds
Authors: Asanga Gunawansa, Shenella Fonseka
Abstract:
On-demand bonds, also known as unconditional bonds, are commonplace in the construction industry as a means of safeguarding the employer from any potential non-performance by a contractor. On-demand bonds may be obtained from commercial banks, and they serve as an undertaking by the issuing bank to honour payment on demand without questioning and/or considering any dispute between the employer and the contractor in relation to the underlying contract. Thus, whether or not a breach had occurred under the underlying contract, which triggers the demand for encashment by the employer, is not a question the bank needs to be concerned with. As a result, an unconditional bond allows the beneficiary to claim the money almost without any condition. Thus, an unconditional bond is as good as cash-in-hand. In the past, establishing fraud on the part of the employer, of which the bank had knowledge, was the only ground on which a bank could dishonour a claim made under an on-demand bond. However, recent jurisprudence in common law countries shows that courts are beginning to consider unconscionable conduct on the part of the employer in claiming under an on-demand bond as a ground that contractors could rely on to prevent the banks from honouring such claims. This has created uncertainty in connection with on-demand bonds and their liquidity. This paper analyzes recent judicial decisions in four common law jurisdictions, namely, England, Singapore, Hong Kong, and Sri Lanka, to identify the scope of using the concept of “unconscionability” as a ground for preventing unreasonable claims for encashment of on-demand bonds. The objective of this paper is to argue that on-demand bonds have lost their effectiveness as “cash-in-hand” and that this is, in fact, an advantage and not an impediment to international commerce, as the purpose of such bonds should not be to provide for illegal and unconscionable conduct by the beneficiaries.
Keywords: fraud, performance guarantees, on-demand bonds, unconscionability
Procedia PDF Downloads 105
367 Determining Design Parameters for Sizing of Hydronic Heating Systems in Concrete Thermally Activated Building Systems
Authors: Rahmat Ali, Inamullah Khan, Amjad Naseer, Abid A. Shah
Abstract:
Hydronic heating and cooling systems in concrete slab based buildings are increasingly becoming a popular substitute for conventional heating and cooling systems. In exploring the materials and techniques employed, and their relative performance measures, a fair bit of uncertainty exists. This research has identified the simplest method of determining the thermal field of a single hydronic pipe acting as part of a concrete slab, based on which the spacing and positioning of pipes for the best thermal performance and surface temperature control are determined. The pipe material chosen is the commonly used PEX pipe, which has all-around performance and thermal characteristics, with a thermal conductivity of 0.5 W/mK. Concrete test samples were constructed and their thermal fields tested under varying input conditions. Temperature sensing devices were embedded into the wet concrete at fixed distances from the pipe, and other touch-sensing temperature devices were employed to determine the extent of the thermal field and for validation studies. In the first stage, it was found that the temperature at a given distance from the pipe was uniform and that heat dissipation occurred in well-defined layers. The temperature obtained in the concrete was then related to the different control parameters, including water supply temperature. From the results, the water temperature required for a specific temperature rise in the concrete is determined. The thermally effective area is also determined, which is then used to calculate the pipe spacing and positioning for the desired level of thermal comfort.
Keywords: thermally activated building systems, concrete slab temperature, thermal field, energy efficiency, thermal comfort, pipe spacing
Procedia PDF Downloads 339
366 Authentic Engagement for Institutional Leadership: Implications for Educational Policy and Planning
Authors: Simeon Adebayo Oladipo
Abstract:
Institutional administrators are currently facing pressure and challenges in their daily operations. Reasons for this may include the increasing multiplicity, uncertainty and tension that permeate institutional leadership. Authentic engagement for institutional leadership is premised on the ethical foundation that the leaders in the schools are engaged. Institutional effectiveness is dependent on the relationship that exists between leaders and employees in the workplace. A leader’s self-awareness, relational transparency, emotional control, strong moral code and accountability have a positive influence on authentic engagement, which variably determines leadership effectiveness. This study therefore examined the role of authentic engagement in effective school leadership and explored the interrelationship of authentic engagement indices in school leadership. The study adopted a descriptive survey research design, using a quantitative method to gather data through a questionnaire among school leaders in Lagos State tertiary institutions. The population for the study consisted of all Heads of Departments, Deans and Principal Officers in Lagos State tertiary institutions. A sample of 255 Heads of Departments, Deans and Principal Officers participated in the study. The data gathered were analyzed using descriptive and inferential statistical tools. The findings indicated that authentic engagement plays a crucial role in increasing leadership effectiveness amongst Heads of Departments, Deans and Principal Officers. The study recommended, among others, that effective measures are needed to enhance the authentic engagement of institutional leadership practices through relevant educational support systems and effective quality control.
Keywords: authentic engagement, self-awareness, relational transparency, emotional control
Procedia PDF Downloads 70
365 Assessment of Mountain Hydrological Processes in the Gumera Catchment, Ethiopia
Authors: Tewele Gebretsadkan Haile
Abstract:
Mountain terrains are essential to regional water resources, regulating the hydrological processes that supply downstream water. Nevertheless, limited observed earth data in complex topography poses challenges for water resources regulation; satellite products are therefore used in this study. This study evaluates hydrological processes in the mountain catchment of Gumera, Ethiopia, using the HBV-light model with satellite precipitation products (CHIRPS) for the period 1996 to 2010 and a catchment area of 1289 km2. The catchment is dominated by cultivated land, with elevations ranging from 1788 to 3606 m above sea level. Three meteorological stations were used for downscaling of the satellite data, and one streamflow gauge for calibration and validation. The total annual water balance shows 1410 mm of precipitation and 828 mm of simulated surface runoff, compared with 1042 mm of observed streamflow, with an estimated actual evapotranspiration of 586 mm and a potential evapotranspiration of 1495 mm. Temperatures range from 9°C in winter to 21°C. The catchment contributes 74% of total runoff as quick runoff and 26% as lower groundwater storage, which sustains streamflow during low-flow periods. Model uncertainty was measured using different metrics: the coefficient of determination, model efficiency, efficiency for log(Q) and flow-weighted efficiency were 0.76, 0.74, 0.66 and 0.70, respectively. The results highlight that the HBV model captures the mountain hydrology, and they indicate substantial quick runoff due to the traditional agricultural system and the slope of the topography; adaptation measures for water resource management are recommended.
Keywords: mountain hydrology, CHIRPS, Gumera, HBV model
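The efficiency metrics quoted above can be computed as in the sketch below (Nash-Sutcliffe efficiency, its log-transformed variant and the coefficient of determination); the streamflow series used here are synthetic placeholders rather than Gumera gauge data.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_nse(obs, sim, eps=1e-6):
    """Efficiency computed on log-transformed flows (emphasises low flows)."""
    return nse(np.log(obs + eps), np.log(sim + eps))

def r_squared(obs, sim):
    """Coefficient of determination between observed and simulated flows."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Hypothetical daily streamflow series (m3/s); replace with gauge and model output.
rng = np.random.default_rng(1)
observed = np.abs(rng.gamma(2.0, 10.0, 365))
simulated = observed * rng.normal(1.0, 0.15, 365)

print("R2     :", round(r_squared(observed, simulated), 3))
print("NSE    :", round(nse(observed, simulated), 3))
print("logNSE :", round(log_nse(observed, simulated), 3))
```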
Procedia PDF Downloads 16
364 Quantification of the Gumera Catchment’s Mountain Hydrological Processes in Ethiopia
Authors: Tewele Gebretsadkan Haile
Abstract:
Mountain terrains are essential to regional water resources, regulating the hydrological processes that supply downstream water. Nevertheless, limited observed earth data in complex topography poses challenges for water resources regulation; satellite products are therefore used in this study. This study evaluates hydrological processes in the mountain catchment of Gumera, Ethiopia, using the HBV-light model with satellite precipitation products (CHIRPS) for the period 1996 to 2010 and a catchment area of 1289 km2. The catchment is dominated by cultivated land, with elevations ranging from 1788 to 3606 m above sea level. Three meteorological stations were used for downscaling of the satellite data, and one streamflow gauge for calibration and validation. The total annual water balance shows 1410 mm of precipitation and 828 mm of simulated surface runoff, compared with 1042 mm of observed streamflow, with an estimated actual evapotranspiration of 586 mm and a potential evapotranspiration of 1495 mm. Temperatures range from 9°C in winter to 21°C. The catchment contributes 74% of total runoff as quick runoff and 26% as lower groundwater storage, which sustains streamflow during low-flow periods. Model uncertainty was measured using different metrics: the coefficient of determination, model efficiency, efficiency for log(Q) and flow-weighted efficiency were 0.76, 0.74, 0.66 and 0.70, respectively. The results highlight that the HBV model captures the mountain hydrology, and they indicate substantial quick runoff due to the traditional agricultural system and the slope of the topography; adaptation measures for water resource management are recommended.
Keywords: mountain hydrology, CHIRPS, HBV model, Gumera
Procedia PDF Downloads 14
363 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and will sometimes be conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to enable fast change detection and to deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors whose outputs need to be combined, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
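A minimal sketch of the cumulative sum (CUSUM) step is shown below for a Gaussian mean shift, where the statistic accumulates the log-likelihood ratio and its expected post-change drift equals the KL divergence between the two models; the data, threshold and single-sensor setting are illustrative assumptions, not the evidence-theory fusion scheme of the paper.

```python
import numpy as np

def cusum_gaussian(x, mu0, mu1, sigma, threshold):
    """Page's CUSUM test for a mean shift mu0 -> mu1 in Gaussian data.
    The statistic accumulates the per-sample log-likelihood ratio; its expected
    growth rate after the change equals the KL divergence between the models."""
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)  # per-sample LLR
    w, alarms = 0.0, []
    for t, l in enumerate(llr):
        w = max(0.0, w + l)          # reflected cumulative sum
        if w > threshold:
            alarms.append(t)
            w = 0.0                  # restart after declaring a change
    return alarms

# Hypothetical sensor stream: mean shifts from 0 to 1 at sample 500
rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(1, 1, 300)])
print("alarm indices:", cusum_gaussian(data, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0))
```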
Procedia PDF Downloads 335
362 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid
Authors: S. Agheb, G. Ledwich, G.Walker, Z.Tong
Abstract:
Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia renewable energy sources into the grid and their volatility. Also, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric Vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. There is not enough time to communicate with each individual EV in emergency cases, and thus an aggregate model is necessary for a quick response to prevent large frequency deviations and the occurrence of blackouts. In this work, an aggregate of EVs is modelled as a large virtual battery in each area, considering various sources of uncertainty, such as the number of connected EVs and their initial State of Charge (SOC), as stochastic variables. A control law was proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on the charging/discharging control of available EVs as shunt capacity in the distribution system. Three different cases were studied to consider the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs can help the generators shed their kinetic energy in a short time after a contingency. Earlier estimation of the possible contributions of EVs can help the supervisory control level to transmit a prompt control signal to the subsystems, such as the aggregator agents and the grid. Thus, the percentage of EV contribution to EFR will be characterized in future work as the goal of this study.
Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function
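The sketch below is a deliberately simplified single-area swing-equation model, not the two-area Lyapunov design of the paper; it only illustrates how an aggregate EV power injection proportional to the frequency deviation limits that deviation after a sudden loss of generation. All parameter values are assumed.

```python
import numpy as np

# Simplified single-area swing equation with aggregated EV support:
#   2H * d(df)/dt = -dP_load - D*df + P_ev
# The EV control discharges the virtual battery in proportion to the frequency
# deviation, which drives the kinetic-energy-like term H*df^2 back down.
H, D = 5.0, 1.0             # inertia constant (s) and load damping (p.u.)
k_ev = 8.0                  # droop-like gain of the EV aggregate (p.u.)
p_ev_max = 0.1              # aggregate EV power limit (p.u.)
dt, t_end = 0.01, 20.0

df = 0.0                    # frequency deviation (p.u.)
dp_load = 0.15              # sudden loss of generation / load step at t = 0
history = []
for _ in range(int(t_end / dt)):
    p_ev = float(np.clip(-k_ev * df, -p_ev_max, p_ev_max))  # EV charge/discharge
    df += (-dp_load - D * df + p_ev) / (2.0 * H) * dt
    history.append(df)

print("frequency nadir (p.u.):", round(min(history), 4))
print("final deviation (p.u.):", round(history[-1], 4))
```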
Procedia PDF Downloads 100
361 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters become increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on Climate Change and Water, changes in the large-scale hydrological cycle have been related to an increase in observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: climate change, downscaling, GCM, RCM
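A toy version of the statistical downscaling transfer function is sketched below: a least-squares regression linking standardised GCM predictors to a station-scale predictand, calibrated on one part of the record and validated on the rest. The predictors and precipitation series are synthetic placeholders.

```python
import numpy as np

# Synthetic large-scale predictors (e.g., standardised GCM fields) and a
# station-scale predictand (e.g., monthly precipitation in mm).
rng = np.random.default_rng(0)
n_months = 360
predictors = rng.normal(size=(n_months, 3))
true_coef = np.array([12.0, -5.0, 3.0])
station_precip = 80 + predictors @ true_coef + rng.normal(0, 10, n_months)

split = 240                                              # calibration / validation split
X_cal, X_val = predictors[:split], predictors[split:]
y_cal, y_val = station_precip[:split], station_precip[split:]

# Least-squares fit of the transfer function with an intercept column
A_cal = np.column_stack([np.ones(len(X_cal)), X_cal])
coef, *_ = np.linalg.lstsq(A_cal, y_cal, rcond=None)

# Apply the fitted relationship to the validation period
A_val = np.column_stack([np.ones(len(X_val)), X_val])
pred = A_val @ coef
rmse = np.sqrt(np.mean((pred - y_val) ** 2))
print("fitted coefficients:", np.round(coef, 2))
print("validation RMSE    :", round(rmse, 2))
```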
Procedia PDF Downloads 408
360 Privatization and Ensuring Accountability in the Provision of Essential Services: A Case of Water in South Africa
Authors: Odufu Ifakachukwu Clifford
Abstract:
Developing country governments are struggling to meet the basic needs and demands of citizens, especially the rural poor. With tightly constrained budgets, these governments have followed the lead of developed countries that have sought to restructure public service delivery through privatization, contracting out, public-private partnerships, and similar reforms. Such reforms in service delivery are generally welcomed when it is believed that private sector partners are better equipped to provide certain services than are governments. With respect to basic and essential services, however, a higher degree of uncertainty and apprehension exists as the focus shifts from simply minimizing the costs of delivering services to broadening access to all citizens. The constitution stipulates that everyone has the right to have access to sufficient food and water. Affordable and/or subsidized water, then, is not a privilege but a basic right of all citizens. Citizens elect political representatives to serve in office, with the sole mandate of providing for the needs of the citizenry. As governments pass on some amount of responsibility for service delivery to private businesses, these governments must be able to exercise control in order to account to the people for the work done by private partners. This paper examines the legislative and policy frameworks as well as the environment within which PPPs take place in South Africa, and the extent to which accountability can be strengthened in this environment. Within the aforementioned backdrop of PPPs and accountability, the narrower focus of the paper is to assess the extent to which the provision of clean and safe consumable water in South Africa is sustainable, cost-effective in terms of provision, and affordable to all.
Keywords: privatisation, accountability, essential services, government
Procedia PDF Downloads 69
359 Investigating the Feasibility of Promoting Safety in Civil Projects by BIM System Using Fuzzy Logic
Authors: Mohammad Reza Zamanian
Abstract:
The construction industry has always been recognized as one of the most dangerous industries, and the statistics of accidents and injuries resulting from it show that safety needs more attention and the introduction of up-to-date technologies. Building information modeling (BIM) is one of the relatively new and applicable technologies in Iran, and the necessity of using it is increasingly evident. The main purposes of this research are to evaluate the feasibility of using this technology in the safety sector of construction projects and to evaluate the effectiveness and operationality of its various applications in this sector. These applications were collected and categorized after reviewing past studies; a questionnaire based on Delphi method criteria was then presented to 30 experts who were thoroughly familiar with modeling software and safety guidelines. After receiving the answers and exporting them to SPSS software, the validity and reliability of the questionnaire were assessed to evaluate the measuring tool. Fuzzy logic is a good way to analyze such data because of its flexibility in dealing with ambiguity and uncertainty, and implementing the Delphi method in a fuzzy environment overcomes the uncertainties in decision making. Therefore, this method was used for data analysis, and the results indicate the usefulness and effectiveness of BIM in projects and the improvement of safety status at different stages of construction. Finally, the applications and the sections discussed were ranked in order of priority for efficiency and effectiveness. Safety planning is considered the most influential of the four BIM safety areas discussed, and planning for the installation of protective fences and barriers to prevent falls, together with site layout planning with a safety approach based on a 3D model, are the most important of the 18 applications for improving the safety of construction projects.
Keywords: building information modeling, safety of construction projects, Delphi method, fuzzy logic
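The sketch below illustrates one common fuzzy-Delphi style aggregation, mapping linguistic expert ratings to triangular fuzzy numbers and defuzzifying the aggregate; the rating scale, the example ratings and the 0.7 acceptance threshold are assumptions for illustration, not the exact procedure of the study.

```python
import numpy as np

# Linguistic ratings mapped to triangular fuzzy numbers (l, m, u)
scale = {
    "very low":  (0.00, 0.00, 0.25),
    "low":       (0.00, 0.25, 0.50),
    "medium":    (0.25, 0.50, 0.75),
    "high":      (0.50, 0.75, 1.00),
    "very high": (0.75, 1.00, 1.00),
}

def aggregate(ratings):
    """Aggregate expert ratings: min of lower bounds, mean of modes, max of uppers."""
    tfn = np.array([scale[r] for r in ratings])
    return tfn[:, 0].min(), tfn[:, 1].mean(), tfn[:, 2].max()

def defuzzify(l, m, u):
    """Centroid defuzzification of a triangular fuzzy number."""
    return (l + m + u) / 3.0

# Hypothetical ratings for one BIM safety application (e.g., safety planning)
ratings = ["high", "very high", "high", "medium", "very high"]
l, m, u = aggregate(ratings)
score = defuzzify(l, m, u)
print("aggregated TFN:", (round(l, 2), round(m, 2), round(u, 2)), "score:", round(score, 2))
# Applications whose defuzzified score exceeds an acceptance threshold
# (commonly around 0.7) would be retained and ranked.
```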
Procedia PDF Downloads 169
358 Comparing Field Displacement History with Numerical Results to Estimate Geotechnical Parameters: Case Study of Arash-Esfandiar-Niayesh under Passing Tunnel, 2.5 Traffic Lane Tunnel, Tehran, Iran
Authors: A. Golshani, M. Gharizade Varnusefaderani, S. Majidian
Abstract:
Underground structures are among those structures with uncertainty in design procedures, owing to the complexity of the surrounding soil conditions. Underpassing tunnels are similarly affected structures. Despite geotechnical site investigations, many uncertainties remain in soil properties due to unknown events. As a result, settlements obtained from numerical analysis may conflict with the values recorded in the project. This paper reports a case study of a specific underpassing tunnel constructed by the New Austrian Tunnelling Method in Iran. The tunnel has an overburden of about 11.3 m, a height of 12.2 m and a width of 14.4 m, with 2.5 traffic lanes. The numerical modeling was developed with a 2D finite element program (PLAXIS Version 8). Comparing displacement histories at the ground surface during the entire installation of the initial lining, the estimated surface settlement was about four times the value recorded in the field, which indicates that some unknown local events affected that value. The displacement ratios also differed greatly between the numerical and field data. Consequently, by running several numerical back-analyses using laboratory and field test data, the geotechnical parameters were revised to match the obtained monitoring data. Finally, it was found that the values of soil parameters are usually conservatively underestimated by up to 40 percent through typical engineering judgment. Additionally, the discrepancy could be attributed to inappropriate constitutive models applied for the specific soil condition.
Keywords: NATM, surface displacement history, numerical back-analysis, geotechnical parameters
Procedia PDF Downloads 194
357 Current Methods for Drug Property Prediction in the Real World
Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh
Abstract:
Predicting drug properties is key in drug discovery to enable the de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear to practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time, and therefore cost, of applying these methods in the drug development decision-making cycle. We observe that the optimal approach varies depending on the dataset and that engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning
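As a small example of the classical-methods-plus-uncertainty point made above, the sketch below fits a Gaussian Process regressor on engineered descriptors and reports both accuracy and per-compound predictive uncertainty; the descriptors and endpoint are synthetic stand-ins, not one of the benchmarked datasets.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in for engineered molecular descriptors and a QSAR endpoint
# (e.g., pIC50); real workflows would compute descriptors or fingerprints instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # 8 engineered descriptors
y = 1.5 * X[:, 0] - X[:, 3] + rng.normal(0, 0.3, 200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# A Gaussian Process gives both a prediction and an uncertainty estimate,
# supporting the decision-making use case discussed in the abstract.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)
mean, std = gp.predict(X_test, return_std=True)

rmse = np.sqrt(np.mean((mean - y_test) ** 2))
print("test RMSE           :", round(rmse, 3))
print("mean predictive std :", round(std.mean(), 3))
```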
Procedia PDF Downloads 83
356 Time Driven Activity Based Costing Capability to Improve Logistics Performance: Application in Manufacturing Context
Authors: Siham Rahoui, Amr Mahfouz, Amr Arisha
Abstract:
In a highly competitive environment characterised by uncertainty and disruptions, such as the recent COVID-19 outbreak, supply chains (SC) face the challenge of keeping their costs at minimum levels while continuing to provide customers with high-quality products and services. More importantly, businesses in such an economic context strive to survive by keeping the cost of the activities they undertake (such as logistics) low and in-house. To do so, managers need to understand the costs associated with different products and services in order to have a clear vision of SC performance, maintain profitability levels, and make strategic decisions. In this context, the SC literature has explored different costing models that seek to determine the costs of undertaking supply chain-related activities. While some cost accounting techniques have been extensively explored in the SC context, more contributions are needed to explore the potential of time-driven activity-based costing (TDABC). More specifically, more applications are needed in the manufacturing context of the SC, where the debate is ongoing. The aim of the study is to assess the capability of the technique to evaluate the operational performance of the logistics function. Through a case study methodology applied to a manufacturing company operating in the automotive industry, TDABC is used to evaluate the efficiency of the current configuration and its logistics processes. The study shows that monitoring process efficiency and cost efficiency leads to strategic decisions that contribute to improving the overall efficiency of the logistics processes.
Keywords: efficiency, operational performance, supply chain costing, time driven activity based costing
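A minimal TDABC sketch is given below: a capacity cost rate (cost of capacity supplied divided by practical capacity) is combined with per-activity time estimates to assign costs and expose unused capacity. All figures are illustrative, not taken from the case company.

```python
# Minimal time-driven activity-based costing (TDABC) sketch for a logistics
# department; all figures are illustrative placeholders.

# Step 1: capacity cost rate = cost of capacity supplied / practical capacity
monthly_cost = 120_000.0                           # salaries, equipment, space
practical_capacity_min = 4 * 20 * 8 * 60 * 0.8     # 4 staff, 20 days, 80% usable time
cost_per_minute = monthly_cost / practical_capacity_min

# Step 2: time estimates for the main logistics activities (minutes per order)
activities = {
    "receive_goods": 12.0,
    "pick_and_pack": 18.0,
    "ship_order": 9.0,
}

orders = 700                                       # monthly order volume
used_minutes = orders * sum(activities.values())
cost_assigned = used_minutes * cost_per_minute

print(f"cost per minute : {cost_per_minute:.2f}")
print(f"assigned cost   : {cost_assigned:,.0f}")
print(f"unused capacity : {practical_capacity_min - used_minutes:,.0f} minutes")
```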
Procedia PDF Downloads 168
355 The Shannon Entropy and Multifractional Markets
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
Introduced by Shannon in 1948 in the field of information theory as the average rate at which information is produced by a stochastic source of data, the concept of entropy has gained much attention as a measure of the uncertainty and unpredictability associated with a dynamical system, eventually depicted by a stochastic process. In particular, the Shannon entropy measures the degree of order/disorder of a given signal and provides useful information about the underlying dynamical process. It has found widespread application in a variety of fields, such as, for example, cryptography, statistical physics and finance. In this regard, many contributions have employed different measures of entropy in an attempt to characterize financial time series in terms of market efficiency, market crashes and/or financial crises. The Shannon entropy has also been considered as a measure of the risk of a portfolio or as a tool in asset pricing. This work investigates the theoretical link between the Shannon entropy and the multifractional Brownian motion (mBm), a stochastic process which has recently been the focus of renewed interest in finance as a driving model of stochastic volatility. In particular, after exploring the current state of research in this area and highlighting some of the key results and open questions that remain, we show a well-defined relationship between the Shannon (log)entropy and the memory function H(t) of the mBm. In detail, we allow both the length of the time series and the time scale to change over the analysis, to study how the relation modifies itself. On the one hand, applications are developed after generating surrogates of mBm trajectories based on different memory functions; on the other hand, an empirical analysis of several international stock indexes, which confirms the previous results, concludes the work.
Keywords: Shannon entropy, multifractional Brownian motion, Hurst–Holder exponent, stock indexes
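The sketch below shows a simple histogram estimate of the Shannon entropy applied over a sliding window of log-returns, the kind of rolling disorder measure that could be tracked along an mBm surrogate or a stock-index series; the price path here is a synthetic placeholder.

```python
import numpy as np

def shannon_entropy(x, bins=50):
    """Histogram estimate of the Shannon entropy (in nats) of a sample."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def rolling_entropy(series, window=250, bins=30):
    """Entropy of log-returns over a sliding window, so that changes in
    disorder can be tracked through time."""
    returns = np.diff(np.log(series))
    return np.array([shannon_entropy(returns[i:i + window], bins)
                     for i in range(len(returns) - window)])

# Synthetic price-like path standing in for an index or an mBm surrogate
rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
ent = rolling_entropy(prices)
print("entropy range:", round(ent.min(), 3), "to", round(ent.max(), 3))
```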
Procedia PDF Downloads 111
354 Carbohydrate-Based Recommendations as a Basis for Dietary Guidelines
Authors: A. E. Buyken, D. J. Mela, P. Dussort, I. T. Johnson, I. A. Macdonald, A. Piekarz, J. D. Stowell, F. Brouns
Abstract:
Recently, a number of renewed dietary guidelines have been published by various health authorities. The aims of the present work were 1) to review the processes (systematic approach/review, inclusion of public consultation) and methodological approaches used to identify and select the evidence base underpinning the established recommendations for total carbohydrate (CHO), fiber and sugar consumption, and 2) to examine how differences in the methods and processes applied may have influenced the final recommendations. A search of WHO, US, Canadian, Australian and European sources identified 13 authoritative dietary guidelines with the desired detailed information. Each of these guidelines was evaluated for its scientific basis (types and grading of the evidence) and the processes by which the guidelines were developed. Based on the data retrieved, the following conclusions can be drawn: 1) Generally, a relatively high total CHO and fiber intake and a limited intake of sugars (added or free) is recommended. 2) Even where recommendations are quite similar, the specific justifications for quantitative/qualitative recommendations differ across authorities. 3) Differences appear to be due to inconsistencies in the underlying definitions of CHO exposure and in the concurrent appraisal of CHO-providing foods and nutrients, as well as the choice and number of health outcomes selected for the evidence appraisal. 4) Differences in the selected articles, time frames or data aggregation methods appeared to be of rather minor influence. From this assessment, the main recommendations are for: 1) more explicit quantitative justifications for numerical guidelines and communication of uncertainty; and 2) greater international harmonization, particularly with regard to the underlying definitions of exposures and the range of relevant nutrition-related outcomes.
Keywords: carbohydrates, dietary fibres, dietary guidelines, recommendations, sugars
Procedia PDF Downloads 258