Search results for: strict uncertainty
264 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e. individual anomalies) or as clusters (i.e. a colony of corrosion anomalies). Although the ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and associated sizing algorithms, and the detection threshold of the tools (i.e. the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature, and this error is investigated in the present study. Limitations in the ILI tool and clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the largest contributors to the relatively high uncertainty associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify the Type I and Type II anomalies based on the ILI-reported corrosion anomaly information. Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
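For illustration of the kind of error quantification described in the abstract above, the minimal Python sketch below summarizes ILI-versus-field length errors separately for anomalies with and without clustering error. The paired lengths and the Type I/Type II flags are invented for illustration; they are not the Alberta pipeline data set, and the simple linear fit is only one possible error model.

```python
import numpy as np

# Hypothetical paired measurements (mm): ILI-reported vs. field-measured anomaly length.
# Values are invented for illustration; they are not the Alberta pipeline data.
ili_length   = np.array([32.0, 41.5, 55.0, 120.0, 23.0, 310.0, 48.0, 260.0])
field_length = np.array([30.0, 43.0, 52.0,  60.0, 24.0, 150.0, 50.0, 140.0])
# Assumed flag: True where the ILI cluster did not match the field cluster (Type II).
clustering_error = np.array([False, False, False, True, False, True, False, True])

def error_stats(ili, field):
    """Additive measurement error e = ILI - field: bias, scatter and a simple linear fit."""
    e = ili - field
    slope, intercept = np.polyfit(field, ili, 1)   # ILI ~ intercept + slope * field
    return {"bias": e.mean(), "std": e.std(ddof=1),
            "slope": slope, "intercept": intercept}

for label, mask in [("Type I (no clustering error)", ~clustering_error),
                    ("Type II (clustering error)", clustering_error)]:
    print(label, error_stats(ili_length[mask], field_length[mask]))
```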
Procedia PDF Downloads 309
263 Standard Essential Patents for Artificial Intelligence Hardware and the Implications for Intellectual Property Rights
Authors: Wendy de Gomez
Abstract:
Standardization is a critical element in the ability of a society to reduce uncertainty, subjectivity, misrepresentation, and interpretation while simultaneously contributing to innovation. Technological standardization is critical to codify specific operationalization through legal instruments that provide rules of development, expectation, and use. In the current emerging technology landscape, Artificial Intelligence (AI) hardware as a general-use technology has seen incredible growth, as evidenced by AI technology patents between 2012 and 2018 in the United States Patent and Trademark Office (USPTO) AI dataset. However, as outlined in the 2023 United States Government National Standards Strategy for Critical and Emerging Technology, the codification through standardization of emerging technologies such as AI has not kept pace with their actual technological proliferation. This gap has the potential to cause significantly divergent downstream outcomes for AI in both the short and long term. This original empirical research provides an overview of the standardization efforts around AI in different geographies and provides a background to standardization law. It quantifies the longitudinal trend of Artificial Intelligence hardware patents through the USPTO AI dataset. It seeks evidence of existing Standard Essential Patents among these AI hardware patents through a text analysis of the Statement of patent history and the Field of the invention of these patents in Patent Vector, and examines their determination as Standard Essential Patents and their inclusion in existing AI technology standards across the four main AI standards bodies: the European Telecommunications Standards Institute (ETSI); the International Telecommunication Union Telecommunication Standardization Sector (ITU-T); the Institute of Electrical and Electronics Engineers (IEEE); and the International Organization for Standardization (ISO). Once the analysis is complete, the paper will discuss both the theoretical and operational implications of F/Rand licensing agreements for the owners of these Standard Essential Patents in the United States court and administrative system. It will conclude with an evaluation of how Standard Setting Organizations (SSOs) can work with SEP owners more effectively through various forms of intellectual property mechanisms such as patent pools. Keywords: patents, artificial intelligence, standards, F/Rand agreements
Procedia PDF Downloads 87
262 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level
Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown
Abstract:
‘Big data’ is a relatively new concept that describes data so large and complex that it exceeds the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smartphone health applications. Health data is viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may reduce the capacity of health data to be incorporated into the real-time demands of the big data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds into initiatives designed to develop and capitalize on big data and methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides essential record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations. Keywords: data integration, data linkage, health planning, health services research
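The abstract above mentions privacy-preserving record linkage that avoids exchanging names. One widely used approach to this problem, shown in the hedged sketch below, encodes name bigrams into Bloom-filter bit positions and compares records with a Dice coefficient; this is a common technique in the literature, not necessarily the exact method used by the Centre for Data Linkage, and the names and parameters are illustrative assumptions.

```python
import hashlib

def bloom_encode(name: str, size: int = 128, num_hashes: int = 4) -> set[int]:
    """Encode a name's character bigrams into Bloom-filter bit positions."""
    padded = f"_{name.lower()}_"
    bigrams = {padded[i:i + 2] for i in range(len(padded) - 1)}
    bits = set()
    for gram in bigrams:
        for seed in range(num_hashes):
            digest = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % size)
    return bits

def dice_similarity(a: set[int], b: set[int]) -> float:
    """Dice coefficient between two encoded records (1.0 = identical bit patterns)."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

# Two jurisdictions exchange only the bit positions, never the raw names.
rec_a = bloom_encode("Katherine Smith")
rec_b = bloom_encode("Catherine Smyth")
print(f"similarity = {dice_similarity(rec_a, rec_b):.2f}")  # high score -> candidate match
```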
Procedia PDF Downloads 216
261 Non-Time and Non-Sense: Temporalities of Addiction for Heroin Users in Scotland
Authors: Laura Roe
Abstract:
This study draws on twelve months of ethnographic fieldwork conducted in 2017 with heroin and poly-substance users in Scotland and explores experiences of time and temporality as factors in continuing drug use. The research largely took place over the year in which drug-related deaths in Scotland reached a record high, and were statistically recorded as the highest in Europe. This qualitative research is therefore significant in understanding both evolving patterns of drug use and the experiential lifeworlds of those who use heroin and other substances in high doses. Methodologies included participant observation, structured and semi-structured interviews, and unstructured conversations with twenty-two regular participants. The fieldwork was conducted in two needle exchanges, a community recovery group and in the community. The initial aim of the study was to assess evolving patterns of drug preferences in order to explore a clinical and user-reported rise in the use of novel psychoactive substances (NPS), which are typically considered to be highly potent, synthetic substances, often available at a low cost. It was found, however, that while most research participants had experimented with NPS with varying intensity, those who used every day regularly consumed heroin, methadone, and alcohol with benzodiazepines such as diazepam or anticonvulsants such as gabapentin. The research found that many participants deliberately pursued the non-fatal effects of overdose, aiming to induce states of dissociation, detachment and uneven consciousness, and did so by both mixing substances and experimenting with novel modes of consumption. Temporality was significant in the decision to consume cocktails of substances, as users described wishing to sever themselves from time; entering into states of ‘non-time’ and insensibility through specific modes of intoxication. Time and temporality similarly impacted other aspects of addicted life. Periods of attempted abstinence witnessed a slowing of time’s passage that was tied to affective states of boredom and melancholy, in addition to a disruptive return of distressing and difficult memories. Abject past memories frequently dominated and disrupted the present, which otherwise could be highly immersive due to the time and energy-consuming nature of seeking drugs while in financial difficulty. There was furthermore a discordance between individual user temporalities and the strict time-based regimes of recovery services and institutional bodies, and the study aims to highlight the impact of such a disjuncture on the efficacy of treatment programs. Many participants had difficulty in adhering to set appointments or temporal frameworks due to their specific temporal situatedness. Overall, exploring increasing tendencies of heroin users in Scotland towards poly-substance use, this study draws on experiences and perceptions of time, analysing how temporality comes to bear on the ways drugs are sought and consumed, and how recovery is imagined and enacted. The study attempts to outline the experiential, intimate and subjective worlds of heroin and poly-substance users while explicating the structural and historical factors that shape them. Keywords: addiction, poly-substance use, temporality, timelessness
Procedia PDF Downloads 118
260 What Happens When We Try to Bridge the Science-Practice Gap? An Example from the Brazilian Native Vegetation Protection Law
Authors: Alice Brites, Gerd Sparovek, Jean Paul Metzger, Ricardo Rodrigues
Abstract:
The segregation between science and policy in the decision-making process hinders nature conservation efforts worldwide. Scientists have been criticized for not producing information that leads to effective solutions for environmental problems. In an attempt to bridge this gap between science and practice, we conducted a project aimed at supporting the implementation of the Brazilian Native Vegetation Protection Law (NVPL) in São Paulo State (SP), Brazil. To do so, we conducted multiple open meetings with the stakeholders involved in this discussion. Throughout this process, we raised stakeholders' demands for scientific information and brought feedback about our findings. However, our main scientific advice was not taken into account during the NVPL implementation in SP. The NVPL has a mechanism that exempts landholders who converted native vegetation without offending the legislation in place at the time of the conversion from restoration requirements. We found out that there were no accurate spatialized data for native vegetation cover before the 1960s. Thus, the initial benchmark for the mechanism application should be the 1965 Brazilian Forest Act. Even so, SP kept the 1934 Brazilian Forest Act as the initial legal benchmark for the law application. This decision implies the use of a probabilistic native vegetation map that has uncertainty and subjectivity as its intrinsic characteristics; thus, its use can lead to legal queries, corruption, and unfair application of the benefit. But why was this decision made even after the scientific advice was widely disseminated? We raised some possible reasons to explain it. First, the decision was made during a government transition, showing that circumstantial political events can overshadow scientific arguments. Second, the debate about the NVPL in SP was not pacified, and powerful stakeholders could benefit from the confusion created by this decision. Finally, the native vegetation protection mechanism is a complex issue, with many technical aspects that can be hard to understand for a non-specialized courtroom, such as the one that made the final decision in SP. This example shows that science and decision-makers still have a long way to go in improving how they interact, and that science needs to find its way to be heard above the political buzz. Keywords: Brazil, forest act, science-based dialogue, science-policy interface
Procedia PDF Downloads 122
259 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The PUSPATI TRIGA Reactor (RTP) in Malaysia reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in the RTP. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and a minimum settling time to reach steady-state power. At present, the system's power tracking performance could be considered not well-posed. However, there is still potential to improve current performance by developing a next-generation, novel design of nuclear core power control. In this paper, the dual-mode prediction proposed in Optimal Model Predictive Control (OMPC) is presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on neutronic models, thermal hydraulic models, and reactivity models. The dual-mode prediction in OMPC for the transient and terminal modes was based on the implementation of a Linear Quadratic Regulator (LQR) in designing the core power control. The combination of dual-mode prediction and a Lyapunov approach, which deals with the summation of the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses related to MPC. This paper shows the behaviour of OMPC in dealing with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and OMPC are compared by numerical simulations. In conclusion, the proposed OMPC has shown significant performance in load tracking and regulating core power for the nuclear reactor, with guaranteed stability in the closed loop. Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control
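In dual-mode predictive control of the kind described above, the terminal mode is typically an infinite-horizon LQR law whose gain also guarantees closed-loop stability. The sketch below computes such a gain for a toy two-state discrete model; the matrices are invented placeholders, not the RTP neutronic/thermal-hydraulic model from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time model x[k+1] = A x[k] + B u[k]; the states might stand for
# normalized core power and a lumped internal state. Matrices are illustrative only.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0],
              [0.1]])
Q = np.diag([10.0, 1.0])   # penalize power-tracking error more than the internal state
R = np.array([[0.5]])      # penalize control rod movement

# Terminal-mode controller of the dual-mode scheme: infinite-horizon LQR gain.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K x beyond the prediction horizon

# Closed-loop check: all eigenvalues inside the unit circle => guaranteed stability.
eig = np.linalg.eigvals(A - B @ K)
print("LQR gain K:", K, "closed-loop |eig|:", np.abs(eig))
```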
Procedia PDF Downloads 162
258 Family Medicine Residents in End-of-Life Care
Authors: Goldie Lynn Diaz, Ma. Teresa Tricia G. Bautista, Elisabeth Engeljakob, Mary Glaze Rosal
Abstract:
Introduction: Residents are expected to convey unfavorable news, discuss prognoses, relieve suffering, and address do-not-resuscitate orders, yet some report a lack of competence in providing this type of care. Recognizing this need, Family Medicine residency programs are incorporating end-of-life care, from symptom and pain control to counseling and humanistic qualities, as core proficiencies in training. Objective: This study determined the competency of Family Medicine residents from various institutions in Metro Manila in rendering care for the dying. Materials and Methods: Trainees completed a Palliative Care Evaluation tool to assess their degree of confidence in patient and family interactions, patient management, and attitudes towards hospice care. Results: Remarkably, only a small fraction of participants were confident in performing independent management of terminal delirium and dyspnea. Fewer than 30% of residents could do the following without supervision: discussing medication effects and patient wishes after death, coping with pain, vomiting and constipation, and reacting to limited patient decision-making capacity. Half of the respondents had confidence in supporting the patient or family member when they become upset. The majority expressed confidence in many end-of-life care skills if supervision, coaching, and consultation were provided. Most trainees believed that pain medication should be given as needed to terminally ill patients. There was also uncertainty as to the most appropriate person to make end-of-life decisions. These attitudes may be influenced by personal beliefs rooted in cultural upbringing as well as by personal experiences with death in the family, which may also affect their participation and confidence in caring for the dying. Conclusion: Enhancing the quality and quantity of end-of-life care experiences during residency, with sufficient supervision and role modeling, may lead to knowledge and skill improvement and ensure quality of care. Fostering bedside learning opportunities during residency is an appropriate venue for teaching interventions in end-of-life care education. Keywords: end of life care, geriatrics, palliative care, residency training skill
Procedia PDF Downloads 257
257 Dynamic Network Approach to Air Traffic Management
Authors: Catia S. A. Sima, K. Bousson
Abstract:
Congestion in the Terminal Maneuvering Areas (TMAs) of larger airports impacts all aspects of air traffic flow, not only at the national level but may also induce arrival delays at the international level. Hence, there is a need to monitor the air traffic flow in TMAs appropriately so that efficient decisions may be taken to manage their occupancy rates. It would be desirable to physically increase the existing airspace to accommodate all existing demands, but this is entirely utopian; consequently, several studies and analyses have been developed over the past decades to meet the challenges that have arisen due to the dizzying expansion of the aeronautical industry. The main objective of the present paper is to propose concepts to manage and reduce the degree of uncertainty in air traffic operations, maximizing the interests of all involved, ensuring a balance between demand and supply, and developing and/or adapting resources that enable a rapid and effective adaptation of measures to the current context and the consequent changes perceived in the aeronautical industry. A central task is to increase air traffic flow management capacity, taking into account not only a wide range of methodologies but also equipment and/or tools already available in the aeronautical industry. The efficient use of these resources is crucial, as the human capacity for work is limited and the actors involved in all processes related to air traffic flow management are increasingly overloaded; as a result, operational safety could be compromised. The methodology used to answer and/or develop the issues listed above is based on the advantages of applying Markov chain principles, which enable the construction of a simplified dynamic network model that describes air traffic flow behavior, anticipating its changes and the eventual measures that could better address the impact of increased demand. Through this model, the proposed concepts are shown to have potential to optimize air traffic flow management, combined with the operation of the existing resources at each moment and the circumstances found in each TMA, using historical data from air traffic operations and specificities found in the aeronautical industry, namely in the Portuguese context. Keywords: air traffic flow, terminal maneuvering area, TMA, air traffic management, ATM, Markov chains
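A Markov-chain description of TMA occupancy of the kind outlined above can be sketched as follows. The occupancy states, transition counts, and forecast horizon below are invented assumptions for illustration; in the paper the transition probabilities would be estimated from historical air traffic data.

```python
import numpy as np

# Hypothetical TMA occupancy states: 0 = low, 1 = medium, 2 = high congestion.
# counts[i, j] = number of observed transitions from state i to state j (invented numbers).
counts = np.array([[80, 15,  5],
                   [20, 60, 20],
                   [ 5, 25, 70]], dtype=float)

P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

# Stationary distribution pi solves pi P = pi: left eigenvector for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()
print("long-run share of time in each congestion state:", pi.round(3))

# Short-term forecast: distribution three time steps ahead, starting from 'medium'.
forecast = np.array([0.0, 1.0, 0.0]) @ np.linalg.matrix_power(P, 3)
print("3-step forecast from medium:", forecast.round(3))
```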
Procedia PDF Downloads 133
256 Fuzzy Decision Making to the Construction Project Management: Glass Facade Selection
Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic
Abstract:
In this study, the fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets was capable of reflecting construction managers’ reliability level over subjective judgments, and thus the robustness of the system could be achieved. An α-cuts method was utilized for discretizing the fuzzy sets in the FLA. This method can communicate all uncertain information in the optimization process, taking into account the values of this information. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios related to various economic aspects of valid decision making in construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing the materials of glass facades, variants were defined. The development of the FLA for CPM included relevant construction project stakeholders, who were involved in defining the criteria used to evaluate each variant. Using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method, a comparison of the glass facade variants was conducted. In this way, a ranking of the variants according to their priority for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia. The newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions when it comes to material selection for building facades, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology compared to the old one is that it includes the subjective side of managers’ decisions as an inevitable factor in any decision making. The proposed approach can help construction project managers identify the desired type of glass facade according to their preferences and practical conditions, as well as facilitate in-depth analyses of trade-offs between economic efficiency and architectural design. Keywords: construction projects management, DEMATEL, fuzzy logic approach, glass façade selection
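The core DEMATEL calculation behind the ranking described above can be sketched on already-defuzzified influence scores: normalize the direct-relation matrix and compute the total-relation matrix T = D(I − D)⁻¹, then rank criteria by prominence. The fuzzy/α-cut step is omitted here, and the criteria names and influence values are invented for illustration, not the Rijeka case-study data.

```python
import numpy as np

# Hypothetical defuzzified direct-influence matrix among four facade criteria
# (cost, thermal performance, aesthetics, maintainability); 0-4 scores are invented.
X = np.array([[0, 3, 2, 1],
              [2, 0, 1, 3],
              [1, 2, 0, 2],
              [1, 3, 2, 0]], dtype=float)

D = X / max(X.sum(axis=1).max(), X.sum(axis=0).max())   # normalized direct-relation matrix
T = D @ np.linalg.inv(np.eye(len(X)) - D)               # total-relation matrix T = D(I - D)^-1

r = T.sum(axis=1)          # influence given by each criterion
c = T.sum(axis=0)          # influence received by each criterion
prominence = r + c         # overall importance
relation = r - c           # cause (+) vs. effect (-) group

for i, name in enumerate(["cost", "thermal", "aesthetics", "maintainability"]):
    print(f"{name:15s} prominence={prominence[i]:.2f}  relation={relation[i]:+.2f}")
```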
Procedia PDF Downloads 137
255 Migrant Women’s Rights “with Chinese Characteristics”: The State of Migrant Women in the People’s Republic of China
Authors: Leigha C. Crout
Abstract:
This paper will investigate the categorical disregard of the People’s Republic of China (PRC) for establishing and maintaining a baseline standard of civil guarantees for economic migrant women and their dependents. In light of the relative forward strides in terms of policy facilitating the ascension of female workers in China, this oft-invisible subgroup of women remains excluded from the modern-day “iron rice bowl” of the self-identified communist state. This study is being undertaken to rectify the absence of data on this subject and provide a baseline for future studies on the matter, as the human rights of migrants have become an established facet of transnational dialogue and debate. The basic methodology of this research will consist of the evaluation of China’s compliance with its own national guidelines and the eight international human rights law treaties it has ratified. Data will be extracted and cross-checked from a number of relevant sources to monitor the extent of compliance, including but by no means limited to the United Nations Human Rights Council (UNHRC) Universal Periodic Review (UPR) reports and responses, submissions and responses of international human rights treaty bodies, local and international nongovernmental organizations (NGOs) and their annual reports, and articles and commentaries authored by specialists on the modern state and implementation of Chinese law. Together, these data will illuminate the vast network of compliance that has forced many migrant women to work within situations of extreme economic precarity. The structure will proceed as follows: first, an outline of the current status of migrant workers and the enforcement of stipulated protections will be provided; next, an analysis of the oft-debated regulations governing, and an outline of the mandatory services guaranteed to, external and internal migrants; and finally, a conclusion incorporating various recommendations to improve transparency and gradually decrease the amount of migrant work turned forced labor that typifies the economic migrant experience, especially in the case of women. Internal and international migrant workers in China are bound by different and non-complementary systems. The first, which governs Chinese citizens moving to different regions or provinces to find more sustainable employment (internal migrants), is called the hukou (or huji) residency system. This law enforces strict regulation of the movement of peoples, while ensuring that residents of urban areas receive preferential benefits compared to those received by their so-called 'agricultural' resident counterparts. Given the overwhelming presence of the Communist Party of China throughout the vast state, the management of internal migrants and the disregard for foreign domestic workers are, at minimum, a surprising oversight. This paper endeavors to provide a much-needed foundation for future commentary and discussion on the treatment of female migrant workers and their families in the People’s Republic of China. Keywords: female migrant workers’ rights, the People’s Republic of China, forced labor, Hukou residency system
Procedia PDF Downloads 146
254 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste
Authors: David Holton, Michelle Dickinson, Giovanni Carta
Abstract:
The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel and Pu) present a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of a buffer surrounding a container at the centre of a fixed array of heat-generating sources arises from heat diffusing from neighbouring heat-generating wastes, incrementally contributing to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangements or patterns of those containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies to manage the disposal of high-heat-generating waste. A number of examples, including different geometrical layouts and chequer-boarding, are illustrated to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options when faced with uncertainty at a generic stage of the development of a GDF. Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel
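To illustrate why container spacing and chequer-boarding influence the peak buffer temperature, the toy sketch below superposes steady-state point sources in an infinite medium. This is deliberately much cruder than the semi-analytical dimensioning tool in the paper: it ignores heat-output decay, finite geometry and real buffer/host-rock properties, and the heat output and conductivity values are assumed for illustration only.

```python
import numpy as np

# Steady-state point-source superposition: temperature rise Q / (4*pi*k*r) summed over
# neighbours in a container array. Assumed values, not data from the paper.
Q = 600.0      # assumed heat output per container, W
k = 2.0        # assumed effective thermal conductivity, W/(m K)

def peak_rise(pitch: float, skip_alternate: bool) -> float:
    """Temperature rise at the central container from a 7x7 array of neighbours."""
    rise = 0.0
    for ix in range(-3, 4):
        for iy in range(-3, 4):
            if ix == 0 and iy == 0:
                continue                       # exclude the container itself
            if skip_alternate and (ix + iy) % 2:
                continue                       # chequer-board: every other position empty
            r = pitch * np.hypot(ix, iy)
            rise += Q / (4.0 * np.pi * k * r)
    return rise

for pitch in (5.0, 8.0):
    print(f"pitch {pitch} m: full array +{peak_rise(pitch, False):.1f} K, "
          f"chequer-board +{peak_rise(pitch, True):.1f} K")
```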
Procedia PDF Downloads 285
253 Covid-19 Associated Stress and Coping Strategies
Authors: Bar Shapira-Youngster, Sima Amram-Vaknin, Yuliya Lipshits-Braziler
Abstract:
The study examined how 811 Israelis experienced and coped with the COVID-19 lockdown. Stress, uncertainty, and loss of control were reported as common emotional experiences. Two main difficulties were reported: loneliness and health and emotional concerns. Frequent explanations for the virus's emergence were scientific or faith-based reasoning. The most prevalent coping strategies were distraction activities and acceptance. Reducing the use of maladaptive coping strategies has important implications for mental health outcomes. Objectives: COVID-19 has been recognized as a collective, continuous traumatic stressor. The present study examined how individuals experienced, perceived, and coped with this traumatic event during the lockdown in Israel in April 2020. Method: 811 Israelis (71.3% were women; mean age 43.7, SD=13.3) completed an online semi-structured questionnaire consisting of two sections: in the first section, participants were asked to report background information; in the second section, they were asked to answer 8 open-ended questions about their experience, perception, and coping with the COVID-19 lockdown. Participation was voluntary, anonymity was assured, and participants were not offered compensation of any kind. The data were subjected to qualitative content analysis, which seeks to classify the participants’ answers into an effective number of categories that represent similar meanings. Our content analysis of participants’ answers extended far beyond simple word counts; our objective was to try to identify recurrent categories that characterized participants’ responses to each question. We sought to ensure that the categories regarding the different questions were as mutually exclusive and exhaustive as possible. To ensure robust analysis, the data were initially analyzed by the first author, and a second opinion was then sought from research colleagues. Contribution: The present research expands our knowledge of individuals' experiences, perceptions, and coping mechanisms in continuous traumatic events. Reducing the use of maladaptive coping strategies has important implications for mental health outcomes. Keywords: Covid-19, emotional distress, coping, continuous traumatic event
Procedia PDF Downloads 130
252 Identifying Factors of Wellbeing in Russian Orphans
Authors: Alexandra Telitsyna, Galina Semya, Elvira Garifulina
Abstract:
Introduction: Starting in 2012, Russia has pursued a deinstitutionalization policy, and the main indicator of success is now the number of children living in institutions. The active family placement process has meant that institution residents now consist mainly of adolescents with behavioral and emotional problems, children with disabilities, and groups of siblings. Purpose of the research: The purpose of the research is to identify factors for a child's wellbeing during a temporary stay in an orphanage, together with the children's subjective assessment of their level of well-being (psychological well-being). Methods: The data used for this project were collected with a questionnaire of 72 indicators, a tool for monitoring the behavior of children and caregivers, an additional questionnaire for children, and a well-being assessment questionnaire containing 10 scales for three age groups from preschool to older adolescents. In 2016-2018, the research was conducted in 1,873 institutions in 85 regions of Russia. In each region, a team of academics, specialists from non-profits, and independent experts was created. Training was conducted for team members through a series of webinars prior to undertaking the assessment. Results: To ensure the well-being of the children, the following conditions are necessary: 1- Life of children in the institution is organised according to the principles of family care (including the creation of conditions for attachment to be formed); 2- Contribution to finding family-based placements for children (including reintegration into the primary family); 3- Work with the parents of children who are placed in an organization at the request of their parents; 4- Children attend schools according to their needs; 5- Training of staff and volunteers; 6- A special environment and services for children with special needs and children with disabilities; 7- Cooperation with NGOs; 8- Openness and accessibility of the organization. Conclusion: A study of the psychological well-being of children showed that questions about the presence and frequency of contact with relatives were the most emotionally stressful for children, and that the level of well-being is higher in the presence of a trusted adult and respect for rights. The greatest contributors to distress are the time the child has spent in the orphanage, the lack of contact with parents and relatives, and the uncertainty of the future. Keywords: identifying factors, orphans, Russia, wellbeing
Procedia PDF Downloads 128
251 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights
Authors: Olga Kokoulina
Abstract:
Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise to provide increased economic efficiency and fuel solutions for pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited avenues to mitigate the impact of potentially unfair decision-making practices is a so-called 'right to explanation'. In essence, the overall right is derived from the provisions of the General Data Protection Regulation (‘GDPR’) ensuring the right of data subjects to access and mandating the obligation of data controllers to provide the relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the specific provision on automated decision-making in the GDPR, the debates mainly focus on the efficacy and the exact scope of the 'right to explanation'. In essence, the underlying logic of the argued remedy lies in a transparency imperative. Allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and, often, heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims at pushing the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed on the transparency requirement and right to access by intellectual property law, namely by copyrights and trade secrets. It is asserted that trade secrets, in particular, present an often-insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of the protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions from such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets as well as the introduction of a strict liability regime in case of non-transparent decision-making. Keywords: algorithms, public interest, trade secrets, transparency
Procedia PDF Downloads 124
250 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, and simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study intends to adopt Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and pos-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established corpus (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. More specifically, word-form entropy and pos-form entropy will be calculated as indicators of lexical and syntactical complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy compared to non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety. Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
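The word-form entropy measure described above is the Shannon entropy of the token frequency distribution, H = -Σ pᵢ log₂ pᵢ. A minimal sketch follows; the two mini-corpora are invented toy inputs (the study uses ~200,000-token press corpora), and pos-form entropy would be computed the same way after replacing tokens with their POS tags (tagger omitted here).

```python
import math
from collections import Counter

def shannon_entropy(tokens: list[str]) -> float:
    """Shannon entropy (bits) of the token-form distribution: H = -sum p_i * log2 p_i."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Invented mini-corpora for illustration only.
native     = "the court ruled swiftly yet cautiously on the contested appeal".split()
translated = "the court ruled on the appeal and the court ruled quickly".split()

print(f"word-form entropy, native:     {shannon_entropy(native):.3f} bits")
print(f"word-form entropy, translated: {shannon_entropy(translated):.3f} bits")
# A lower value indicates fewer, more repetitive lexical choices -- the simplification signal.
```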
Procedia PDF Downloads 93
249 Climate Related Financial Risk on Automobile Industry and the Impact to the Financial Institutions
Authors: Mahalakshmi Vivekanandan S.
Abstract:
As per the recent changes happening in global policies, climate-related changes and the impacts they cause across every sector are viewed as green swan events – in essence, climate-related changes can often happen and lead to risk and a lot of uncertainty, but they need to be mitigated rather than being treated as black swan events. This raises the question of how this risk can be computed so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types – credit risk, market risk, operational risk, liquidity risk, reputational risk and other risk types. The models required to compute this have to consider the different industrial needs of the counterparty, as well as the factors contributing to it – be it in the form of different risk drivers, different transmission channels, different approaches, or the granularity of data availability. This suggests that climate-related changes, though they affect Pillar I risks, will be a Pillar II risk. This has to be modeled specifically based on the financial institution's actual exposure to different industries instead of generalizing the risk charge. And this will have to be considered as additional capital to be held by the financial institution, in addition to its Pillar I risks as well as its existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks - for both credit and market risks. This framework helps in assessing the different scenarios and how the different transition risks affect the risk associated with the different parties. This research paper delves into the increase in the concentration of greenhouse gases that in turn causes global warming. It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering the transition risk. The paper then focuses on an industry that is fast seeing disruption: the automobile industry. The paper uses the framework to show how climate changes and changes to the relevant policies have impacted the entire financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also focuses on the climate risk calculation for the Pillar II capital calculations and how it will make sense for the bank to maintain this in addition to its regular Pillar I and Pillar II capital. Keywords: capital calculation, climate risk, credit risk, pillar ii risk, scenario modeling
Procedia PDF Downloads 140
248 Tonal Pitch Structure as a Tool of Social Consolidation
Authors: Piotr Podlipniak
Abstract:
Social consolidation has often been indicated as an adaptive function of music which led to the evolution of the music faculty. According to many scholars, this function is possible thanks to musical rhythm, which enables sensorimotor synchronization to a musical beat. The ability to synchronize to music allows performing music collectively, which enhances social cohesion. However, the collective performance of music also involves spectral synchronization, which depends on musical pitch structure. Similarly to rhythmic synchronization, spectral synchronization is a result of ‘brain states alignment’ between people who collectively listen to or perform music. In order to successfully synchronize pitches, performers have to adequately anticipate the pitch structure. The most common form of music, which predominates among all human societies, is tonal music. In fact, tonality, understood in the broadest sense as an organization of musical pitches in which some pitch is more important than others, is the only kind of musical pitch structure that has been observed in all currently known musical cultures. The perception of such a musical pitch structure elicits specific emotional reactions which are often described as tensions and relaxations. These facts provoke some important questions. What is the evolutionary reason that people use pitch structure as a form of vocal communication? Why do different pitch structures elicit different emotional states independent of extra-musical context? It is proposed in the current presentation that in the course of evolution pitch structure became a human-specific tool of communication, the function of which is to induce emotional states such as uncertainty and cohesion. By means of eliciting these emotions during collective music performance, people are able to unconsciously give cues concerning social acceptance. This is probably one of the reasons why people in all cultures collectively perform tonal music. It is also suggested that tonal pitch structure had been invented socially before it became an evolutionary innovation of Homo sapiens. This means that a predisposition to tonally organize pitches evolved by means of the ‘Baldwin effect’ – a process in which natural selection transforms the learned response of an organism into an instinctive response. A hypothetical evolutionary scenario of the emergence of tonal pitch structure will be proposed. In this scenario, social forces such as a need for closer cooperation play the crucial role. Keywords: emotion, evolution, tonality, social consolidation
Procedia PDF Downloads 323
247 Comprehensive Multilevel Practical Condition Monitoring Guidelines for Power Cables in Industries: Case Study of Mobarakeh Steel Company in Iran
Authors: S. Mani, M. Kafil, E. Asadi
Abstract:
Condition Monitoring (CM) of electrical equipment has gained remarkable importance in recent years due to huge production losses, substantial imposed costs, and increases in vulnerability, risk, and uncertainty levels. Power cables feed numerous electrical equipment such as transformers, motors, and electric furnaces; thus their condition assessment is of very great importance. This paper investigates electrical, structural and environmental failure sources, all of which influence cables' performance and limit their uptime, and provides a comprehensive framework entailing practical CM guidelines for the maintenance of cables in industries. The multilevel CM framework presented in this study covers the performance-indicative features of power cables, with a focus on both online and offline diagnosis and test scenarios, and covers short-term and long-term threats to the operation and longevity of power cables. After concisely overviewing the concept of CM, the study thoroughly investigates five major areas: power quality; the insulation quality features of partial discharges, tan delta and voltage withstand capability; sheath faults; shield currents; and the environmental features of temperature and humidity. It elaborates the interconnections and mutual impacts between those areas using mathematical formulation and practical guidelines. Detection, location, and severity identification methods for every threat or fault source are also elaborated. Finally, the comprehensive, practical guidelines developed in the study are presented for the specific case of Electric Arc Furnace (EAF) feeder MV power cables in Mobarakeh Steel Company (MSC), the largest steel company in the MENA region, in Iran. The specific technical and industrial characteristics and limitations of a harsh industrial environment like the MSC EAF feeder cable tunnels are imposed on the presented framework, making the suggested package more practical and tangible. Keywords: condition monitoring, diagnostics, insulation, maintenance, partial discharge, power cables, power quality
Procedia PDF Downloads 228
246 Explosive Clad Metals for Geothermal Energy Recovery
Authors: Heather Mroz
Abstract:
Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3) and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers and pressure vessels, towards higher alloys of stainless steel, nickel-based alloys and titanium. The use of these alloys is cost-prohibitive and does not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the cladded surface metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining their mechanical, electrical and corrosion properties. The process is commonly used to clad steel with a thin layer of corrosion-resistant alloy metal, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Additionally, explosion welding can join a wider array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs, and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and the tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefit in reduced maintenance and lower capital costs over solid construction. The focus of this paper will be on the many benefits of the use of explosion clad in process equipment instead of more expensive solid alloy construction. It will cover the method of clad-plate production with explosion welding as well as the methods employed to ensure sound bonding of the metals. It will also include the origins of explosion cladding as well as recent technological developments. Traditionally, explosion clad plate was formed into vessels, tube sheets and heads, but recent advances include explosion welded piping. The final portion of the paper will give examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine will be discussed, including stainless steels, nickel alloys and titanium. These examples will include heat exchangers (tube sheets), high pressure and horizontal separators, standard pressure crystallizers, piping and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high temperature, high pressure and corrosive environments are typical. Applications for explosion clad metals include vessel and heat exchanger components as well as piping. Keywords: clad metal, explosion welding, separator material, well casing material, piping material
Procedia PDF Downloads 154
245 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation
Authors: Constantin Z. Leshan
Abstract:
Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and they allow teleportation of matter. All massive bodies emit a flux of holes which curves the spacetime; if we increase the concentration of holes, it leads to length contraction and time dilation because the holes do not have the properties of extension and duration. In the limiting case when space consists of holes only, the distance between every two points is equal to zero and time stops - outside of the Universe, the extension and duration properties do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties only. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of the quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls', so simple mechanical motion is impossible at small-scale distances; it is impossible to 'trace' a straight line in the discontinuous spacetime because it contains the impassable holes. Spacetime 'boils' continually due to the appearance of the vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation). It is shown that Hole Teleportation does not violate causality and special relativity due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship may appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from the vessel without permitting another body to occupy this volume. Keywords: border of the Universe, causality violation, perfect isolation, quantum jumps
Procedia PDF Downloads 425
244 Comparison of Rainfall Trends in the Western Ghats and Coastal Region of Karnataka, India
Authors: Vinay C. Doranalu, Amba Shetty
Abstract:
In recent years, due to climate change, there is large variation in the spatial distribution of daily rainfall even within a small region. Rainfall is one of the main climatic variables which affect spatio-temporal patterns of water availability. The real task posed by the change in climate is the identification, estimation and understanding of the uncertainty of rainfall. This study analyzes the spatial variations and temporal trends of daily precipitation using high resolution (0.25º x 0.25º) gridded data of the India Meteorological Department (IMD). For the study, 38 grid points were selected in the study area and analyzed for daily precipitation time series (113 years) over the period 1901-2013. Grid points were divided into two zones based on elevation and location: Low Land (exposed to the sea and low-elevation area / coastal region) and High Land (interior from the sea and high-elevation area / Western Ghats). The time series at each grid point were examined for spatial patterns and temporal trends using the non-parametric Mann-Kendall test and the Theil-Sen estimator to determine the nature of the trend and the magnitude of its slope. The Pettitt-Mann-Whitney test was applied to detect the most probable change point in the trends over the period. Results revealed a remarkable monotonic trend in daily precipitation at each grid point. In general, the regional cluster analysis found an increasing precipitation trend in the shoreline region and a decreasing trend in the Western Ghats in recent years. The spatial distribution of rainfall can be partly explained by heterogeneity in the temporal trends of rainfall revealed by change point analysis. The Mann-Kendall test shows significant variation, with weaker rainfall in the rainfall distribution over the eastern parts of the Western Ghats region of Karnataka. Keywords: change point analysis, coastal region India, gridded rainfall data, non-parametric
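The two trend statistics named in the abstract above can be implemented compactly, as the sketch below shows for one grid point. The annual rainfall totals are invented illustrative values (the study uses 113-year IMD series), and tie correction and the Pettitt change-point test are omitted for brevity.

```python
import numpy as np

def mann_kendall(x: np.ndarray) -> tuple[float, float]:
    """Mann-Kendall S statistic and normal-approximation Z (no tie correction)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z

def theil_sen_slope(x: np.ndarray) -> float:
    """Median of all pairwise slopes: robust estimate of trend magnitude."""
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

# Invented annual rainfall totals (mm) for a single grid point, for illustration only.
rain = np.array([3100, 3250, 2980, 3300, 3420, 3150, 3500, 3610, 3330, 3700], dtype=float)
s, z = mann_kendall(rain)
print(f"S = {s:.0f}, Z = {z:.2f}  (|Z| > 1.96 -> significant trend at the 5% level)")
print(f"Theil-Sen slope = {theil_sen_slope(rain):.1f} mm/year")
```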
Procedia PDF Downloads 294
243 The Impact of Corruption on Exports and Innovation in Small and Medium-Sized Enterprises: The Case of Tunisia
Authors: Moujib Bahri, Rahim Kallel, Ouafa Sakka
Abstract:
Corruption increases uncertainty and risk for SMEs because it undermines the quality of the business environment and easy access to public services. Our research builds on existing work on corruption's effects on economic growth at the firm level. Several papers have analyzed the effect of firms’ bribe payments on their performance; however, only limited research has investigated the link between corruption, innovation, and exports. Drawing on principal-agent theory, we explore how corruption weakens the institutional context and makes the business environment unsound and unconducive to innovation and exports. This study employs data from the Enterprise Surveys conducted in Tunisia between March 2013 and July 2014 by the World Bank, the European Bank for Reconstruction and Development (EBRD), and the European Investment Bank (EIB). The main objective of this survey was to gain a better understanding of Tunisian firms’ perception of the environment in which they operate. Since 2011, the country's political situation has become fragile and unstable, and public services are perceived as inefficient and corrupt. We test our hypotheses on a sample of 537 Tunisian manufacturing SMEs using structural equation modeling and path analysis. We find that political instability leads to higher levels of corruption and that excessive business licensing regulations create fertile ground for bribery. Our findings do not support the greasing hypothesis, which suggests that corruption can reduce the negative effect of bureaucratic delays and of companies' difficult access to public services related to innovation and exports. Instead, our results support the sanding hypothesis, according to which corruption hinders innovation activities and exports. Furthermore, corruption is found to negatively and significantly affect firms’ ownership of quality certificates. Our results suggest that, in an environment with a high level of corruption, governments and policymakers interested in assisting SMEs with their innovation and export activities should exert better control over corruption so that firms can develop those activities without being forced to bribe government officers.
Keywords: corruption, innovation, exports, SMEs
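The study itself fits a structural equation model; purely as a rough, non-authoritative illustration of the hypothesized paths, the sketch below estimates them as separate OLS regressions. The file name and every variable name (political_instability, licensing_burden, bribery, innovation, exports, firm_size) are hypothetical stand-ins, not fields from the Enterprise Surveys.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level dataset; column names are illustrative placeholders.
df = pd.read_csv("tunisia_enterprise_survey.csv")

# Path 1: political instability and licensing burden as drivers of bribery.
m_bribery = smf.ols("bribery ~ political_instability + licensing_burden", data=df).fit()

# Paths 2 and 3: does bribery "sand" or "grease" innovation and exports?
m_innov = smf.ols("innovation ~ bribery + firm_size", data=df).fit()
m_export = smf.ols("exports ~ bribery + firm_size", data=df).fit()

# Negative, significant bribery coefficients here would be consistent with the
# sanding hypothesis reported in the abstract.
for name, model in [("innovation", m_innov), ("exports", m_export)]:
    print(name, round(model.params["bribery"], 3), round(model.pvalues["bribery"], 3))
```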
Procedia PDF Downloads 180
242 Addressing Food Grain Losses in India: Energy Trade-Offs and Nutrition Synergies
Authors: Matthew F. Gibson, Narasimha D. Rao, Raphael B. Slade, Joana Portugal Pereira, Joeri Rogelj
Abstract:
Globally, India’s population is among the most severely impacted by nutrient deficiency, yet millions of tonnes of food are lost before reaching consumers. Across food groups, grains represent the largest share of daily calories and of overall losses by mass in India. If current losses remain unresolved and follow projected population growth, we estimate that, by 2030, losses from grains for human consumption could increase by 1.3-1.8 million tonnes (Mt) per year against current levels of ~10 Mt per year. This study quantifies the energy input needed to minimise storage losses across India, which are responsible for a quarter of grain supply-chain losses. In doing so, we identify and explore a Sustainable Development Goal (SDG) triplet between SDG 2, SDG 7, and SDG 12 and provide insight for the development of joined-up agriculture and health policy in the country. Analyzing rice, wheat, maize, bajra, and sorghum, we quantify one route to reducing losses in supply chains by modelling the energy input required to maintain favorable climatic conditions in modern silo storage. We quantify the key nutrients (calories, protein, zinc, iron, vitamin A) contained within these losses and calculate roughly how much deficiency in these dietary components could be reduced if grain losses were eliminated. Our modelling indicates, with appropriate uncertainty, that maize has the highest energy input intensity for storage, at 110 kWh per tonne of grain (kWh/t), and wheat the lowest (72 kWh/t). This energy trade-off represents 8%-16% of the energy input required in grain production. We estimate that if grain losses across the supply chain were recovered and targeted to India’s nutritionally deficient population, average protein deficiency could be reduced by 46%, calorie deficiency by 27%, zinc by 26%, and iron by 11%. This study offers insight for the development of Indian agriculture, food, and health policy by first quantifying and then presenting the benefits and trade-offs of tackling food grain losses.
Keywords: energy, food loss, grain storage, hunger, India, sustainable development goal, SDG
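A back-of-envelope sketch of the quantities discussed above, using only figures quoted in the abstract (the ~10 Mt annual loss, the 1.3-1.8 Mt projected increase, and the 72-110 kWh/t storage energy intensities); the one-million-tonne silo scenario is an illustrative assumption, not a study result.

```python
# Figures quoted in the abstract.
current_losses_mt = 10.0                      # ~10 Mt of grain lost per year
extra_losses_2030_mt = (1.3, 1.8)             # projected additional losses by 2030
energy_intensity_kwh_per_t = {"wheat": 72, "maize": 110}   # silo storage energy input

# Relative growth in losses if nothing changes.
low, high = (x / current_losses_mt for x in extra_losses_2030_mt)
print(f"losses grow by {low:.0%}-{high:.0%} by 2030")

# Energy needed to keep a hypothetical 1 Mt of each grain in conditioned silo storage.
for grain, kwh_per_t in energy_intensity_kwh_per_t.items():
    gwh = 1_000_000 * kwh_per_t / 1_000_000   # tonnes * kWh/t, converted to GWh
    print(f"{grain}: about {gwh:.0f} GWh per Mt stored per year")
```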
Procedia PDF Downloads 129
241 Fostering Organizational Learning across the Canadian Sport System through Leadership and Mentorship Development of Sport Science Leaders
Authors: Jennifer Walinga, Samantha Heron
Abstract:
The goal of the study was to inform the design of effective leadership and mentorship development programming for sport science leaders within the network of Canadian sport institutes and centers. The LEAD (Learn, Engage, Accelerate, Develop) program was implemented to equip sport science leaders with the leadership knowledge, skills, and practice to foster a high-performance culture, enhance the daily training environment, and contribute to optimal performance in sport. After two years of delivery, this analysis of LEAD’s effect on individual and organizational health and performance factors informs the quality of future deliveries and identifies best practice for leadership development across the Canadian sport system and beyond. A larger goal for this project was to inform the public sector more broadly and position sport as a source of best practice for human and social health, development, and performance. The objectives of this study were to review and refine the LEAD program in collaboration with Canadian Sport Institute and Centre leaders, 40-50 participants from three cohorts, and the LEAD program advisory committee, and to trace the effects of the LEAD leadership development program on key leadership, mentorship, and organizational health indicators across the Canadian sport institutes and centers so as to capture best practice. The study followed a participatory action research (PAR) framework, using semi-structured interviews with sport scientist participants and program and institute leaders to inquire into the impact on specific individual and organizational health and performance factors. Findings included a strong increase in self-reported leadership knowledge, skill, language, and confidence, enhancement of human and organizational health factors, and the opportunity to explore more deeply issues of diversity and inclusion, psychological safety, team dynamics, and performance management. The study was significant in building sport leadership and mentorship development strategies for managing change efforts, addressing inequalities, and building personal and operational resilience amidst challenges of uncertainty, pressure, and constraint in real time.
Keywords: sport leadership, sport science leader, leadership development, professional development, sport education, mentorship
Procedia PDF Downloads 23
240 Transformation of Antitrust Policy against Collusion in Russia and Transition Economies
Authors: Andrey Makarov
Abstract:
This article focuses on the development of antitrust policy in transition economies in the context of preventing explicit and tacit collusion. The experience of BRICS, CIS (Ukraine, Kazakhstan), and CEE countries (Bulgaria, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia, the Czech Republic, Estonia) in creating antitrust institutions was analyzed, covering both legislation and enforcement practice. In the early 1990s, most of these countries were forced to develop completely new legislation in the field of competition protection, and it is important to compare the different ways of building antitrust institutions and the resulting policy outcomes. The article proposes a special approach to evaluating mechanisms for preventing collusion. This approach takes into account enforcement problems such as: classification problems (tacit vs explicit collusion, vertical vs horizontal agreements); the flexibility of prohibitions (the balance between 'per se' and 'rule of reason' approaches de jure and in practice); the design of sanctions; the challenge of private enforcement; leniency program mechanisms; the role of antitrust authorities, etc. The analysis is conducted using both official data published by competition authorities and expert assessments. The paper shows how the integration process within the EU predetermined some aspects of the development of antitrust policy in CEE countries, including the trend toward using the 'rule of reason' approach. The experience of CEE countries with special mechanisms of government intervention was analyzed as well. CIS countries followed more or less original paths in developing antitrust policy, without such a strong impact from the European Union; particular attention is given to the Russian experience in this field, including an analysis of judicial decisions in antitrust cases. The main problems and challenges for transition economies in this field are shown, including: the problem of legal uncertainty; the rigidity of prohibitions; the enforcement priorities of the regulator; the interaction of administrative and criminal law and the limited effectiveness of criminal sanctions in the antitrust field; the effectiveness of leniency program design; and the challenge of private enforcement.
Keywords: collusion, antitrust policy, leniency program, transition economies, Russia, CEE
Procedia PDF Downloads 446
239 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Although automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for the automatic calibration of a hydrologic model was developed in the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters (initial loss, reduction factor, time of concentration, and time-lag) were considered as the primary parameter set. Using these parameters, automatic calibration was performed with Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff prediction, so the associated uncertainty in the predictions can be obtained. In contrast, MIKE URBAN provides only a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform
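The framework above was built in the R platform around a time-area model run in MIKE URBAN; purely to illustrate the rejection-ABC idea it relies on, the sketch below calibrates an initial loss, a reduction factor, and a single lag (a crude stand-in for the time of concentration and time-lag parameters) for a toy loss-and-shift runoff model in Python. The model, priors, tolerance, and synthetic data are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def runoff_model(rain, initial_loss, reduction_factor, lag):
    """Toy loss-and-shift runoff model (not MIKE URBAN's time-area routing)."""
    effective = np.clip(rain - initial_loss, 0.0, None) * reduction_factor
    routed = np.zeros_like(effective)
    routed[lag:] = effective[: len(effective) - lag]   # crude routing: delay by 'lag' steps
    return routed

# Synthetic storm and "observed" runoff generated from known parameters plus noise.
rain = rng.gamma(2.0, 3.0, size=48)
observed = runoff_model(rain, 2.0, 0.6, 3) + rng.normal(0.0, 0.2, size=48)

# Rejection ABC: draw parameters from the priors and keep draws whose simulated
# hydrograph is close enough to the observations (RMSE below the tolerance).
accepted = []
for _ in range(20_000):
    theta = (rng.uniform(0.0, 5.0), rng.uniform(0.1, 1.0), int(rng.integers(0, 8)))
    sim = runoff_model(rain, *theta)
    rmse = np.sqrt(np.mean((sim - observed) ** 2))
    if rmse < 0.5:
        accepted.append(theta)

if accepted:
    posterior = np.array(accepted, dtype=float)
    print(len(accepted), "accepted draws; posterior means:", posterior.mean(axis=0).round(2))
```

The payoff, as the abstract notes, is that the accepted draws form a posterior sample from which credible intervals for runoff predictions can be read off, rather than a single point estimate.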
Procedia PDF Downloads 307
238 Exposing The Invisible
Authors: Kimberley Adamek
Abstract:
According to the Council on Tall Buildings, there has been a rapid increase in the construction of tall or “megatall” buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s, the eastward expansion of the USA's infamous Tornado Alley being just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, would have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified in wind tunnels, and a handful of cities such as San Francisco have included wind testing within building code standards, there are still many examples where wind is considered only for basic loading. This typically results in an increase in structural expense and in unwanted mitigation strategies proposed late within a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower, and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (Walkie-Talkie Tower, Leeds, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and created wind-tunnel effects in streets, among many other issues. Alternatively, some towers have used their form to naturally draw in air and ventilate entire facilities, eliminating the need for costly HVAC systems (The Met, Thailand), or to increase wind speeds and generate electricity (Bahrain Tower, Dubai). Wind and weather affect all parts of the world in ways that touch science, health, war, infrastructure, catastrophes, tourism, shopping, media, and materials. Working in partnership with the wind engineering company RWDI, a series of tests, images, and animations documenting the discovered interactions of different building forms with wind will be collected to emphasize the possibilities of wind use to architects. A site within San Francisco (chosen for its increasing tower development, consistent wind conditions, and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested in the wind tunnel and with computational fluid dynamics systems, which will expose, utilize, and manipulate wind flows to create new forms, technologies, and experiences. Ultimately, this thesis aims to question the extent to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers, and the general public the ability to broaden their scope and productively utilize this living phenomenon that everyone constantly feels but cannot see.
Keywords: wind engineering, climate, visualization, architectural aerodynamics
Procedia PDF Downloads 358
237 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or most dissimilar well oil production rates (WOPR) relative to the true values (10% each). The remaining 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models that share the geological trend of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. The newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to find the correct geological features of the true model. History matching with the regenerated ensemble, however, offers reliable characterization results by identifying the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties. We propose a novel classification scheme that integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models that share the reference's channel trend in a lower-dimensional space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
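A minimal sketch of the PCA-MDS-SVM selection loop described above, using scikit-learn; the reservoir realizations, WOPR mismatch values, and ensemble sizes here are random placeholders rather than the single-normal-equation-simulation models used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Placeholder ensemble: 100 channel-reservoir realizations, each a flattened
# log-permeability field, plus a WOPR mismatch against the reference model.
n_models, n_cells = 100, 2500
log_perm = rng.normal(3.0, 1.0, size=(n_models, n_cells))
wopr_error = rng.random(n_models)

# 1) PCA: keep the components with the largest eigenvalues.
scores = PCA(n_components=10).fit_transform(log_perm)

# 2) MDS: project the PCA scores onto a 2-D plane using Euclidean distances.
plane = MDS(n_components=2, dissimilarity="euclidean", random_state=0).fit_transform(scores)

# 3) SVM: train on the 10% most similar (label 1) and 10% most dissimilar (label 0) models.
order = np.argsort(wopr_error)
train_idx = np.concatenate([order[:10], order[-10:]])
labels = np.concatenate([np.ones(10), np.zeros(10)])
clf = SVC(kernel="rbf").fit(plane[train_idx], labels)

# Classify the remaining 80% and keep those predicted to lie on the low-error side;
# their average field would serve as the probability map for regeneration.
rest = order[10:-10]
selected = rest[clf.predict(plane[rest]) == 1]
print(len(selected), "models kept for regeneration")
```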
Procedia PDF Downloads 160
236 Gamification of eHealth Business Cases to Enhance Rich Learning Experience
Authors: Kari Björn
Abstract:
The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of learner age groups. Serious games engage learners in a real-world type of simulation and can enrich the learning experience. The institutional background of a Bachelor’s-level engineering program in Information and Communication Technology is introduced, with a detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, 'eHealth Business and Solutions', is described and reflected on within a gamified framework. Building a consistent view of the vast literature on business management, strategy, marketing, and finance in very limited time forces a selection of topics relevant to the industry. Health Technology is a novel industry with a growing sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by the miniaturization of electronics, sensors, and wireless applications. However, the market is highly consumer-driven, and usability, safety, and data integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure that uses gamification as a tool to learn what is most essential in a new market: customer value proposition design, followed by a market entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and against each other. The market is growing but has its own rules of demand and supply balance. New products can be developed with an R&D investment and targeted to the market with unique quality and price combinations. The product cost structure can be improved by investing in enhanced production capacity, and investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand that customer value creation is the source of cash flow. The benefit of gamification is to enrich the learning experience of the structure and meaning of financial statements. The paper describes the gamification approach and discusses outcomes after two course implementations. Alongside the case description of learning challenges, some unexpected misconceptions are noted, and improvements to the game and the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator and helps to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way.
Keywords: engineering education, integrated curriculum, learning experience, learning outcomes
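To make the decision-cycle mechanics concrete, here is a small, hypothetical sketch of one round of such a market-entry game: teams pick a price and investment levels, a toy demand curve determines sales, and a simple income statement is returned. The demand function, cost figures, and parameter values are invented for illustration and are not taken from the course game.

```python
def income_statement(price, unit_cost, rd_spend, capacity_spend, base_demand=10_000):
    """One decision cycle: toy price-sensitive demand, then a simple income statement."""
    units_sold = max(0, int(base_demand * (1.5 - price / 40.0)))   # invented demand curve
    revenue = units_sold * price
    cost_of_sales = units_sold * unit_cost
    gross_profit = revenue - cost_of_sales
    operating_profit = gross_profit - rd_spend - capacity_spend
    return {"units": units_sold, "revenue": revenue,
            "gross profit": gross_profit, "operating profit": operating_profit}

# Two teams choose different price/quality positions and investment levels for the round.
print(income_statement(price=29, unit_cost=12, rd_spend=40_000, capacity_spend=20_000))
print(income_statement(price=35, unit_cost=15, rd_spend=80_000, capacity_spend=0))
```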
Procedia PDF Downloads 240
235 The Impact of the COVID-19 Pandemic on the Mental Health of Families Dealing with Attention-Deficit Hyperactivity Disorder
Authors: Alexis Winfield, Carly Sugar, Barbara Fenesi
Abstract:
The COVID-19 pandemic uprooted regular routines, forcing many children to learn from home, requiring many adults to work from home, and cutting families off from support outside the home. Public health restrictions associated with the pandemic caused widespread psychological distress, including depression and anxiety, increased fear, panic, and stress. These trends are particularly concerning for families raising neuroatypical children, such as those with Attention-Deficit Hyperactivity Disorder (ADHD), as these children are already more likely than their typically developing peers to experience comorbid mental health issues and to experience greater distress when required to stay indoors. Families with children who have ADHD are also at greater risk of heightened familial stress due to the challenges of managing ADHD behavioural symptoms, greater parental discord and divorce, and greater financial difficulties compared to other families. The current study engaged families that included at least one child diagnosed with ADHD to elucidate 1) the unique ways the COVID-19 pandemic affected their mental health and 2) the specific barriers these families faced in maintaining optimal mental well-being. A total of 33 participants (15 parent-child dyads) engaged in virtual interviews. Content analysis revealed that the most frequently identified mental health effects for families were increased child anxiety and disconnectedness, as well as deteriorating parental mental health. The most frequently identified barriers to maintaining optimal mental well-being were lack of routine, lack of social interaction and social support, and uncertainty and fear. Findings underscore areas of need during times of large-scale social isolation, bring voice to the families of children with ADHD, and contribute to our understanding of the pandemic’s impact on the well-being of vulnerable families. This work contributes to a growing body of research aimed at creating safeguards to support the mental well-being of vulnerable families during times of crisis.
Keywords: attention-deficit hyperactivity disorder, COVID-19, mental health, vulnerable families
Procedia PDF Downloads 289