Search results for: theoretical domains framework
1477 Lexical Semantic Analysis to Support Ontology Modeling of Maintenance Activities – Case Study of Offshore Riser Integrity
Authors: Vahid Ebrahimipour
Abstract:
Word representation and the contextual meaning of text-based documents play an essential role in knowledge modeling. Business procedures written in natural language are meant to store technical and engineering information, management decisions, and operational experience accumulated during the production system life cycle. Context meaning representation is highly dependent upon word sense, lexical relativity, and semantic features of the argument. This paper proposes a method for lexical semantic analysis and context meaning representation of maintenance activities in a mass production system. Our approach constructs a straightforward lexical semantic analysis of the semantic and syntactic features of the context structure of maintenance reports in order to facilitate the translation, interpretation, and conversion of human-readable text into a computer-readable representation with less heterogeneity and ambiguity. The methodology enables users to obtain a representation format that maximizes shareability and accessibility for multi-purpose usage. It provides a contextualized structure to obtain a generic context model that can be utilized during the system life cycle. First, it employs a co-occurrence-based clustering framework to recognize groups of highly frequent contextual features that correspond to a maintenance report text. Then the keywords are identified for syntactic and semantic extraction analysis. The analysis exercises causality-driven logic over the keywords' senses to reveal the structural and meaning dependency relationships between the words in a context. The output is a word-contextualized representation of maintenance activity that accommodates computer-based representation and inference using OWL/RDF.
Keywords: lexical semantic analysis, metadata modeling, contextual meaning extraction, ontology modeling, knowledge representation
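As an illustration of the co-occurrence-based clustering step described in this abstract, the following sketch builds a term-term co-occurrence matrix from a handful of report snippets and groups frequently co-occurring terms. The snippets, the cluster count, and the library choices are illustrative assumptions, not the paper's data or implementation.

```python
# Minimal sketch: co-occurrence-based clustering of maintenance-report terms.
# The report snippets and cluster count below are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import AgglomerativeClustering

reports = [
    "riser clamp corrosion found during inspection, clamp replaced",
    "inspection of riser joint, corrosion treated and coating applied",
    "flange bolt torque checked, bolt replaced after leak detected",
    "leak detected at flange seal, seal replaced and torque verified",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reports)      # document-term count matrix
cooc = (X.T @ X).toarray()                 # term-term co-occurrence matrix
np.fill_diagonal(cooc, 0)                  # ignore self co-occurrence

# Group highly co-occurring terms into candidate contextual-feature clusters.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(cooc)
terms = vectorizer.get_feature_names_out()
for k in range(3):
    print(k, [t for t, l in zip(terms, labels) if l == k])
```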
Procedia PDF Downloads 106
1476 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation
Authors: Min L. Stewart, Patrick Johnston
Abstract:
Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category) in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes of expected visual sequences in high- relative to low-AT, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT. Preliminary frequency analysis data comparing high and low-AT show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.
Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding
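As an illustration of the frequency analysis used in fast periodic visual stimulation designs such as this one, the following sketch extracts the response amplitude at the base presentation rate and at a periodic-change (oddball) rate from a simulated EEG trace. The sampling rate, stimulation frequencies, and signal are illustrative assumptions only.

```python
# Minimal sketch: frequency-tagged response extraction from a simulated EEG trace.
import numpy as np

fs = 500.0                    # sampling rate in Hz (assumed)
base_f, oddball_f = 6.0, 1.2  # face presentation rate and periodic-change rate (assumed)
t = np.arange(0, 60, 1 / fs)  # 60 s of data

# Toy EEG: responses at both tagged frequencies plus broadband noise.
eeg = 2.0 * np.sin(2 * np.pi * base_f * t) + 0.8 * np.sin(2 * np.pi * oddball_f * t)
eeg += np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    # Amplitude at the spectral bin closest to frequency f.
    return spectrum[np.argmin(np.abs(freqs - f))]

print("base response:", amp_at(base_f), "oddball response:", amp_at(oddball_f))
```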
Procedia PDF Downloads 112
1475 Evaluation of Student Satisfaction Level Towards Anadolu University E-Services through E-Government Model and Importance Performance Analysis Method
Authors: Emrah Ayhan, Puspa Saananta Irfani, Ömer Doğukan Şahin
Abstract:
Public services, which are important for the order and continuity of social life, have begun to transform into electronic services (e-services) with the development of information and communication technologies in recent years. In particular, as a result of the widespread use of the internet and the increase in citizen demands, it has become necessary to provide public services electronically. In addition to facilitating traditional public services, new types of e-services strengthen the interaction, cooperation, accessibility, transparency, citizen participation (e-governance) and accountability between citizens and the state. In this context, the factors in the literature that are considered to influence citizens' satisfaction with e-services will be examined through the example of student satisfaction with the e-services (Anasis, Mergen, e-mail, library, cafeteria and other transactions) offered by Anadolu University (Eskişehir, Türkiye) through the university website and mobile application. The data for the analysis will be obtained from a survey measuring user satisfaction with university e-services among 1,000 students studying at 9 different faculties and graduate schools of Anadolu University. These data will be analyzed with a methodology that uses the E-GovQual model and Importance-Performance Analysis (IPA) together. The E-GovQual model serves as a framework for evaluating the quality of e-services, allowing a detailed understanding of students' perceptions. The IPA method, in turn, will be used to determine the performance level of Anadolu University in the provision of e-services and to identify the areas that require improvement and student expectations. Strategic goals and suggestions will be offered to decision-makers, students, and researchers in line with the findings obtained in the research. Thus, the study aims to contribute to e-governance and user satisfaction in educational institutions and to reveal practical implications for optimizing online platforms to better serve student needs.
Keywords: e-service, Anadolu University, student satisfaction, e-governance, e-govqual, importance performance analysis
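As an illustration of the Importance-Performance Analysis step mentioned in this abstract, the following sketch places each e-service attribute into an IPA quadrant by comparing its mean importance and performance ratings with the grand means. The attribute names and scores are illustrative assumptions, not survey results.

```python
# Minimal sketch: Importance-Performance Analysis quadrant assignment.
import numpy as np

attributes = ["ease of use", "reliability", "responsiveness", "content quality"]
importance = np.array([4.6, 4.4, 4.1, 3.8])    # mean student ratings (assumed)
performance = np.array([3.9, 4.5, 3.2, 4.0])   # mean student ratings (assumed)

imp_mean, perf_mean = importance.mean(), performance.mean()
for attr, imp, perf in zip(attributes, importance, performance):
    if imp >= imp_mean and perf < perf_mean:
        quadrant = "concentrate here"
    elif imp >= imp_mean:
        quadrant = "keep up the good work"
    elif perf < perf_mean:
        quadrant = "low priority"
    else:
        quadrant = "possible overkill"
    print(f"{attr}: {quadrant}")
```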
Procedia PDF Downloads 57
1474 Analyzing the Emergence of Conscious Phenomena by the Process-Based Metaphysics
Authors: Chia-Lin Tu
Abstract:
Towards the end of the 20th century, a reductive picture came to dominate the philosophy of science and the philosophy of mind. Reductive physicalism claims that all entities and properties in this world are eventually able to be reduced to the physical level. It means that all phenomena in the world can be explained by the laws of physics. However, quantum physics provides another picture. It says that the world is undergoing change and that the energy of change is, in fact, the most important part in constituting world phenomena. Quantum physics thus provides another point of view from which to reconsider the reality of the world. Throughout the history of philosophy of mind, reductive physicalism has tried to reduce conscious phenomena to physical particles as well, meaning that the reality of consciousness is composed of physical particles. However, reductive physicalism is unable to explain conscious phenomena and mind-body causation. Conscious phenomena, e.g., qualia, are not composed of physical particles. The currently popular theory of consciousness is emergentism. Emergentism remains an ambiguous concept, with no clear account of how conscious phenomena are emergent from physical particles. In order to understand the emergence of conscious phenomena, quantum physics seems an appropriate analogy. Quantum physics claims that physical particles and processes together construct the most fundamental field of world phenomena, and thus all natural processes, i.e., wave functions, occur within it. The traditional space-time description of classical physics is overtaken by the wave-function story. If this methodology of quantum physics works well to explain world phenomena, then it is not necessary to describe the world through the idea of physical particles as classical physics did. Conscious phenomena are one kind of world phenomena. Scientists and philosophers have tried to explain their reality, but no conclusion has been reached. Quantum physics tells us that the fundamental field of the natural world is a process metaphysics. The emergence of conscious phenomena is only possible within this process metaphysics and has clearly occurred. By the framework of quantum physics, we are able to take emergence more seriously, and thus we can account for such emergent phenomena as consciousness. By questioning the particle-mechanistic concept of the world, the new metaphysics offers an opportunity to reconsider the reality of conscious phenomena.
Keywords: quantum physics, reduction, emergence, qualia
Procedia PDF Downloads 165
1473 Optimization of Titanium Leaching Process Using Experimental Design
Authors: Arash Rafiei, Carroll Moore
Abstract:
The leaching process, as the first stage of hydrometallurgy, is a multidisciplinary system involving material properties, chemistry, reactor design, mechanics and fluid dynamics. Therefore, optimizing a leaching system by purely scientific methods requires a great deal of time and expense. In this work, a mixture of two titanium ores and one titanium slag is used to extract titanium in the leaching stage of the TiO2 pigment production procedure. Optimum titanium extraction can be obtained from the following strategies: i) maximizing titanium extraction without selective digestion; and ii) optimizing selective titanium extraction by balancing maximum titanium extraction against minimum impurity digestion. The main difference between the two strategies lies in the process optimization framework. In the first strategy, the most important stage of the production process is treated as the main stage and the remaining stages are adapted with respect to it. The second strategy optimizes the performance of more than one stage at once. The second strategy has more technical complexity compared to the first one, but it brings more economical and technical advantages for the leaching system. Obviously, each strategy has its own optimum operational zone that is not the same as the other, and the best operational zone is chosen in view of the complexity and the economical and practical aspects of the leaching system. Experimental design has been carried out using the Taguchi method. The most important advantages of this methodology are that it involves different technical aspects of the leaching process, minimizes the number of needed experiments as well as time and expense, and accounts for the role of parameter interactions following the principles of multifactor-at-a-time optimization. Leaching tests have been done at batch scale in the lab with appropriate temperature control. The leaching tank geometry has been treated as an important factor in providing comparable agitation conditions. Data analysis has been done using reactor design and mass balancing principles. Finally, the optimum zone for the operational parameters is determined for each leaching strategy and discussed with respect to its economical and practical aspects.
Keywords: titanium leaching, optimization, experimental design, performance analysis
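As an illustration of the Taguchi-style analysis referred to above, the following sketch computes a larger-the-better signal-to-noise ratio for titanium extraction at each factor-level combination of a small orthogonal array. The factors, levels, and extraction values are illustrative assumptions, not the study's data.

```python
# Minimal sketch: larger-the-better S/N ratios for a Taguchi-style leaching test plan.
import numpy as np

# Each entry: replicate titanium extractions (%) for one run of a small array (assumed values).
runs = {
    ("acid conc A1", "temp B1"): [72.0, 70.5],
    ("acid conc A1", "temp B2"): [78.2, 77.1],
    ("acid conc A2", "temp B1"): [74.3, 75.0],
    ("acid conc A2", "temp B2"): [81.5, 80.8],
}

def sn_larger_is_better(y):
    # Taguchi larger-the-better criterion: -10 * log10( mean(1/y^2) ).
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

for levels, extractions in runs.items():
    print(levels, round(sn_larger_is_better(extractions), 2), "dB")
```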
Procedia PDF Downloads 376
1472 Random Vertical Seismic Vibrations of the Long Span Cantilever Beams
Authors: Sergo Esadze
Abstract:
Seismic resistance norms require the calculation of cantilevers for the vertical components of the base seismic acceleration. Long span cantilevers, as a rule, must be calculated as a separate construction element. Depending on the architectural-planning solution, functional purpose, and environmental conditions of the building or structure being designed, a long span cantilever construction may be of very different types: both by the main bearing element (beam, truss, slab) and by material (reinforced concrete, steel). The choice among these is always linked to the bearing construction system of the building. Research on the vertical seismic vibration of these constructions requires an individual approach for each (which is not specified in the norms) in correlation with the model of the seismic load. The latter may be given either as a deterministic load or as a random process. A loading model given as a random process is more adequate for this problem. In the presented paper, two types of long span (from 6 m up to 12 m) reinforced concrete cantilever beams have been considered: a) cantilevers whose bearing elements, i.e., the elements in which they are fixed, have cross-sections with large dimensions, and the cantilevers are made with a haunch; b) cantilever beams with a load-bearing rod element. Calculation models are suggested separately for types a) and b). They are presented as systems with a finite number of degrees of freedom (concentrated masses). The conditions for fixing the ends correspond to these types. Vertical acceleration and the vertical component of the angular acceleration act on the masses. The model is based on the assumption of translational-rotational motion of the building in the vertical plane, caused by the vertical seismic acceleration. Seismic accelerations are considered as random processes and represented by the product of a deterministic envelope function and a stationary random process. The problem is solved within the framework of the correlation theory of random processes. Solved numerical examples are given. The method is effective for solving these specific problems.
Keywords: cantilever, random process, seismic load, vertical acceleration
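As an illustration of the loading model described above, the following sketch represents the vertical seismic acceleration as a deterministic envelope multiplying a stationary random process and integrates the response of a single concentrated-mass (SDOF) idealisation of the cantilever. The natural frequency, damping, and envelope shape are illustrative assumptions.

```python
# Minimal sketch: envelope-modulated random vertical acceleration acting on an SDOF mass.
import numpy as np
from scipy import signal

dt = 0.01
t = np.arange(0, 20, dt)

# Stationary random process (white noise) shaped by a deterministic envelope.
envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)    # rises, then decays (assumed shape)
a_vert = envelope * np.random.randn(t.size)     # vertical acceleration history

# SDOF oscillator: x'' + 2*zeta*w*x' + w^2*x = -a(t)
w = 2 * np.pi * 4.0        # 4 Hz vertical natural frequency (assumed)
zeta = 0.05                # damping ratio (assumed)
sdof = signal.TransferFunction([-1.0], [1.0, 2 * zeta * w, w**2])

_, x, _ = signal.lsim(sdof, U=a_vert, T=t)
print("peak relative displacement:", np.max(np.abs(x)))
```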
Procedia PDF Downloads 192
1471 Ytterbium Advantages for Brachytherapy
Authors: S. V. Akulinichev, S. A. Chaushansky, V. I. Derzhiev
Abstract:
High dose rate (HDR) brachytherapy is a method of contact radiotherapy in which a single sealed source with an activity of about 10 Ci is temporarily inserted in the tumor area. The isotopes Ir-192 and (much less often) Co-60 are used as active material for such sources. The other type of brachytherapy, low dose rate (LDR) brachytherapy, implies the insertion of many permanent sources (up to 200) of lower activity. Pulse dose rate (PDR) brachytherapy can be considered a modification of HDR brachytherapy, in which the single source is repeatedly introduced into the tumor region in a pulsed regime over several hours. The PDR source activity is of the order of one Ci, and the isotope Ir-192 is currently used for these sources. PDR brachytherapy is well recommended for the treatment of several tumors since, according to oncologists, it combines the medical benefits of both the HDR and LDR types of brachytherapy. One of the main problems for the progress of PDR brachytherapy is the shielding of the treatment area, since the longer stay of patients in a shielded canyon is not comfortable enough for them. The use of Yb-169 as an active source material is a way to resolve the shielding problem for PDR, as well as for HDR, brachytherapy. The isotope Yb-169 has an average photon emission energy of 93 keV and a half-life of 32 days. Compared to iridium and cobalt, this isotope has a significantly lower emission energy and therefore requires much lighter shielding. Moreover, the absorption cross section of different materials has a strong Z-dependence in that photon energy range. For example, the dose distributions of iridium and ytterbium behave quite similarly in water or in the body. But a heavier material such as lead absorbs the ytterbium radiation much more strongly than the iridium or cobalt radiation. For example, only a 2 mm lead layer is enough to reduce the ytterbium radiation by a couple of orders of magnitude but is not enough to protect from iridium radiation. We have created an original facility to produce the starting stable isotope Yb-168 using the AVLIS laser technology. This facility allows the Yb-168 concentration to be raised up to 50% and consumes much less electrical power than the alternative electromagnetic enrichment facilities. We also developed, in cooperation with the Institute of High Pressure Physics of RAS, a new technology for manufacturing high-density ceramic cores of ytterbium oxide. The ceramic density reaches the limit of the theoretical values: 9.1 g/cm3 for the cubic phase of ytterbium oxide and 10 g/cm3 for the monoclinic phase. Source cores made from this ceramic have high mechanical characteristics and a glassy surface. The use of the ceramic allows the source activity to be increased with fixed external dimensions of the sources.
Keywords: brachytherapy, high, pulse dose rates, radionuclides for therapy, ytterbium sources
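As an illustration of the shielding comparison made above, the following sketch applies the Beer-Lambert law I = I0 exp(-mu x) to a 2 mm lead layer. The linear attenuation coefficients are rough illustrative values for photons near the Yb-169 and Ir-192 emission energies, not reference data.

```python
# Minimal sketch: photon transmission through a 2 mm lead layer (Beer-Lambert law).
import numpy as np

# Linear attenuation coefficients of lead in 1/cm -- rough illustrative values only.
mu_pb = {"Yb-169 (~93 keV)": 60.0, "Ir-192 (~380 keV)": 2.6}

thickness_cm = 0.2   # 2 mm of lead
for source, mu in mu_pb.items():
    transmission = np.exp(-mu * thickness_cm)
    print(f"{source}: transmission through 2 mm Pb = {transmission:.1e}")
```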
Procedia PDF Downloads 494
1470 Conflicts of Interest in the Private Sector and the Significance of the Public Interest Test
Authors: Opemiposi Adegbulu
Abstract:
Conflict of interest is an elusive, diverse and engaging subject, a cross-cutting problem of governance at all levels, ranging from local to global and from the public to the corporate or financial sectors. In all these areas, its mismanagement could lead to the distortion of decision-making processes, the corrosion of trust and the weakening of administration. According to Professor Peters, an expert in the area, conflict of interest, a problem at the root of many scandals, has "become a pervasive ethical concern in our professional, organisational, and political life". Conflicts of interest corrode trust, and as in the public sector, trust is essential for the market, consumers/clients, shareholders and other stakeholders in the private sector. However, conflicts of interest in the private sector are distinct and must be treated as such when regulatory efforts are made to address them. The research looks at identifying conflicts of interest in the private sector and differentiating them from those in the public sector. The public interest is submitted as a criterion which allows for such differentiation. This is significant because it would allow for the use of tailor-made or sector-specific approaches to addressing this complex issue. This is conducted through an extensive review of literature and theories on the definition of conflicts of interest. This study employs theoretical, doctrinal and comparative methods. The nature of conflicts of interest in the private sector will be explored through an analysis of the public sector, where the notion of conflicts of interest appears more clearly identified; reasons why they are of business ethics concern will be advanced; and then, looking once again at public sector solutions and other solutions, the study will identify ways of mitigating and managing conflicts in the private sector. An exploration of public sector conflicts of interest and solutions is carried out because the typologies of conflicts of interest in both sectors appear very similar at the core, and thus lessons can be learnt with regard to the management of these issues in the private sector. This research will then focus on some specific challenges to understanding and identifying conflicts of interest in the private sector: their origin, diverging theories, the psychological barrier to the definition, similarities with public sector conflicts of interest due to the notions of corrosion of trust, 'being in a particular kind of situation,' etc. The notion of public interest will be submitted as a key element at the heart of the distinction between public sector and private sector conflicts of interest. It will then be proposed that the appreciation of the notion of conflicts of interest differs by sector and from country to country, based on the public interest test, using the United Kingdom (UK), the United States of America (US), France and the Philippines as illustrations.
Keywords: conflicts of interest, corporate governance, global governance, public interest
Procedia PDF Downloads 403
1469 Regulating Transnational Corporations and Protecting Human Rights: Analyzing the Efficiency of International Legal Framework
Authors: Stellina Jolly
Abstract:
The period from July 18th to August 19th, 2013, has gone down in the history of India for the country's first environmental referendum. The Supreme Court had ruled that the Vedanta Group's bauxite mining project in the Niyamgiri Hills of Orissa would have to get clearance from the gram sabha, which would consider the cultural and religious rights of the tribals and forest dwellers living in the Rayagada and Kalahandi districts. In the Niyamgiri hills, people of small tribal hamlets were asked to voice their opinion on bauxite mining in their habitat. The ministry has reiterated its stand that mining cannot be allowed on the Niyamgiri hills because it will affect the rights of the Dongria Kondhs. The tribal people who occupy the Niyamgiri Hills in Eastern India accomplished their first success in 2010 in their struggle to protect and preserve their existence, culture and land against Vedanta, a London-based mining giant. In August 2010, the Government of India revoked permission for Vedanta Resources to mine bauxite from the hills in Orissa State where the Dongria Kondh live as forest dwellers. This came after various protests and reports, including an Amnesty report which highlighted that an alumina refinery in eastern India operated by a subsidiary of the mining company Vedanta was accused of causing air and water pollution that threatens the health of local people and their access to water. The abuse of human rights by corporations is not a new issue; it has occurred in Africa, Asia and other parts of the world. The paper focuses on the instances and extent of human rights violations, especially environmental violations, by corporations. Further, the paper examines corporations and sustainable development. The paper finally comes up with certain recommendations, including a call for a United Nations declaration on corporate environmental human rights liability.
Keywords: environment, corporate, human rights, sustainable development
Procedia PDF Downloads 477
1468 A History of Taiwan’s Secret Nuclear Program
Authors: Hsiao-ting Lin
Abstract:
This paper analyzes the history of Taiwan's secret program to develop nuclear weapons during the Cold War. In July 1971, US President Richard Nixon shocked the world when he announced that his national security adviser Henry Kissinger had made a secret trip to China and that he himself had accepted an invitation to travel to Beijing. This huge breakthrough in the US-PRC relationship was followed by Taipei's loss of political legitimacy and international credibility as a result of its UN debacle in the fall of that year. Confronted with the Nixon White House's opening to the PRC, leaders in Taiwan felt betrayed and abandoned, and they were obliged to take countermeasures for the sake of national interest and regime survival. Taipei's endeavor to create an effective nuclear program, including the possible development of nuclear weapons capabilities, fully demonstrates the government's resolution to pursue its own national policy, even if such a policy was guaranteed to undermine its relations with the United States. With hindsight, Taiwan's attempt to develop its own nuclear weapons did not succeed in sabotaging the warming of US-PRC relations. Worse, it was forced to come to a full stop when, in early 1988, the US government pressured Taipei to close the related facilities and programs on the island. However, Taiwan's abortive attempt to develop its nuclear capability did influence Washington's and Beijing's handling of their new relationship. There developed a recognition of a common American and PRC interest in avoiding a nuclearized Taiwan. From this perspective, Beijing's interests would best be served by allowing the island to remain under loose and relatively benign American influence. As for the top leaders in Taiwan, this policy choice demonstrated how they perceived the shifting dynamics of international politics in the 1960s and 1970s and how they struggled to break free and pursue their own independent national policy within the rigid framework of the US-Taiwan alliance during the Cold War.
Keywords: Taiwan, Richard Nixon, nuclear program, Chiang Kai-shek, Chiang Ching-kuo
Procedia PDF Downloads 134
1467 A Fast Optimizer for Large-scale Fulfillment Planning based on Genetic Algorithm
Authors: Choonoh Lee, Seyeon Park, Dongyun Kang, Jaehyeong Choi, Soojee Kim, Younggeun Kim
Abstract:
Market Kurly is the first South Korean online grocery retailer that guarantees same-day, overnight shipping. More than 1.6 million customers place an average of 4.7 million orders and add 3 to 14 products to a cart per month. The company has sold almost 30,000 kinds of various products in the past 6 months, including food items, cosmetics, kitchenware, toys for kids/pets, and even flowers. The company is operating and expanding multiple dry, cold, and frozen fulfillment centers in order to store and ship these products. Due to the scale and complexity of the fulfillment, pick-pack-ship processes are planned and operated in batches, and thus, the planning that decides the batch of the customers' orders is a critical factor in overall productivity. This paper introduces a metaheuristic optimization method that reduces the complexity of batch processing in a fulfillment center. The method is an iterative genetic algorithm with heuristic creation and evolution strategies; it aims to group similar orders into pick-pack-ship batches to minimize the total number of distinct products. With a well-designed approach to create initial genes, the method produces streamlined plans, up to 13.5% less complex than the actual plans carried out in the company's fulfillment centers in the previous months. Furthermore, our digital-twin simulations show that the optimized plans can reduce packing operation time by 3%, which is the most complex and time-consuming task in the process. The optimization method implements a multithreading design on the Spring framework to support the company's warehouse management systems in near real-time, finding a solution for 4,000 orders within 5 to 7 seconds on an AWS c5.2xlarge instance.
Keywords: fulfillment planning, genetic algorithm, online grocery retail, optimization
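As an illustration of the batching idea described in this abstract, the following sketch groups orders into a fixed number of pick-pack-ship batches so that the total number of distinct products is minimised, using a simple mutation-only genetic loop. The orders, batch count, and parameters are illustrative assumptions, not the company's production configuration.

```python
# Minimal sketch: genetic grouping of orders into batches to minimise distinct products.
import random

orders = [{"milk", "eggs"}, {"milk", "bread"}, {"toy", "shampoo"},
          {"shampoo", "eggs"}, {"bread", "butter"}, {"toy", "butter"}]
n_batches, pop_size, generations = 2, 30, 200   # assumed parameters

def cost(assignment):
    # Total number of distinct products summed over batches (lower = simpler picking).
    total = 0
    for b in range(n_batches):
        products = set()
        for order, batch in zip(orders, assignment):
            if batch == b:
                products |= order
        total += len(products)
    return total

def mutate(assignment):
    # Move one randomly chosen order to a random batch.
    child = list(assignment)
    child[random.randrange(len(orders))] = random.randrange(n_batches)
    return child

population = [[random.randrange(n_batches) for _ in orders] for _ in range(pop_size)]
for _ in range(generations):
    population.sort(key=cost)                      # elitist selection
    survivors = population[: pop_size // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = min(population, key=cost)
print("best assignment:", best, "distinct-product cost:", cost(best))
```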
Procedia PDF Downloads 84
1466 An Examination of the Moderating Effect of Team Identification on Attitude and Buying Intention of Jersey Sponsorship
Authors: Young Ik Suh, Taewook Chung, Glaucio Scremin, Tywan Martin
Abstract:
In May 2016, the Philadelphia 76ers announced that StubHub, the ticket resale company, would have advertising on the team's jerseys beginning in the 2017-18 season. The 76ers and the National Basketball Association (NBA) thus became the first team and league to embrace jersey sponsorship in the four major U.S. professional sports. Even though many professional teams and leagues in Europe, Asia, Africa, and South America have actively adopted jersey sponsorship, this phenomenon is relatively new in America. While jersey sponsorship provides economic gains for professional leagues and franchises, sport fans can have different points of view on the phenomenon. For instance, since many sport fans in the U.S. are not familiar with ads on jerseys, this movement could cause negative reactions such as a decrease in ticket and merchandise sales. Fans are also concerned that the small ads on jerseys will become bigger ads, as in the English Premier League (EPL). However, some sport fans seem not to mind too much about jersey sponsorship because the ads on jerseys will not affect their loyalty and fanship. Therefore, the assumption of this study was that sport fans' reactions to jersey sponsorship can differ, especially based on different levels of team identification and various sizes of ads on the jersey. Unlike general sponsorship in the sport industry, jersey sponsorship has received little attention regarding its potential impact on sport fans' attitudes and buying intentions. Thus, the current study sought to identify how various levels of team identification influence brand attitude and buying intention in terms of jersey sponsorship. In particular, this study examined the effect of team identification on brand attitude and buying intention when there are no ads, small ads, and large ads on the jersey. A 3 (ad size: large, small, and no ads) x 3 (team identification: high, moderate, low) between-subjects factorial design was conducted on attitude toward the brand and buying intention of jersey sponsorship. Ads on the Philadelphia 76ers jersey were used. The sample of this study was selected from message board users of different sports websites (i.e., forums.realgm.com and phillysportscentral.com). A total of 275 respondents participated in this study by responding to an online survey questionnaire. The results showed that there were significant differences between fans with high identification and fans with low identification. The findings of this study are expected to make theoretical and practical contributions by extending the research and literature on the relationship between team identification and brand strategy at different levels of team identification.
Keywords: brand attitude, buying intention, jersey sponsorship, team identification
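As an illustration of how the 3 x 3 between-subjects design described above could be analysed, the following sketch runs a two-way ANOVA (ad size x team identification) on simulated brand-attitude scores. The simulated responses and effect sizes are illustrative assumptions, not the survey data.

```python
# Minimal sketch: two-way ANOVA for a 3 x 3 between-subjects factorial design.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
ad_size = np.repeat(["none", "small", "large"], 90)
team_id = np.tile(np.repeat(["low", "moderate", "high"], 30), 3)
attitude = (4.0 + (team_id == "high") * 0.8 - (ad_size == "large") * 0.5
            + rng.normal(0, 1, 270))               # simulated ratings

df = pd.DataFrame({"ad_size": ad_size, "team_id": team_id, "attitude": attitude})
model = ols("attitude ~ C(ad_size) * C(team_id)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))             # main effects and interaction
```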
Procedia PDF Downloads 250
1465 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites
Authors: Sarra Haouala, Issam Doghri
Abstract:
In this work, a multiscale computational strategy is proposed for the analysis of structures which are described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long-time response while reducing the computational cost considerably. The proposed computational framework is a combination of the mean-field space homogenization based on the generalized incrementally affine formulation for VE-VP composites, and the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro- and macro-chronological time scales and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time scale) problems. The former is purely VE and solved once for each macro time step, whereas the latter problem is nonlinear and solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both the micro- and macro-chronological problems to determine the micro- and macro-chronological effective behavior of the composite material. The response of the matrix material is VE-VP with J2 flow theory assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic predictor and plastic corrector. The proposal is implemented for an extended Mori-Tanaka scheme, and verified against finite element simulations of representative volume elements, for a number of polymer composite materials subjected to large numbers of cycles.
Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization
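As an illustration of the predictor-corrector (return-mapping) step mentioned above, the following sketch implements a one-dimensional J2 model with linear isotropic hardening and small strains. The material constants and strain path are illustrative assumptions; the paper's VE-VP formulation is considerably more general.

```python
# Minimal sketch: elastic predictor / plastic corrector (radial return) for 1D J2 plasticity.
def return_mapping_1d(eps_new, eps_p_old, alpha_old, E=200e3, H=10e3, sigma_y=250.0):
    # Predictor step (purely elastic here, standing in for the viscoelastic predictor).
    sigma_trial = E * (eps_new - eps_p_old)
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha_old)
    if f_trial <= 0.0:
        return sigma_trial, eps_p_old, alpha_old        # elastic step, no correction
    # Corrector step: return the stress radially onto the updated yield surface.
    dgamma = f_trial / (E + H)
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign
    return sigma, eps_p_old + dgamma * sign, alpha_old + dgamma

# Drive the model through a few strain increments (MPa units, assumed constants).
eps_p, alpha = 0.0, 0.0
for eps in [0.0005, 0.0010, 0.0020, 0.0030]:
    sigma, eps_p, alpha = return_mapping_1d(eps, eps_p, alpha)
    print(f"strain={eps:.4f}  stress={sigma:7.1f}  plastic strain={eps_p:.5f}")
```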
Procedia PDF Downloads 371
1464 Intrusion Detection in SCADA Systems
Authors: Leandros A. Maglaras, Jianmin Jiang
Abstract:
The protection of the national infrastructures from cyberattacks is one of the main issues for national and international security. The funded European Framework 7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected and vulnerable Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis and reaction tools that provide intelligence to field equipment. This allows the field equipment to perform local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The One-Class Support Vector Machine (OCSVM) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly expected for the detection process. This feature makes it ideal for processing SCADA environment data and automates SCADA performance monitoring. The OCSVM module developed is trained on network traces offline and detects anomalies in the system in real time. The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system through the exchange of IDMEF messages that carry information about the source of the incident, the time, and a classification of the alarm.
Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection
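As an illustration of the one-class SVM approach described above, the following sketch trains a model offline on feature vectors extracted from normal traffic windows and then flags deviating windows. The features and data are illustrative assumptions, not the CockpitCI SCADA traces.

```python
# Minimal sketch: OCSVM anomaly detection on simulated SCADA traffic features.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Toy features per time window: [packets/s, mean packet size, distinct function codes]
normal = rng.normal(loc=[50, 200, 3], scale=[5, 20, 0.5], size=(500, 3))
attack = rng.normal(loc=[300, 60, 12], scale=[30, 10, 2], size=(20, 3))

scaler = StandardScaler().fit(normal)
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(normal))

samples = np.vstack([normal[:5], attack[:5]])
print(model.predict(scaler.transform(samples)))   # +1 = normal, -1 = anomaly to report via IDMEF
```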
Procedia PDF Downloads 556
1463 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges
Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch
Abstract:
Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data enable insights into the molecular differences in tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints that specifically measure a mechanism is relatively straightforward, the interpretation of big data is more complex and benefits from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest does not have a gold standard but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions.
Keywords: big data interpretation, datathon, systems toxicology, verification
Procedia PDF Downloads 279
1462 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination
Authors: N. Santatriniaina, J. Deseure, T. Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana
Abstract:
Nowadays, with the increase in wafer size and the decrease in the critical dimensions of integrated circuit manufacturing in modern high-tech, the microelectronics industry needs to pay maximum attention to the challenge of contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne cross-contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and qualify the AMCs cross-contamination processes. This work investigates the numerical tools required to study the AMCs cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. An analytical solution of the one-dimensional problem was developed, and the calibration of the physical constants was performed. The least-squares distance between the model (the analytical 1D solution) and the experimental data is minimized. The behavior of the AMCs in transient analysis was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also translated into boundary conditions using the Dirichlet-to-Neumann switch condition and the interface condition. The methodology is applied, first using the optimization methods with the analytical solution to define the physical constants, and second using the finite element method including the adsorption kinetics and the Dirichlet-to-Neumann switch condition.
Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite elements methods, Fick's law, optimization
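As an illustration of the calibration step described above, the following sketch identifies a diffusion coefficient by least-squares fitting of a 1D analytical Fickian solution to measured concentrations. The geometry, the chosen analytical form, and the synthetic data are illustrative assumptions, not the FOUP measurements.

```python
# Minimal sketch: least-squares identification of D from a 1D analytical diffusion solution.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

depth = 0.05         # cm, observation depth inside the polymer wall (assumed)
c_surface = 1.0      # normalised surface concentration (assumed)

def model(t, D):
    # Semi-infinite medium, constant surface concentration: C = Cs * erfc(x / (2 sqrt(D t))).
    return c_surface * erfc(depth / (2.0 * np.sqrt(D * t)))

t_meas = np.array([600.0, 1800.0, 3600.0, 7200.0, 14400.0])            # s
c_meas = model(t_meas, 2e-7) + np.random.normal(0, 0.01, t_meas.size)  # synthetic data, true D = 2e-7

(D_fit,), _ = curve_fit(model, t_meas, c_meas, p0=[1e-7])
print(f"identified D = {D_fit:.2e} cm^2/s")
```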
Procedia PDF Downloads 512
1461 Value Generation of Construction and Demolition Waste Originated in the Building Rehabilitation to Improve Energy Efficiency; From Waste to Resources
Authors: Mercedes Del Rio Merino, Jaime Santacruz Astorqui, Paola Villoria Saez, Carmen Viñas Arrebola
Abstract:
The lack of treatment of construction and demolition waste (CDW) is a problem that must be solved immediately. It is estimated that, worldwide, not using CDW generates an increase in the use of new materials of close to 20% of the total value of the materials used. The problem is even greater when these wastes are considered hazardous, because their final deposition may also generate significant contamination. Therefore, the possibility of including CDW in the manufacturing of building materials represents an interesting alternative to ensure their use and reduce their possible risk. In this context, in recent years many studies have been carried out to analyze the viability of using CDW as a substitute for traditional raw materials of high environmental impact. Even so, much remains to be done, because these works generally characterize materials but not the specific applications that would give the agents of construction the guarantees required by the projects. Therefore, the involvement of all the actors included in the life cycle of these new construction materials is necessary, as is promoting their use through, for example, the definition of standards, tax advantages or market intervention. This paper presents the main findings reached in the "Waste to Resources (W2R)" project since it began in October 2014. The main goal of the project is to develop new materials, elements and construction systems, manufactured from CDW, to be used in improving the energy efficiency of buildings. Other objectives of the project are: to quantify the CDW generated in energy rehabilitation works, specifically waste from the building envelope; and to study the traceability of the CDW generated and promote CDW reuse and recycling in order to close the life cycle of buildings, generating zero waste and reducing the ecological footprint of the construction sector. This paper determines the most important aspects to consider during the design of new constructive solutions that improve the energy efficiency of buildings, and which materials made with CDW would be the most suitable for that purpose. Also, a survey to select best practices for achieving "close to zero waste" in refurbishment was carried out. Finally, several pilot rehabilitation works conforming to the parameters analyzed in the project were selected, in order to apply the results and thus compare theory with reality. Acknowledgements: This research was supported by the Spanish State Secretariat for Research, Development and Innovation of the Ministry of Economy and Competitiveness under the "Waste 2 Resources" Project (BIA2013-43061-R).
Keywords: building waste, construction and demolition waste, recycling, resources
Procedia PDF Downloads 252
1460 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context
Authors: Rit M., Girard R., Villot J., Thorel M.
Abstract:
In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a definition of a strategy to simplify decision trees from theoretical combinations, (2) input to decision makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermal deficiencies (Energy Performance Certificates F&G)). This differs from traditional approaches that focus only on a few buildings or archetypes. This model can also be used to analyze the evolution of a building stock as a whole, as it can take into account both the construction of new buildings and their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and achievable climate targets or the financial support currently available to households and the remaining costs. In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology
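As an illustration of the kind of optimisation described in this abstract, the following sketch states a small mixed-integer program with PuLP: choose at most one renovation package per dwelling so that an energy-saving target is reached, a yearly renovation capacity is respected, and cost is minimised. The buildings, packages, and numbers are illustrative assumptions, not the French building-stock data.

```python
# Minimal sketch: MILP for selecting renovation packages under capacity and savings constraints.
import pulp

buildings = ["B1", "B2", "B3"]
packages = {"light": (8000, 0.15), "medium": (20000, 0.35), "deep": (45000, 0.60)}  # (cost, saving fraction)
demand = {"B1": 25000, "B2": 18000, "B3": 40000}   # kWh/yr before renovation (assumed)
target_saving = 30000                               # kWh/yr across the stock (assumed)
max_renovations = 2                                 # yearly capacity constraint (assumed)

prob = pulp.LpProblem("renovation_planning", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(b, p) for b in buildings for p in packages], cat="Binary")

prob += pulp.lpSum(x[b, p] * packages[p][0] for b in buildings for p in packages)  # total cost
for b in buildings:                                 # at most one package per building
    prob += pulp.lpSum(x[b, p] for p in packages) <= 1
prob += pulp.lpSum(x[b, p] for b in buildings for p in packages) <= max_renovations
prob += pulp.lpSum(x[b, p] * packages[p][1] * demand[b]
                   for b in buildings for p in packages) >= target_saving

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(b, p) for b in buildings for p in packages if x[b, p].value() == 1])
```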
Procedia PDF Downloads 69
1459 Evidence-Based Practices in Education: A General Review of the Literature on Elementary Classroom Setting
Authors: Carolina S. Correia, Thalita V. Thomé, Andersen Boniolo, Dhayana I. Veiga
Abstract:
Evidence-based practice (EBP) in education is a set of principles and practices used to inform educational policy; it involves the integration of professional expertise in education with the best empirical evidence when making decisions about how to deliver instruction. The purpose of this presentation is to describe and characterize studies about EBP in education in the elementary classroom setting. The data presented here are part of an ongoing systematic review. Articles were searched and selected from four academic databases: ProQuest, Scielo, ScienceDirect and Capes. The search terms were evidence-based practices or program effectiveness, and education or teaching or teaching practices or teaching methods. Articles were included according to the following criteria: the studies were explicitly described as evidence-based or discussed the most effective practices in education, and they discussed teaching practices in the classroom context at the elementary school level. Document excerpts were extracted and recorded in Excel, organized by reference, descriptors, abstract, purpose, setting, participants, type of teaching practice, study design and main results. The total number of articles selected was 1,185: 569 articles from the ProQuest Research Library, 216 from CAPES, 251 from ScienceDirect and 149 from the Scielo Library. The potentially relevant references numbered 178, from which duplicates were removed. The final number of articles analyzed was 140. Of the 140 articles, 47 are theoretical studies and 93 are empirical articles. The following research design methods were identified: longitudinal intervention study, cluster-randomized trial, meta-analysis and pretest-posttest studies. Of the 140 articles, 103 studies were about regular school teaching and 37 were on special education teaching practices. Several studies used as teaching methods: active learning, content acquisition podcasts (CAP), precision teaching (PT), mediated reading practice, speech therapist programs and peer-assisted learning strategies (PALS). The countries of origin of the studies were the United States of America, United Kingdom, Panama, Sweden, Scotland, South Korea, Argentina, Chile, New Zealand and Brunei. The present study is an ongoing project, so some representative findings will be discussed, providing further insight into the best teaching practices in the elementary classroom setting.
Keywords: best practices, children, evidence-based education, elementary school, teaching methods
Procedia PDF Downloads 335
1458 On the Semantics and Pragmatics of 'Be Able To': Modality and Actualisation
Authors: Benoît Leclercq, Ilse Depraetere
Abstract:
The goal of this presentation is to shed new light on the semantics and pragmatics of be able to. It presents the results of a corpus analysis based on data from the BNC (British National Corpus) and discusses these results in light of a specific stance on the semantics-pragmatics interface, taking into account recent developments. Be able to is often discussed in relation to can and could, all of which can be used to express ability. Such an onomasiological approach often results in the identification of usage constraints for each expression. In the case of be able to, it is the formal properties of the modal expression (unlike can and could, be able to has non-finite forms) that are in the foreground, and the modal expression is described as the verb that conveys future ability. Be able to is also argued to express actualised ability in the past (I was able to/could open the door). This presentation aims to provide a more accurate pragmatic-semantic profile of be able to, based on extensive data analysis and embedded in a very explicit view of the semantics-pragmatics interface. A random sample of 3000 examples (1000 for each modal verb) extracted from the BNC was analysed to address the following issues. First, the challenge is to identify the exact semantic range of be able to. The results show that, contrary to general assumption, be able to does not only express ability but shares most of the root meanings usually associated with the possibility modals can and could. The data reveal that what is called opportunity is, in fact, the most frequent meaning of be able to. Second, attention is given to the notion of actualisation. It is commonly argued that be able to is the preferred form when the residue actualises: (1) The only reason he was able to do that was because of the restriction (BNC, spoken); (2) It is only through my imaginative shuffling of the aces that we are able to stay ahead of the pack (BNC, written). Although this notion has been studied in detail within formal semantic approaches, empirical data is crucially lacking, and it is unclear whether actualisation constitutes a conventional (and distinguishing) property of be able to. The empirical analysis provides solid evidence that actualisation is indeed a conventional feature of the modal. Furthermore, the dataset reveals that be able to expresses actualised 'opportunities' and not actualised 'abilities'. In the final part of this paper, attention is given to the theoretical implications of the empirical findings, and in particular to the following paradox: how can the same expression encode both modal meaning (non-factual) and actualisation (factual)? It is argued that this largely depends on one's conception of the semantics-pragmatics interface, and that this need not be an issue when actualisation (unlike modality) is analysed as a generalised conversational implicature and is thus considered part of the conventional pragmatic layer of be able to.
Keywords: actualisation, modality, pragmatics, semantics
Procedia PDF Downloads 133
1457 A Comparative Study of Regional Climate Models and Global Coupled Models over Uttarakhand
Authors: Sudip Kumar Kundu, Charu Singh
Abstract:
As a great physiographic divide, the Himalayas affect a large system of water and air circulation which helps to determine the climatic conditions in the Indian subcontinent to the south and the mid-Asian highlands to the north. They create obstacles by blocking chill continental air from the north from entering India in winter, and also force the rain-bearing southwesterly monsoon to give up maximum precipitation in that area in the monsoon season. Nowadays, extreme weather conditions such as heavy precipitation, cloudbursts, flash floods, landslides and extreme avalanches are regular occurrences in the North Western Himalayan (NWH) region. The present study has been planned to investigate the suitable model(s) for describing the rainfall pattern over that region. For this investigation, selected models from the Coordinated Regional Climate Downscaling Experiment (CORDEX) and the Coupled Model Intercomparison Project Phase 5 (CMIP5) have been utilized in a consistent framework for the period 1976 to 2000 (historical). The ability of these driving models from the CORDEX domain and CMIP5 has been examined with respect to their capability to capture the spatial distribution as well as the time series of rainfall over NWH in the rainy season, and compared with the ground-based Indian Meteorological Department (IMD) gridded rainfall data set. It is noted from the analysis that models like MIROC5 and MPI-ESM-LR from both CORDEX and CMIP5 provide the best spatial distribution of rainfall over the NWH region. However, the driving models from CORDEX underestimate the daily rainfall amount as compared to the CMIP5 driving models, as they are unable to capture daily rainfall properly when plotted as time series (TS) individually for the states of Uttarakhand (UK) and Himachal Pradesh (HP). So finally, it can be said that the driving models from CMIP5 are better than the CORDEX domain models for investigating the rainfall pattern over the NWH region.
Keywords: global warming, rainfall, CMIP5, CORDEX, NWH
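As an illustration of the model-versus-observation comparison described above, the following sketch computes the bias, RMSE, and spatial pattern correlation between a model's seasonal mean rainfall and gridded observations on a common grid. The arrays are random placeholders standing in for the regridded IMD and model data.

```python
# Minimal sketch: comparing modelled monsoon-season rainfall with gridded observations.
import numpy as np

rng = np.random.default_rng(2)
obs = rng.gamma(shape=2.0, scale=4.0, size=(20, 20))       # observed mean rainfall (mm/day), placeholder
model = obs * 0.8 + rng.normal(0, 1.0, size=obs.shape)     # e.g. MIROC5 regridded to the same grid, placeholder

bias = np.mean(model - obs)
rmse = np.sqrt(np.mean((model - obs) ** 2))
pattern_corr = np.corrcoef(model.ravel(), obs.ravel())[0, 1]
print(f"bias={bias:.2f} mm/day  rmse={rmse:.2f} mm/day  pattern corr={pattern_corr:.2f}")
```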
Procedia PDF Downloads 169
1456 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation
Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton
Abstract:
Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e. excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide improved new services with additional functions of assisting drivers. This paper focuses on one of the applications of vehicle-to-infrastructure communication, namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built in VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the interface of the External Driver Model. A control algorithm is developed for recommending an optimal speed that is continuously updated at every time step for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping or to minimise stopping times at a red phase. This study is performed with all connected vehicles at a 100% penetration rate. Conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is implemented under traffic volume conditions of 150 and 200 vehicles per hour to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA. According to this study, vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under traffic densities of 150 vehicles per hour per lane and 200 vehicles per hour per lane, respectively.
Keywords: connected vehicles, GLOSA, intelligent transport systems, vehicle-to-infrastructure communication
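As an illustration of a GLOSA-style advisory rule like the one described above, the following sketch recommends a speed that lets a vehicle reach the stop line during a green window received via V2I, within speed limits. The timing values and limits are illustrative assumptions, not the VISSIM controller parameters.

```python
# Minimal sketch: advisory speed so that arrival at the stop line falls inside a green window.
def advisory_speed(distance_m, green_start_s, green_end_s, v_min=5.0, v_max=13.9):
    """Return a speed (m/s) whose travel time lands in [green_start_s, green_end_s], or None."""
    # The fastest feasible speed must not arrive before the green starts...
    v_upper = v_max if green_start_s <= 0 else min(v_max, distance_m / green_start_s)
    # ...and the slowest feasible speed must still arrive before the green ends.
    v_lower = max(v_min, distance_m / green_end_s)
    if v_lower <= v_upper:
        return v_upper          # highest feasible speed: pass without stopping
    return None                 # no feasible speed: the vehicle will have to stop

# Example: 200 m from the stop line, next green from 10 s to 40 s from now.
print(advisory_speed(200.0, 10.0, 40.0))   # about 13.9 m/s, i.e. arrive ~14 s from now
```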
Procedia PDF Downloads 174
1455 The TarMed Reform of 2014: A Causal Analysis of the Effects on the Behavior of Swiss Physicians
Authors: Camila Plaza, Stefan Felder
Abstract:
In October 2014, the TARMED reform was implemented in Switzerland. In an effort to even out the financial standing of general practitioners (including pediatricians) relative to that of specialists in the outpatient sector, the reform tackled two aspects: on the one hand, GPs would be able to bill an additional 9 CHF per patient, once per consult per day; this is referred to as the surcharge position. As a second measure, it reduced the fees for certain technical services targeted at specialists (e.g., imaging, surgical technical procedures, etc.). Given the fee-for-service reimbursement system in Switzerland, we predict that physicians reacted to the economic incentives of the reform by increasing the number of consults per patient and decreasing the average amount of time per consult. Within this framework, our treatment group is formed by GPs and our control group by those specialists who were not affected by the reform. Using monthly insurance claims panel data aggregated at the physician practice level (provided by SASIS AG) for the period of January 2013 to December 2015, we run difference-in-differences panel data models with physician and time fixed effects in order to test for the causal effects of the reform. We account for seasonality and control for physician characteristics such as age, gender, specialty, and physician experience. Furthermore, we run the models on subgroups of physicians within our sample so as to account for heterogeneity and treatment intensities. Preliminary results support our hypothesis. We find evidence of an increase in consults per patient and a decrease in time per consult. Robustness checks do not significantly alter the results for our outcome variable of consults per patient. However, we do find a smaller effect of the reform on time per consult. Thus, the results of this paper could provide policymakers with a better understanding of physician behavior and their sensitivity to the financial incentives of reforms (both past and future) under the current reimbursement system.
Keywords: difference in differences, financial incentives, health reform, physician behavior
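As an illustration of the difference-in-differences specification described above, the following sketch regresses a consults-per-patient outcome on a GP x post-reform interaction with physician and month fixed effects, using simulated claims-style panel data. All numbers are illustrative assumptions, not the SASIS data.

```python
# Minimal sketch: difference-in-differences with physician and month fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for pid in range(60):                            # 30 GPs (treated), 30 specialists (control)
    is_gp = pid < 30
    for m in range(36):                          # Jan 2013 - Dec 2015
        post = m >= 21                           # reform implemented in October 2014
        y = 1.2 + 0.1 * is_gp + 0.05 * (is_gp and post) + rng.normal(0, 0.05)
        rows.append({"physician": pid, "month": m, "gp": int(is_gp),
                     "post": int(post), "consults_per_patient": y})
df = pd.DataFrame(rows)

did = smf.ols("consults_per_patient ~ gp:post + C(physician) + C(month)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["physician"]})
print(did.params["gp:post"])                     # DiD estimate of the reform effect on GPs
```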
Procedia PDF Downloads 129
1454 Legal Aspects in Character Merchandising with Reference to Right to Image of Celebrities
Authors: W. R. M. Shehani Shanika
Abstract:
Selling goods and services using the images, names and personalities of celebrities has become a common marketing strategy in modern physical and online markets. Globalization and the open economy have given businesses numerous reasons to pursue higher profits, and global as well as domestic markets in various countries therefore vigorously exploit the images of famous sports stars, film stars, singers and cartoon characters to increase demand for the goods and services associated with them. It is evident that these trade strategies have become a threat to famous personalities, both financially and personally. The right to image is a basic human right that celebrities hold to protect themselves from various forms of commercial exploitation. In this respect, this paper assesses whether the law relating to character merchandising satisfactorily protects the right to image of celebrities. Celebrities can decide how much they receive for each representation of themselves to the general public; in other words, they have the exclusive right to set a monetary value on their image. Most countries, however, rely on the law of unfair competition to regulate the matters that arise, and legal norms in unfair competition are not enough to protect the image of celebrities. Celebrities must be able to prevent fraudulent traders from using their images for commercial purposes without authorization and becoming unjustly enriched, since those images have economic value, and celebrities have the right to use their image for any commercial purpose and earn profits from it. It is therefore high time to recognize the right to image as a new dimension to be protected within the legal framework of character merchandising. Unfortunately, to the author's best knowledge, there is no uniform international standard that recognizes the right to image of celebrities in the context of character merchandising. The paper identifies this as a controversial legal barrier faced by celebrities in the rapidly evolving marketplace. Finally, this library-based research concludes with proposals to ensure the right to image more broadly in the legal context of character merchandising. Keywords: brand endorsement, celebrity, character merchandising, intellectual property rights, right to image, unfair competition
Procedia PDF Downloads 139
1453 The Effects of the Parent Training Program for Obesity Reduction on Child Waist Circumference and Health Behaviors of Pre-School Children at the Samut-Songkhram Kindergarten School, Samut-Songkhram Province, Thailand
Authors: Muntanavadee Maytapattana
Abstract:
This research studies the effects of the Parent Training Program for Obesity Reduction (PTPOR) on child waist circumference and the health behaviors of pre-school children at the Samut-Songkhram kindergarten school, Samut-Songkhram province, Thailand. The objective is to evaluate the effectiveness of the PTPOR on the children's waist circumference and health behaviors. The conceptual framework is developed on the basis of the Ecological Systems Theory (EST): not only do individual factors such as child characteristics and child risk factors contribute to the child's weight status, but so do parenting style and family characteristics, as well as community and demographic factors. This is a quasi-experimental study. Participants were pre-school overweight and obese children and their parents; forty-one parent-child dyads were recruited into the program. Parents participated in two sessions, an educational session and a group discussion session. Paired-samples t-tests were used to determine the difference between the mean scores of the outcome variables of the children and parents at baseline and after the program. The results show a significant difference in mean child waist circumference between baseline and program completion at the 0.01 level (p = 0.001), with waist circumference decreasing after the program. There was no significant difference in the mean child exercise behavior score at the 0.05 level, although exercise behavior increased after the program. Meanwhile, there was a significant difference in the mean child dietary behavior score at the 0.01 level (p = 0.001), with dietary behavior improving after the program. Keywords: PTPOR, child waist circumference, child health behaviors, pre-school children
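The baseline-versus-post-program comparison described above is a standard paired-samples t-test; a minimal sketch follows. The file and column names are illustrative assumptions, not the study's data.

```python
# Hedged paired-samples t-test sketch for baseline vs. post-program waist circumference.
import pandas as pd
from scipy import stats

children = pd.read_csv("ptpor_outcomes.csv")   # hypothetical file: one row per child

t_stat, p_value = stats.ttest_rel(
    children["waist_cm_baseline"],   # waist circumference before the program
    children["waist_cm_post"],       # waist circumference after the program
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The same call would be repeated for the exercise and dietary behavior scores, each compared against its own baseline.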
Procedia PDF Downloads 573
1452 Virtual Reality for Chemical Engineering Unit Operations
Authors: Swee Kun Yap, Sachin Jangam, Suraj Vasudevan
Abstract:
Experiential learning is widely regarded as a highly effective way to enhance learning, and virtual reality (VR) is a helpful tool for providing a safe, memorable, and interactive learning environment. A class of 49 fluid mechanics students participated in starting up a pump, one of the most widely used pieces of equipment in the chemical industry, in VR. They experienced the process in VR to familiarize themselves with the safety training and the standard operating procedure (SOP) in guided mode, and subsequently observed their peers (in groups of 4 to 5) complete the same training. The training first takes each user through personal protective equipment (PPE) selection before guiding the user through the series of steps for pump startup. One of the most common pieces of feedback from industry concerns the weakness of our graduates in pump design and operation. Traditional fluid mechanics is a highly theoretical module loaded with engineering equations, providing limited opportunity for visualization and operation. With the VR pump, students can now learn to start up, shut down, and troubleshoot a centrifugal pump and observe its intricacies in a safe and controlled environment, thereby bridging the gap between theory and practical application. Following the guided-mode operation, students individually completed the VR assessment for pump startup on the same day, which required them to complete the same series of steps without any cues given in VR, to test their recall. While most students missed a few minor steps, such as checking the lubrication oil and closing minor drain valves before pump priming, all students scored full marks in the PPE selection, and over 80% were able to complete all the critical steps required to start up a pump safely. The students' recall was tested again by means of an online quiz 3 weeks later, and over 80% were again able to complete the critical steps in the correct order. In the survey conducted, students reported that the VR experience had been enjoyable and enriching, and 79.5% of the students voted to include VR as a positive supplement to traditional teaching methods. One notable piece of feedback was that it is easier to notice and learn from mistakes as an observer than as a VR participant; cycling between being a VR participant and an observer therefore helped tremendously with knowledge retention. This reinforces the positive impact VR has on learning. Keywords: experiential learning, learning by doing, pump, unit operations, virtual reality
Procedia PDF Downloads 140
1451 Volunteers’ Preparedness for Natural Disasters and EVANDE Project
Authors: A. Kourou, A. Ioakeimidou, E. Bafa, C. Fassoulas, M. Panoutsopoulou
Abstract:
The role of volunteers in disaster management is of decisive importance, and the need for their involvement is well recognized, both for prevention measures and for disaster response. During major catastrophes, when professional personnel are overstretched, the role of volunteers is crucial. In Greece, experience has shown that various groups operating within the civil protection mechanism, such as local administration staff and volunteers, often lack the necessary knowledge and information on best practices for acting against natural disasters. One of the major problems is the lack of volunteer education and training. In this framework, this paper presents the results of a survey aimed at identifying the level of education and preparedness of civil protection volunteers in Greece. The implementation of earthquake protection measures at the individual, family and workplace levels is also explored. More specifically, the survey questionnaire investigates pre-earthquake protection actions, appropriate attitudes and behaviors during an earthquake, and the existence of contingency plans in the workplace. The questionnaires were administered to citizens from different regions of the country who attended the civil protection training program “Protect Myself and Others”. A closed-form questionnaire was developed for the survey, containing questions on: a) knowledge of self-protective actions; b) the existence of emergency planning at home; c) the existence of emergency planning at the workplace (hazard mitigation actions, evacuation plan, and performance of drills); and d) respondents’ perception of their own level of earthquake preparedness. The results revealed a serious lack of knowledge and preparedness among respondents. Taking this gap into consideration, and in order to raise awareness and improve the preparedness and effective response of volunteers acting in civil protection, the EVANDE project was submitted to and approved by the European Commission (EC). The aim of the project is to educate and train civil protection volunteers on the most serious natural disasters, such as forest fires, floods, and earthquakes, and thus increase their performance. Keywords: civil protection, earthquake, preparedness, volunteers
Procedia PDF Downloads 244
1450 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept
Authors: Ahmed El Naggar, Homyan Saleh
Abstract:
Dams are crucial for flood control, water supply, and the generation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, that provides the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety concerns. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a tiered spillway was undertaken at laboratory scale (fixed camera installation) and at prototype scale (drone footage). The drone videos were obtained through citizen science. The analyses permitted the remote measurement of the free-surface aeration inception point, air-water surface velocities and their fluctuations, and the residual energy at the downstream end of the chute. The prototype observations offered full-scale proof of concept, while the laboratory results were efficiently validated against intrusive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways, highlights how citizen science data may enable researchers to better understand real-world air-water flow dynamics, and offers a framework for collecting long-missing prototype data. Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy
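As an illustration of the kind of image-based velocimetry the abstract refers to, the sketch below estimates air-water surface velocities from two consecutive video frames using dense optical flow. The file name, frame rate, pixel scale and the choice of the Farneback method are illustrative assumptions; the study's actual processing chain is not specified in the abstract.

```python
# Hedged optical-flow sketch for surface velocity estimation from drone footage
# (assumed file, frame rate and georeferenced pixel scale).
import cv2
import numpy as np

cap = cv2.VideoCapture("spillway_drone.mp4")     # hypothetical drone video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

fps = 30.0            # assumed frame rate (frames per second)
m_per_pixel = 0.01    # assumed pixel scale from georeferencing

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Dense Farneback optical flow between two consecutive frames (pixels per frame).
flow = cv2.calcOpticalFlowFarneback(
    prev_gray, gray, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

# Convert the streamwise flow component into a surface velocity field in m/s.
velocity = flow[..., 0] * fps * m_per_pixel
print("mean streamwise surface velocity:", np.nanmean(velocity), "m/s")
cap.release()
```

In practice the flow field would be averaged over many frame pairs, and its fluctuations used to characterise the turbulence quantities mentioned in the keywords.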
Procedia PDF Downloads 94
1449 Integrated Risk Assessment of Storm Surge and Climate Change for the Coastal Infrastructure
Authors: Sergey V. Vinogradov
Abstract:
Coastal communities presently face increased vulnerability due to rising sea levels and shifts in global climate patterns, a trend expected to escalate in the long run. To address the needs of government entities, the public sector, and private enterprises, there is an urgent need to thoroughly investigate, assess, and manage the present and projected risks associated with coastal flooding, including storm surge, sea level rise, and nuisance flooding. In response to these challenges, a practical approach to evaluating storm surge inundation risks has been developed that offers an integrated assessment of potential flood risk in targeted coastal areas. The physical modeling framework involves simulating synthetic storms and utilizing hydrodynamic models that align with projected future climate and ocean conditions. Both publicly available and site-specific data form the basis for a risk assessment methodology designed to translate inundation model outputs into statistically meaningful projections of expected financial and operational consequences. This integrated approach produces measurable indicators of flood impacts, encompassing economic and other dimensions. By establishing connections between the frequency of modeled flood events and their consequences across a spectrum of potential future climate conditions, the methodology generates probabilistic risk assessments. These assessments not only account for future uncertainty but also yield comparable metrics, such as the expected annual loss associated with each inundation scenario. These metrics furnish stakeholders with a dependable dataset to guide strategic planning and inform investments in mitigation. Importantly, the model's adaptability ensures its relevance across diverse coastal environments, even where site-specific data for analysis are limited. Keywords: climate, coastal, surge, risk
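The step of turning modeled flood events into a comparable risk metric can be illustrated with a minimal expected annual loss (EAL) calculation, shown below. The event frequencies and loss figures are illustrative assumptions, not outputs of the methodology described above.

```python
# Hedged expected-annual-loss sketch: frequency-weighted sum of per-event losses
# (all scenario values are assumed for illustration).
from dataclasses import dataclass

@dataclass
class FloodEvent:
    annual_frequency: float   # expected occurrences per year (1 / return period)
    loss: float               # modeled financial consequence of one occurrence (USD)

# Hypothetical inundation scenarios spanning a range of return periods.
events = [
    FloodEvent(annual_frequency=1 / 10, loss=2.0e6),    # 10-year event
    FloodEvent(annual_frequency=1 / 50, loss=1.5e7),    # 50-year event
    FloodEvent(annual_frequency=1 / 100, loss=4.0e7),   # 100-year event
]

eal = sum(e.annual_frequency * e.loss for e in events)
print(f"expected annual loss: ${eal:,.0f}")
```

Repeating the same aggregation under different projected climate and sea-level conditions yields the probabilistic, comparable metrics the abstract refers to.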
Procedia PDF Downloads 58
1448 Communication Anxiety in Nigerian Students Studying English as a Foreign Language: Evidence from Colleges of Education Sector
Authors: Yasàlu Haruna
Abstract:
In every transaction, the use of language is central, regardless of form or complexity, if any meaning is to be drawn from it. Students, as a key population group in Nigeria's learning landscape, occupy a central position, with the propensity to excel or otherwise in communication, both in the learning process and in social interaction. Anxiety or lack of confidence in speaking a second language is not peculiar to societies where the second language is not an official language; to a degree, the linguistic gap created by the adoption and adaptation syndrome manifests as anxiety or a lack of confidence, especially where mastery of the spoken language becomes a major challenge. This paper explores how linguistic complexity and cultural barriers combine to widen the adaptation and adoption gap. In much the same way, typical difficulties with pronunciation, intonation and accent are vital variables that explain the root cause of anxiety. Using a combination of primary and secondary sources of data drawn from questionnaires, key informant interviews and other available data, the paper concludes that the failure to integrate the possibility of anxiety into the education delivery framework has left much to be desired in cultivating second-language speakers among students of Nigerian Colleges of Education. In addition, cultural barriers and the absence of integration interfaces in the course of learning, within and outside the classroom, further widen the gap. The mastery of the second language by colleagues, classmates and conversation partners also remains a contributory factor, largely owing to the quality of the preparatory school system in many parts of the country. The paper recommends that national policies and frameworks be reviewed to provide integration windows in which cultural and conversation-partner deficiencies can be remedied through educational events such as debates, quizzes and symposia, and that commercial advertisements be tailored towards promoting the adoption of the second language in commerce and major cultural activities. Keywords: cultural barriers, integration, college of education and adaptation, second language
Procedia PDF Downloads 95