Search results for: hydrologic modeling system
8574 Accelerating Side Channel Analysis with Distributed and Parallelized Processing
Authors: Kyunghee Oh, Dooho Choi
Abstract:
Even when a cryptographic algorithm has no theoretical weakness, side channel analysis can recover secret data from the physical implementation of a cryptosystem. The analysis is based on extra information such as timing information, power consumption, electromagnetic leaks or even sound, which can be exploited to break the system. Differential Power Analysis (DPA), which computes the statistical correlations between secret key hypotheses and power consumption, is one of the most popular such analyses. It usually involves processing huge amounts of data and takes a long time; for some devices with countermeasures, it may take several weeks. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems, based on distributed computing and parallelized processing.
Keywords: DPA, distributed computing, parallelized processing, side channel analysis
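The abstract does not include the authors' implementation. As an illustrative sketch only (function and variable names are hypothetical, not from the paper), the DPA correlation step parallelizes naturally along the key-hypothesis axis, since each hypothesis is correlated against the traces independently:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def dpa_correlate(traces, models):
    """Pearson correlation of each key hypothesis' power model with every trace sample.

    traces: (n_traces, n_samples) measured power traces
    models: (n_keys, n_traces) predicted power per trace for each key hypothesis
    returns: (n_keys, n_samples) correlations; the correct key shows a clear peak.
    """
    t = traces - traces.mean(axis=0)
    m = models - models.mean(axis=1, keepdims=True)
    num = m @ t                                   # (n_keys, n_samples) cross products
    den = np.outer(np.sqrt((m * m).sum(axis=1)),  # per-key norm ...
                   np.sqrt((t * t).sum(axis=0)))  # ... times per-sample norm
    return num / den

def dpa_correlate_parallel(traces, models, workers=4):
    """Split the key hypotheses across worker processes and stack the results."""
    chunks = np.array_split(models, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(dpa_correlate, [traces] * workers, chunks)
    return np.vstack(list(parts))
```

Because each chunk of hypotheses is independent, the same split works across machines in a distributed setting with essentially no coordination, which is the intuition behind the speed-ups the abstract reports.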
Procedia PDF Downloads 428
8573 Autonomous Quantum Competitive Learning
Authors: Mohammed A. Zidan, Alaa Sagheer, Nasser Metwally
Abstract:
Real-time learning is an important goal that much artificial intelligence research tries to achieve. Many problems and applications require low-cost learning, such as teaching a robot to classify and recognize patterns in real time, and real-time recall. In this contribution, we suggest a model of quantum competitive learning based on a series of quantum gates and an additional operator. The proposed model can recognize incomplete patterns, and we can increase the probability of recognizing the desired pattern at the expense of the undesired ones. Moreover, these undesired patterns can be utilized as new patterns for the system. The proposed model performs much better than classical approaches and is more powerful than current quantum competitive learning approaches.
Keywords: competitive learning, quantum gates, winner-take-all
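For readers unfamiliar with the classical baseline this quantum model is compared against, a minimal winner-take-all competitive learning sketch (assumed textbook form, not the authors' model) looks like:

```python
import numpy as np

def train_wta(patterns, n_units, lr=0.1, epochs=20):
    """Classical winner-take-all competitive learning.

    Weights are seeded from the first n_units patterns (a common trick to
    avoid 'dead' units); each input then moves only its winning unit."""
    w = np.array(patterns[:n_units], dtype=float)
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in patterns:
            winner = np.argmax(w @ x)           # unit most aligned with the input wins
            w[winner] += lr * (x - w[winner])   # only the winner learns
            w[winner] /= np.linalg.norm(w[winner])
    return w

def recall(w, x):
    """Winning unit for an input, even an incomplete (partially zeroed) pattern."""
    return int(np.argmax(w @ np.asarray(x, dtype=float)))
```

The quantum variant in the abstract replaces this deterministic winner update with gate operations that reshape recognition probabilities, which is what allows boosting a desired pattern at the expense of undesired ones.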
Procedia PDF Downloads 472
8572 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States involving motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicates that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections), the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate applications of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians occur at locations that should have been designed for safe crosswalks. Conventional solutions for evaluating intersection safety often require the costly deployment of engineering surveys and analysis, which limits the capacity of resource-constrained administrations to adequately satisfy their community's needs for safe roadways, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures.
The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risks of accidents, using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces, to evaluate visibility and lighting conditions and estimate probabilities for jaywalking and the risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities which conform to similar transportation standards, given the availability of relevant GIS data.
Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
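The visibility evaluation mentioned above rests on line-of-sight tests over elevation data. As a toy sketch of that idea (a simplified grid test, not the paper's GIS workflow; the eye-height parameter is an assumption):

```python
import numpy as np

def line_of_sight(elev, a, b, eye=1.7):
    """True if grid cell b is visible from cell a over an elevation raster.

    Samples the straight line between the cells and checks whether the terrain
    rises above the interpolated sightline (observer/target eye height added)."""
    (r0, c0), (r1, c1) = a, b
    n = max(abs(r1 - r0), abs(c1 - c0))
    h0 = elev[r0, c0] + eye
    h1 = elev[r1, c1] + eye
    for i in range(1, n):
        t = i / n
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        sight = h0 + t * (h1 - h0)      # elevation of the sightline at this step
        if elev[r, c] > sight:
            return False                # terrain blocks the view
    return True
```

Running such a test between driver positions and crosswalk locations, cell by cell, is one way a 3-dimensional surface model can flag blind intersections.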
Procedia PDF Downloads 109
8571 Special Properties of the Zeros of the Analytic Representations of Finite Quantum Systems
Authors: Muna Tabuni
Abstract:
The paper contains an investigation of the special properties of the zeros of the analytic representations of finite quantum systems. These zeros and their paths completely define the finite quantum system. The present paper studies the construction of the analytic representation from its zeros. The analytic functions of finite quantum systems are introduced, and the zeros of the analytic theta functions and their paths are studied. The analytic function f(z) has exactly d zeros, and the analytic function is constructed from these zeros.
Keywords: construction, analytic, representation, zeros
Procedia PDF Downloads 207
8570 Comparison Analysis of Multi-Channel Echo Cancellation Using Adaptive Filters
Authors: Sahar Mobeen, Anam Rafique, Irum Baig
Abstract:
Multichannel acoustic echo cancellation is a system identification application. In a real-time environment, the signal changes very rapidly, which requires adaptive algorithms such as Least Mean Square (LMS), Leaky Least Mean Square (LLMS), Normalized Least Mean Square (NLMS) and average (AFA) filters with a high convergence rate and stability. LMS and NLMS are widely used adaptive algorithms due to their low computational complexity, while AFA is used for its high convergence rate. This research is based on a comparison of acoustic echo (generated in a room) cancellation through LMS, LLMS, NLMS, AFA and the newly proposed average normalized leaky least mean square (ANLLMS) adaptive filters.
Keywords: LMS, LLMS, NLMS, AFA, ANLLMS
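As a hedged illustration of the NLMS family the abstract compares (a textbook single-channel form, not the authors' multichannel implementation; parameter values are assumptions):

```python
import numpy as np

def nlms_echo_cancel(x, d, order=8, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive echo canceller.

    x: far-end reference signal; d: microphone signal (echo of x plus near end).
    Returns the error signal e = d - y (the echo-cancelled output) and the weights."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]       # newest reference sample first
        y = w @ u                              # current echo estimate
        e[n] = d[n] - y
        w += mu * e[n] * u / (u @ u + eps)     # normalization stabilizes the step size
    return e, w
```

Dropping the normalization term gives plain LMS, and adding a small weight-decay factor to `w` gives the leaky variants, which is the design space in which the proposed ANLLMS filter sits.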
Procedia PDF Downloads 566
8569 The Instrumentalization of Digital Media in the Context of Sexualized Violence
Authors: Katharina Kargel, Frederic Vobbe
Abstract:
Sexual online grooming is generally defined as digital interactions for the purpose of the sexual exploitation of children or minors, i.e. as a process of preparing and framing sexual child abuse. Due to its conceptual history, sexual online grooming is often associated with perpetrators who are previously unknown to those affected. While the strategies of perpetrators and the perception of those affected are increasingly being investigated, the instrumentalisation of digital media has not yet been researched in depth. The present paper therefore aims to contribute to closing this research gap by examining the ways in which perpetrators instrumentalise digital media. Our analyses draw on 46 case documentations and 18 interviews with those affected. The cases and the partly narrative interviews were collected by ten cooperating specialist centers working on sexualized violence in childhood and youth. For this purpose, we designed a documentation grid allowing for a detailed case reconstruction, i.e. including information on the violence, digital media use and those affected. Using Reflexive Grounded Theory, our analyses emphasize a) the subjective benchmark of professional practitioners as well as those affected and b) the interpretative implications resulting from our researchers' subjective and emotional interaction with the data material. It should first be noted that sexualized online grooming can result in both online and offline sexualized violence as well as hybrid forms. Furthermore, the perpetrators either come from the immediate social environment of those affected or are unknown to them. With regard to the instrumentalisation of digital media, the perpetrator-victim relationship plays a more important role than the space (online vs. offline) in which the primary violence is committed.
Perpetrators unknown to those affected instrumentalise digital media primarily to establish a sexualized system of norms, which is usually embedded in a supposed love relationship. In some cases, after an initial exchange of sexualized images or video recordings, a latent play on the position of power takes place. In the course of the grooming process, perpetrators from the immediate social environment increasingly instrumentalise digital media to establish an explicit relationship of power and dependence, which is directly determined by coercion, threats and blackmail. Knowledge of possible vulnerabilities is used strategically in the course of maintaining contact. These findings lead to the conclusion that the motive for the crime plays an essential role in how digital media are instrumentalised. It is therefore not surprising that it is mostly perpetrators from the victims' immediate environment, without commercial motives, who initiate a spiral of violence and stress by digitally distributing sexualized (violent) images and video recordings within the reference system of those affected.
Keywords: sexualized violence, children and youth, grooming, offender strategies, digital media
Procedia PDF Downloads 184
8568 Sampling and Chemical Characterization of Particulate Matter in a Platinum Mine
Authors: Juergen Orasche, Vesta Kohlmeier, George C. Dragan, Gert Jakobi, Patricia Forbes, Ralf Zimmermann
Abstract:
Underground mining poses a difficult environment for both man and machines. At more than 1000 meters underneath the surface of the earth, ores and other mineral resources are still extracted by conventional and motorised mining. In addition to the hazards caused by blasting and stone-chipping, the working conditions are characterized by high temperatures of 35-40°C and high humidity, at low air exchange rates. Separate ventilation shafts lead fresh air into a mine and others lead expended air back to the surface. This is essential for humans and machines working deep underground. Nevertheless, mines are widely ramified, so the air flow rate at the far end of a tunnel can be close to zero. In recent years, conventional mining has been supplemented by mining with heavy diesel machines. These very flat machines, called Load Haul Dump (LHD) vehicles, accelerate and ease work in areas favourable for heavy machines. On the other hand, they emit unfiltered diesel exhaust, which constitutes an occupational hazard for the miners. Combined with low air exchange, high humidity and inorganic dust from the mining, this leads to 'black smog' underground. This work focuses on the air quality in mines employing LHDs. Therefore, we performed personal sampling (samplers worn by miners during their work), stationary sampling and aethalometer (Microaeth MA200, Aethlabs) measurements in a platinum mine at around 1000 meters below the earth's surface. We compared areas of high diesel exhaust emission with areas of conventional mining where no diesel machines were operated. For a better assessment of the health risks caused by air pollution, we applied a separate gas-/particle-sampling system, with a first denuder section collecting intermediate VOCs. These multi-channel silicone rubber denuders are able to trap IVOCs while allowing particles ranging from 10 nm to 1 µm in diameter to be transmitted with an efficiency of nearly 100%.
The second section is a quartz fibre filter collecting particles and adsorbed semi-volatile organic compounds (SVOC). The third part is a graphitized carbon black adsorber, collecting the SVOCs that evaporate from the filter. The compounds collected on these three sections were analyzed in our labs with different thermal desorption techniques coupled with gas chromatography and mass spectrometry (GC-MS). VOCs and IVOCs were measured with a Shimadzu Thermal Desorption Unit (TD20, Shimadzu, Japan) coupled to a GCMS-System QP 2010 Ultra with a quadrupole mass spectrometer (Shimadzu). The GC was equipped with a 30m BP-20 wax column (0.25mm ID, 0.25µm film) from SGE (Australia). Filters were analyzed with in-situ derivatization thermal desorption gas chromatography time-of-flight mass spectrometry (IDTD-GC-TOF-MS). The IDTD unit is a modified GL Sciences Optic 3 system (GL Sciences, Netherlands). The results showed black carbon concentrations, measured with the portable aethalometers, of up to several mg per m³. The organic chemistry was dominated by very high concentrations of alkanes. Typical diesel engine exhaust markers like alkylated polycyclic aromatic hydrocarbons were detected, as well as typical lubrication oil markers like hopanes.
Keywords: diesel emission, personal sampling, aethalometer, mining
Procedia PDF Downloads 157
8567 Properties of Cement Pastes with Different Particle Size Fractions of Metakaolin
Authors: M. Boháč, R. Novotný, F. Frajkorová, R. S. Yadav, T. Opravil, M. Palou
Abstract:
Properties of Portland cement mixtures with various fractions of metakaolin were studied. 10% of Portland cement CEM I 42.5 R was replaced by different fractions of high-reactivity metakaolin with defined chemical and mineralogical properties. The various fractions of metakaolin were prepared by a jet mill classifying system. There is a clear trend between the fineness of metakaolin and hydration heat development. Due to the presence of metakaolin in the mixtures, the compressive strength development of mortars is somewhat slower for coarser fractions, but 28-day flexural strengths are improved for all fractions of metakaolin used, compared to the reference sample of pure Portland cement. The yield point, plastic viscosity and adhesion of fresh pastes are considerably influenced by the fineness of the metakaolin used in the cement pastes.
Keywords: calorimetry, cement, metakaolin fineness, rheology, strength
Procedia PDF Downloads 414
8566 The Role of Group Size, Public Employees’ Wages and Control Corruption Institutions in a Game-Theoretical Model of Public Corruption
Authors: Pablo J. Valverde, Jaime E. Fernandez
Abstract:
This paper shows under which conditions public corruption can emerge. The theoretical model includes variables such as the public employee wage (w), a corruption control parameter (c), and the group size (GS) of interactions between clusters of public officers and contractors. The system behavior is analyzed using phase diagrams based on combinations of these parameters (c, w, GS). Numerical simulations are implemented in order to check the analytic results based on Nash equilibria of the theoretical model. Major findings include the functional relationship between wages and network topology, which can be used to reduce the emergence of corrupt behavior.
Keywords: public corruption, game theory, complex systems, Nash equilibrium
Procedia PDF Downloads 242
8565 A 'Four Method Framework' for Fighting Software Architecture Erosion
Authors: Sundus Ayyaz, Saad Rehman, Usman Qamar
Abstract:
Software Architecture is the basic structure of software that states the development and advancement of a software system. Software architecture is also considered as a significant tool for the construction of high quality software systems. A clean design leads to the control, value and beauty of software resulting in its longer life while a bad design is the cause of architectural erosion where a software evolution completely fails. This paper discusses the occurrence of software architecture erosion and presents a set of methods for the detection, declaration and prevention of architecture erosion. The causes and symptoms of architecture erosion are observed with the examples of prescriptive and descriptive architectures and the practices used to stop this erosion are also discussed by considering different types of software erosion and their affects. Consequently finding and devising the most suitable approach for fighting software architecture erosion and in some way reducing its affect is evaluated and tested on different scenarios.Keywords: software architecture, architecture erosion, prescriptive architecture, descriptive architecture
Procedia PDF Downloads 500
8564 Electronic and Computer-Assisted Refreshable Braille Display Developed for Visually Impaired Individuals
Authors: Ayşe Eldem, Fatih Başçiftçi
Abstract:
The Braille alphabet is an important tool that enables visually impaired individuals to live as comfortably as those with normal vision. For this reason, new applications related to the Braille alphabet are being developed. In this study, a new refreshable Braille display was developed to help visually impaired individuals learn the Braille alphabet more easily. By means of this system, any text downloaded onto a computer can be read immediately by a visually impaired individual by feeling it with his or her hands. This electronic device aims to make learning the Braille alphabet easier for visually impaired individuals, with whom the necessary tests were conducted.
Keywords: visually impaired individual, Braille, Braille display, refreshable Braille display, USB
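A display like this must translate each character into a six-dot cell pattern before driving the pins. As a small sketch of that mapping (standard Braille dot numbering for the letters a-j and the Unicode Braille Patterns block; the function name is illustrative, not from the study):

```python
# Dot numbers for the first ten letters of the Braille alphabet.
DOTS = {'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5), 'e': (1, 5),
        'f': (1, 2, 4), 'g': (1, 2, 4, 5), 'h': (1, 2, 5), 'i': (2, 4), 'j': (2, 4, 5)}

def braille(ch):
    """Map a letter to its Unicode braille pattern: U+2800 plus a bitmask
    where dot n sets bit n-1. The same bitmask could drive a pin actuator row."""
    mask = sum(1 << (d - 1) for d in DOTS[ch])
    return chr(0x2800 + mask)
```

In a hardware display, the same bitmask that selects the Unicode glyph would select which pins to raise for the current cell.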
Procedia PDF Downloads 345
8563 Spline Solution of Singularly Perturbed Boundary Value Problems
Authors: Reza Mohammadi
Abstract:
Using a quartic spline, we develop a method for the numerical solution of singularly perturbed two-point boundary-value problems. The proposed method is fourth-order accurate and applicable to problems in both singular and non-singular cases. The convergence analysis of the method is given. The resulting linear system of equations is solved using a tri-diagonal solver. We applied the presented method to test problems that have been solved by other existing methods in the references, in order to compare it with those methods. Numerical results are given to illustrate the efficiency of our method.
Keywords: second-order ordinary differential equation, singularly-perturbed, quartic spline, convergence analysis
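The tri-diagonal solver mentioned above is typically the Thomas algorithm; as a generic sketch of such a solver (the standard algorithm, not the paper's specific spline discretization):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back substitution.

    a: sub-diagonal (length n-1), b: diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward sweep
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because the spline collocation equations couple only neighbouring mesh points, each solve costs O(n) rather than the O(n³) of a dense solver, which is what makes the method cheap even on fine meshes.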
Procedia PDF Downloads 295
8562 Effect of Self-Lubricating Carbon Materials on the Tribological Performance of Ultra-High Molecular Weight Polyethylene
Authors: Nayeli Camacho, Fernanda Lara-Perez, Carolina Ortega-Portilla, Diego G. Espinosa-Arbelaez, Juan M. Alvarado-Orozco, Guillermo C. Mondragon-Rodriguez
Abstract:
Ultra-high molecular weight polyethylene (UHMWPE) has been the gold standard material for total knee replacements for almost five decades. Wear damage to the UHMWPE articulating surface is inevitable due to the natural sliding and rolling movements of the knee. This generates a considerable amount of wear debris, which causes mechanical instability of the joint, reduces joint mobility, increases pain with detrimental biologic responses, and causes component loosening. The presence of wear particles has been closely related to adverse reactions in the tissue surrounding the knee joint, especially for particles in the range of 0.3 to 2 μm. Carbon-based materials possess excellent mechanical properties and have shown great promise in tribological applications. In this study, diamond-like carbon (DLC) coatings and carbon nanotubes (CNTs) were used to decrease the wear rate of UHMWPE. A titanium-doped DLC (Ti-DLC) coating was deposited by magnetron sputtering on stainless steel precision spheres, while CNTs were used as a second-phase reinforcement in UHMWPE at a concentration of 1.25 wt.%. A comparative tribological analysis of the wear of UHMWPE and UHMWPE-CNTs against a stainless steel counterpart with and without the Ti-DLC coating is presented. The experimental wear testing was performed on a pin-on-disc tribometer under dry conditions, using a reciprocating movement with a load of 1 N at a frequency of 2 Hz for 100,000 and 200,000 cycles. The wear tracks were analyzed with high-resolution scanning electron microscopy to determine wear modes and observe the size and shape of the wear debris. Furthermore, profilometry was used to study the depth of the wear tracks and to map the wear of the articulating surface. The wear tracks at 100,000 and 200,000 cycles on all samples were relatively shallow, in the range of the average roughness.
It was observed that the Ti-DLC coating decreases the mass loss of the UHMWPE and the depth of the wear track. The combination of both carbon-based materials decreased the material loss compared to the stainless steel-UHMWPE system. Burnishing of the surface was the predominant wear mode observed in all systems, and was more subtle in the systems with Ti-DLC coatings. Meanwhile, in the stainless steel-UHMWPE system, the intrinsic surface roughness of the material was completely replaced by the wear tracks.
Keywords: CNT reinforcement, self-lubricating materials, Ti-DLC, UHMWPE tribological performance
Procedia PDF Downloads 110
8561 Data Quality and Associated Factors on Regular Immunization Programme at Ararso District: Somali Region-Ethiopia
Authors: Eyob Seife, Molla Alemayaehu, Tesfalem Teshome, Bereket Seyoum, Behailu Getachew
Abstract:
Globally, immunization averts between 2 and 3 million deaths yearly, but vaccine-preventable diseases still account for a large share of under-five deaths in Sub-Saharan African countries, which indicates the need for consistent and timely information to support evidence-based decisions and save the lives of these vulnerable groups. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remain critical and challenging, especially in remote areas. The Ararso district was selected based on the hypothesis that there is a difference in the consistency of reported and recounted immunization data. Data quality depends on several factors, among them organizational, behavioral, technical and contextual ones. A cross-sectional quantitative study was conducted in September 2022 in the Ararso district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers and reporting documents were reviewed at 4 health facilities (1 health center and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio, availability and timeliness of reports. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) and the availability and timeliness of reports were assessed. The accuracy ratios computed were for: the first and third doses of pentavalent vaccines, fully immunized (FI), TT2+ and the first dose of measles-containing vaccines (MCV). In this study, facility-level results showed poor timeliness at all levels, and both over-reporting and under-reporting were observed when computing the accuracy ratio of registrations against the health post reports found at health centers, for almost all antigens verified.
The quality index (QI) of all facilities also showed poor results. Most of the verified immunization data accuracy ratios were found to be relatively better than the quality index and the timeliness of reports. Attention should therefore be given to improving the capacity of staff, the timeliness of reports, and the quality of monitoring system components, namely recording, reporting, archiving, data analysis and using information for decisions at all levels, especially in remote areas.
Keywords: accuracy ratio, Ararso district, quality of monitoring system, regular immunization program, timeliness of reports, Somali region-Ethiopia
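For readers unfamiliar with the DQS accuracy ratio, the core calculation is a simple verification ratio: doses recounted from tally sheets or registers against the figure in the submitted report. A minimal sketch (simplified; the WHO tool adds sampling and aggregation rules beyond this):

```python
def accuracy_ratio(recounted, reported):
    """Recounted doses (from source documents) over reported doses, in percent.

    100 means the report matches the source; values above 100 suggest
    under-reporting, values below 100 suggest over-reporting."""
    return None if reported == 0 else 100.0 * recounted / reported
```

Computing this per antigen (penta1, penta3, FI, TT2+, MCV1) at each facility is what exposes the mixed over- and under-reporting pattern described above.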
Procedia PDF Downloads 72
8560 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation
Authors: Calorine Twebaze, Jesca Balinga
Abstract:
Field X is located on the Eastern shores of L. Albert, Uganda, on the rift flank where the gross sedimentary fill is typically less than 2,000m. The field was discovered in 2006 and encountered about 20.4m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km2, with the structural configuration comprising a 3-way dip-closed hanging wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, which was originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing work reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study was a re-evaluation of Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis, spanning seismic-to-well ties, structural interpretation, and structural uncertainty analysis. Analysis of the ties generated for the 3 wells provided a geophysical interpretation that was consistent with the geological picks. The generated time-depth curves showed a general increase in velocity with burial depth; the separation in curve trends observed below 1100m was mainly attributed to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve, Vo + kZ, and average velocity methods. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X.
The time-depth method resulted in the most reliable depth surfaces, with good structural coherence between the TWT and depth maps and minimal errors of 2 to 5m at well locations. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. In addition, the new interpretation delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels; it thus propagates from the basement to the surface and is an active fault today. It was also noted that the field is only lightly faulted overall, with more faults in its deeper part. The major structural uncertainties identified included: 1) the time horizons, due to reduced data quality, especially in the deeper parts of the structure, for which an error equal to one-third of the reflection time thickness was assumed; 2) check shot analysis, which showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points available from the limited wells, which produced a pessimistic average velocity model.
Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches
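Two of the velocity-modeling approaches named above have compact closed forms. As a hedged sketch (the standard textbook formulas, not the study's calibrated models; the parameter values in the test are invented), the Vo + kZ model integrates the linear instantaneous velocity V(z) = V0 + k·z over one-way time, while the average-velocity method is a straight multiplication:

```python
import math

def depth_linear_vk(t_owt, v0, k):
    """Depth from one-way travel time for V(z) = V0 + k*z.

    Integrating dz/dt = V0 + k*z gives z(t) = (V0/k) * (exp(k*t) - 1).
    Seismic two-way time must be halved first: t_owt = TWT / 2."""
    return (v0 / k) * (math.exp(k * t_owt) - 1.0)

def depth_average_velocity(t_owt, v_avg):
    """Depth from one-way travel time using a constant average velocity."""
    return v_avg * t_owt
```

Calibrating either model means adjusting its parameters until the predicted depths match the well tops, then judging the residuals, which is how the time-depth method was selected here.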
Procedia PDF Downloads 59
8559 Effect of Control Lasers Polarization on Absorption Coefficient and Refractive Index of a W-Type 4-Level Cylindrical Quantum Dot in the Presence of Electromagnetically Induced Transparency (EIT)
Authors: Marziehossadat Moezzi
Abstract:
In this paper, electromagnetically induced transparency (EIT) is investigated in a cylindrical quantum dot (QD) with a parabolic confinement potential. We study the effect of the control lasers' polarization on the absorption coefficient and the refractive index, and also on the generation of the double transparency windows in this system. Using an effective mass method, the time-independent Schrödinger equation is solved to obtain the energy structure of the QD. We also study the effect of the structural characteristics of the QD on its refraction and absorption in the presence of EIT.
Keywords: electromagnetically induced transparency, cylindrical quantum dot, absorption coefficient, refractive index
Procedia PDF Downloads 198
8558 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations
Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis
Abstract:
In-flight devices are nowadays major actors in both military and civilian landscapes. Missiles, mortars, rockets and, over the last decade, drones are increasingly sophisticated, and it is today a priority to develop ever more efficient systems to defend against all these potential threats. In this frame, recent high energy laser weapon prototypes (HEL) have demonstrated extremely good operational abilities, shooting down flying targets several kilometers off within seconds. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored in order to closely predict the effects of a high-energy laser irradiation on typical structures, leading finally to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied to laser source development in the mid-60s, mainly for specific metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at light velocity over large ranges. In recent years, a strong focus was placed at ISL on the interaction process of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and Unmanned Air Vehicles (UAVs) are of utmost interest in modern warfare. Those aerial systems are usually made up of polymer-based composite materials, whose complexity involves new scientific challenges.
Alongside this main laser-matter interaction activity, a lot of experimental and numerical knowledge has been gathered at ISL within domains like spectrometry, thermodynamics and mechanics. Techniques and devices were developed to study each aspect of this topic separately; optical characterization, thermal investigations, chemical reaction analysis and mechanical examinations are carried out to neatly estimate essential key values. Results from these diverse tasks are then incorporated into the analytic or FE numerical models that were elaborated, for example, to predict the thermal repercussions on explosive charges or the mechanical failures of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.
Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling
Procedia PDF Downloads 263
8557 Buzan Mind Mapping: An Efficient Technique for Note-Taking
Authors: T. K. Tee, M. N. A. Azman, S. Mohamed, M. Muhammad, M. M. Mohamad, J. Md Yunos, M. H. Yee, W. Othman
Abstract:
Buzan mind mapping is an efficient system of note-taking that makes revision fun for students. Tony Buzan has been teaching children all over the world for the past thirty years and has proved that mind maps are the magic formula in the classroom for everyone. The purpose of this paper is to discuss the importance of Buzan mind mapping as a note-taking technique for secondary school students. This paper also examines the mind mapping technique and the advantages and disadvantages of hand-drawn mind maps. Samples of students' mind maps are presented and discussed.
Keywords: Buzan mind mapping, note-taking technique, hand-drawn, mind maps
Procedia PDF Downloads 538
8556 Semantic Analysis of the Change in Awareness of Korean College Admission Policy
Authors: Sujin Hwang, Hyerang Park, Hyunchul Kim
Abstract:
The purpose of this study is to assess the effectiveness of the admission simplification policy. Online news articles about 'high school records' were collected and semantically analyzed to identify social awareness of the policy during 2014 to 2015. The main results of the study are as follows. First, contrary to the expectation announced by KCUE that the burden on examinees would decrease, there was still a strain on the university entrance exam after the enforcement of the policy. Second, private tutoring is expanding in different forms rather than being reduced by the policy, contrary to the prediction that examinees could prepare for university admission without it. Thus, the college admission rules currently enforced need to be improved, and reasonable changes to the college admission system are discussed. Keywords: education policy, private tutoring, shadow education, education admission policy
Procedia PDF Downloads 227
8555 A Risk-Based Comprehensive Framework for the Assessment of the Security of Multi-Modal Transport Systems
Authors: Mireille Elhajj, Washington Ochieng, Deeph Chana
Abstract:
The challenges of the rapid growth in the demand for transport have traditionally been seen within the context of the problems of congestion, air quality, climate change, safety, and affordability. However, there are increasing threats, including crime-related ones such as cyber-attacks, that endanger the security of the transport of people and goods. To the best of the authors' knowledge, this paper presents, for the first time, a comprehensive framework for the assessment of the current and future security issues of multi-modal transport systems. The proposed method is based on a structured framework starting with a detailed specification of the transport asset map (transport system architecture), followed by the identification of vulnerabilities. The asset map and vulnerabilities are used to identify the various approaches for exploiting the vulnerabilities, leading to the creation of a set of threat scenarios. The threat scenarios are then transformed into risks and risk categories, together with insights for their mitigation. The consideration of the mitigation space is holistic and includes the formulation of appropriate policies and tactics and/or technical interventions. The quality of the framework is ensured through a structured and logical process that identifies the stakeholders, reviews the relevant documents including policies and identifies gaps, incorporates targeted surveys to augment the reviews, and uses subject matter experts for validation. The approach to categorising security risks is an extension of the methods typically employed today. Specifically, partitioning risks into either physical or cyber categories is too limited for developing mitigation policies and tactics/interventions for transport systems, where an interplay between physical and cyber processes is very often the norm.
This interplay is rapidly taking on increasing significance for security, as emerging cyber-physical technologies are shaping the future of all transport modes. Examples include: Connected Autonomous Vehicles (CAVs) in road transport; the European Rail Traffic Management System (ERTMS) in rail transport; the Automatic Identification System (AIS) in maritime transport; advanced Communications, Navigation and Surveillance (CNS) technologies in air transport; and the Internet of Things (IoT). The framework adopts a risk categorisation scheme that considers risks as falling within the following threat→impact relationships: Physical→Physical, Cyber→Cyber, Cyber→Physical, and Physical→Cyber. Thus the framework enables a more complete risk picture to be developed for today's transport systems and, more importantly, is readily extendable to account for emerging trends in the sector that will define future transport systems. The framework facilitates the audit and retro-fitting of mitigations in current transport operations and the analysis of security management options for the next generation of transport, enabling strategic aspirations such as systems with security-by-design and the co-design of safety and security to be achieved. An initial application of the framework to transport systems has shown that intra-modal consideration of security measures is sub-optimal, and that a holistic and multi-modal approach, one that also addresses the intersections/transition points of such networks, is required, as their vulnerability is high. This is in line with traveler-centric transport service provision, widely accepted as the future of mobility services. In summary, a risk-based framework is proposed for use by stakeholders to comprehensively and holistically assess the security of transport systems.
It requires a detailed understanding of the transport architecture to enable a detailed vulnerability analysis to be undertaken, creates threat scenarios, and transforms them into risks, which form the basis for the formulation of interventions. Keywords: mitigations, risk, transport, security, vulnerabilities
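The threat→impact partitioning described in this abstract can be illustrated with a small sketch: each threat scenario is labelled by the domain its attack originates in and the domain where the consequence is felt. The scenario names below are invented examples, not entries from the framework itself.

```python
# Toy illustration of the threat->impact risk partitioning described above
# (Physical->Physical, Cyber->Cyber, Cyber->Physical, Physical->Cyber).
# Scenarios are invented examples for illustration only.

from dataclasses import dataclass

CATEGORIES = {("physical", "physical"), ("cyber", "cyber"),
              ("cyber", "physical"), ("physical", "cyber")}

@dataclass
class ThreatScenario:
    name: str
    threat: str    # domain in which the attack originates
    impact: str    # domain in which the consequence is felt

def categorise(scenario):
    """Map a scenario onto one of the four threat->impact categories."""
    key = (scenario.threat, scenario.impact)
    if key not in CATEGORIES:
        raise ValueError(f"unknown category {key}")
    return f"{scenario.threat}->{scenario.impact}"

scenarios = [
    ThreatScenario("GNSS spoofing of an autonomous vehicle", "cyber", "physical"),
    ThreatScenario("theft of a signalling maintenance laptop", "physical", "cyber"),
]
labels = [categorise(s) for s in scenarios]
```

The point of the four-way scheme, as the abstract argues, is that the cross-domain categories (Cyber→Physical, Physical→Cyber) are exactly the ones a plain physical/cyber split cannot express.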
Procedia PDF Downloads 165
8554 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes
Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen
Abstract:
Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing is to electrically bond the LPS and the composite structures so that they are at the same electric potential. Nevertheless, elevated temperatures are reached at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bonding. The resulting high-pressure gases open up the bonding and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and whether any damage is to be expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current comprises the Maxwell-Ampere equation for the induced electric field solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis allows threshold values of voltage and current density to be determined that must not be exceeded in order to keep the temperature across the joint below the pyrolysis temperature, thereby preventing the occurrence of outgassing. In addition, it provides an indication of the minimal number of bonding points.
It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to types of joints other than the adhesive joints of wind turbine blades. For instance, they can be applied to the lightning protection of aerospace bolted joints. Furthermore, they can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components. Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades
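A drastically simplified, lumped version of the coupling described in this abstract can show the logic of the threshold calculation: Joule heat released at a bonded joint is fed into a thermal balance and compared against the resin pyrolysis temperature to estimate a minimal number of bonding points. All numbers (contact resistance, pulse duration, heat capacity, pyrolysis temperature) are illustrative assumptions, not values from the paper, which uses a full FEM solution.

```python
# Back-of-envelope sketch of the weak electro-thermal coupling described
# above: Joule heat at a joint vs. the resin pyrolysis temperature.
# All parameter values are illustrative assumptions.

def joint_temperature(current, n_joints, contact_resistance=1e-3,
                      duration=0.5, heat_capacity=2.0, t_ambient=293.0):
    """Temperature (K) of one joint after a current pulse is shared
    equally between n_joints parallel bonding points (lumped, adiabatic)."""
    i_per_joint = current / n_joints
    joule_heat = i_per_joint**2 * contact_resistance * duration  # joules
    return t_ambient + joule_heat / heat_capacity

def min_bonding_points(current, t_pyrolysis=573.0, max_points=100, **kw):
    """Smallest number of parallel joints keeping each below pyrolysis."""
    for n in range(1, max_points + 1):
        if joint_temperature(current, n, **kw) < t_pyrolysis:
            return n
    return None  # no feasible configuration within max_points
```

Because the heat per joint scales with the inverse square of the number of joints, adding bonding points pays off quickly; the FEM model in the paper resolves the same trade-off spatially.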
Procedia PDF Downloads 164
8553 Comparison Analysis of CFD Turbulence Fluid Numerical Study for Quick Coupling
Authors: JoonHo Lee, KyoJin An, JunSu Kim, Young-Chul Park
Abstract:
In this study, the fluid flow characteristics and performance of a non-split quick coupling, used for flow control in hydraulic system equipment for the aerospace sector, are predicted numerically through a CFD model. Several turbulence models were considered in the computational fluid dynamics analysis of the coupling. In addition, the adequacy of the CFD model was verified by comparison with standard values. Based on this analysis, the fluid flow characteristics can be predicted accurately. The resulting characterization of the fluid flow therefore contributes to the reliability of the quick coupling required by industry. Keywords: CFD, FEM, quick coupling, turbulence
Procedia PDF Downloads 384
8552 Towards a Conscious Design in AI by Overcoming Dark Patterns
Authors: Ayse Arslan
Abstract:
One of the important elements underpinning a conscious design is the degree of toxicity in communication. This study explores mechanisms and strategies for identifying toxic content while avoiding dark patterns. Given the breadth of hate and harassment attacks, the study develops a threat model and taxonomy to assist in reasoning about strategies for detection, prevention, mitigation, and recovery. In addition to identifying relevant techniques such as nudges, automatic detection, or human ranking, the study suggests the use of metrics such as the overhead and friction that solutions impose on platforms and users, and the balancing of false positives (e.g., incorrectly penalizing legitimate users) against false negatives (e.g., users exposed to hate and harassment), to maintain a conscious design oriented towards fairness. Keywords: AI, ML, algorithms, policy, system design
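The false-positive/false-negative balancing mentioned in this abstract can be made concrete with a small sketch: sweep a toxicity-score threshold and choose the one minimising a weighted cost, where a false positive penalises a legitimate user and a false negative leaves a user exposed. The scores, labels and cost weights below are synthetic assumptions, not data from the study.

```python
# Sketch of the false-positive / false-negative balancing metric:
# sweep a toxicity-score threshold, minimise a weighted error cost.
# Scores, labels and weights are synthetic, for illustration only.

def pick_threshold(scores, labels, fp_cost=1.0, fn_cost=2.0):
    """labels: 1 = toxic, 0 = legitimate. Returns (best_threshold, best_cost).
    Content with score >= threshold is flagged as toxic."""
    best = (None, float("inf"))
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        cost = fp_cost * fp + fn_cost * fn
        if cost < best[1]:
            best = (t, cost)
    return best

scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.95]   # model toxicity scores
labels = [0,   0,   1,    1,   0,   1]      # ground truth
threshold, cost = pick_threshold(scores, labels)
```

Weighting false negatives more heavily than false positives (or the reverse) is exactly the fairness trade-off the abstract points at; the "right" weights are a policy decision, not a technical constant.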
Procedia PDF Downloads 121
8551 The Application of Data Mining Technology in Building Energy Consumption Data Analysis
Authors: Liang Zhao, Jili Zhang, Chongquan Zhong
Abstract:
Energy consumption data, in particular those of public buildings, are affected by many factors: the building structure, climate and environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods of data analysis are insufficient. This paper examines data mining technology and its application to the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and directions for future research are given. Keywords: data mining, data analysis, prediction, optimization, building operational performance
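The simplest instance of the "energy consumption prediction" task named in this abstract is a regression of consumption against a driver such as outdoor temperature. The sketch below fits an ordinary least-squares line to synthetic data; a real data-mining study would use richer features (occupancy, weather, schedules) and richer models.

```python
# Minimal energy-consumption-prediction example: ordinary least squares
# on synthetic daily data. Numbers are invented for illustration.

def ols_fit(x, y):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Synthetic: cooling load grows with outdoor temperature.
temps = [20.0, 24.0, 28.0, 32.0, 36.0]        # daily mean, deg C
kwh   = [210.0, 290.0, 370.0, 450.0, 530.0]   # daily consumption, kWh

slope, intercept = ols_fit(temps, kwh)
predicted = slope * 30.0 + intercept          # forecast for a 30 C day
```

Fault diagnosis, the abstract's second task, often reuses the same fit: days whose actual consumption deviates strongly from the regression line are flagged as anomalous.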
Procedia PDF Downloads 852
8550 Absorption Control of Organic Solar Cells under LED Light for High Efficiency Indoor Power System
Authors: Premkumar Vincent, Hyeok Kim, Jin-Hyuk Bae
Abstract:
Organic solar cells have high potential for absorbing light much weaker than 1-sun illumination in indoor environments. They also have several practical advantages, such as flexibility, cost advantage, and semi-transparency, that give them superiority in indoor solar energy harvesting. We investigate organic solar cells based on poly(3-hexylthiophene) (P3HT) and indene-C60 bisadduct (ICBA) for indoor application, running finite-difference time-domain (FDTD) simulations to find the optimized structure. This may provide the highest short-circuit current density, and hence high efficiency, under indoor illumination. Keywords: indoor solar cells, indoor light harvesting, organic solar cells, P3HT:ICBA, renewable energy
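The short-circuit current density targeted in this abstract is, for a given illumination spectrum, J_sc = q ∫ EQE(λ) Φ(λ) dλ, where Φ is the photon flux. The sketch below evaluates this integral by the trapezoidal rule; the narrow LED-like spectrum and the flat external quantum efficiency are assumptions for illustration, not P3HT:ICBA data or FDTD output.

```python
# J_sc = q * integral( EQE(lambda) * photon_flux(lambda) d lambda ),
# evaluated with the trapezoidal rule. Spectrum and EQE are invented.

Q = 1.602176634e-19  # elementary charge, C

def jsc(wavelengths_nm, photon_flux, eqe):
    """photon_flux in photons / (m^2 s nm); returns J_sc in A/m^2."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dl = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f = (eqe[i] * photon_flux[i] + eqe[i + 1] * photon_flux[i + 1]) / 2
        total += f * dl
    return Q * total

wl   = [400.0, 500.0, 600.0, 700.0]    # nm
flux = [1e18, 2e18, 1.5e18, 0.5e18]    # narrow, indoor-LED-like spectrum
eqe  = [0.6, 0.6, 0.6, 0.6]            # flat EQE assumption

j = jsc(wl, flux, eqe)                  # A/m^2
```

Maximising the wavelength-resolved overlap between absorption (EQE) and the LED emission spectrum is precisely what the FDTD structure optimisation in the paper is after.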
Procedia PDF Downloads 308
8549 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang
Abstract:
Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at the regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles, with a 3 km pixel size at nadir, offering new and unique capabilities for LST and LSE measurement. However, owing to its high temporal resolution, the atmospheric correction cannot be performed directly with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part, six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined by ECMWF data at the four synoptic times (UTC 00:00, 06:00, 12:00, 18:00) per day for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. The atmospheric transmittance and upwelling radiance are thus preliminarily removed with the aid of the instantaneous WVC, retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a set of brightness temperatures for the surface-leaving radiance (Tg) is acquired.
Subsequently, a set of the six DTC-model parameters is fitted to these Tg with a Levenberg-Marquardt least-squares algorithm (denoted DTC model 1). Although the retrieval error of the WVC and the approximate relationships between WVC and the atmospheric parameters introduce some uncertainty, this does not significantly affect the determination of the three parameters td, ts and β (β is the angular frequency, td the time at which Tg reaches its maximum, and ts the starting time of attenuation) in the DTC model. Furthermore, because of the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With td, ts and β known, a new DTC model (denoted DTC model 2) is accurately fitted again to the Tg at UTC 05:57, 11:57, 17:57 and 23:57, atmospherically corrected with ECMWF data. A new set of the six DTC-model parameters is thereby generated, from which the Tg at any given time can be obtained. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without further assumptions, and that the Tg derived with the improved method are much more consistent with radiosonde measurements. Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI
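A common two-part, six-parameter DTC form (the Göttsche-Olesen type: a cosine during the day, an exponential decay after the attenuation time ts) is sketched below as an illustration of the kind of function fitted in the study; the paper's exact parameterisation may differ, and the parameter values are invented. The decay constant is chosen so the two parts join with a continuous slope.

```python
import math

# Two-part, six-parameter diurnal temperature cycle (DTC) model,
# Gottsche-Olesen type, shown as an illustrative sketch. Parameter
# values below are invented, not fitted SEVIRI/ECMWF results.

def dtc(t, T0, Ta, tm, ts, omega, dT):
    """Surface temperature (K) at local time t (hours).
    T0: base temperature, Ta: amplitude, tm: time of maximum,
    ts: start of attenuation, omega: half-period width,
    dT: night-time offset of the decay asymptote."""
    if t < ts:
        return T0 + Ta * math.cos(math.pi / omega * (t - tm))
    # decay constant k chosen so day and night parts join with
    # matching value and slope at t = ts (C1 continuity)
    c = Ta * math.cos(math.pi / omega * (ts - tm))
    k = omega / math.pi * (c - dT) \
        / (Ta * math.sin(math.pi / omega * (ts - tm)))
    return T0 + dT + (c - dT) * math.exp(-(t - ts) / k)

# continuity check around ts for plausible parameter values
p = dict(T0=300.0, Ta=15.0, tm=13.0, ts=17.0, omega=12.0, dT=-2.0)
left, right = dtc(16.999, **p), dtc(17.001, **p)
```

In the study, the six parameters would be fitted to the Tg series with a Levenberg-Marquardt least-squares routine; the curve can then be evaluated at any acquisition time, which is exactly what makes the 15-minute interpolation possible.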
Procedia PDF Downloads 268
8548 Features of Annual Junior Men's Kayak Training Loads in China
Authors: Liu Haitao, Wang Hengyong
Abstract:
This paper deconstructs and analyzes the annual training program of junior men's kayakers in Zhaoqing City, describing the characteristics of their training loads, extracting the key issues in junior kayak training, and clarifying the causes of its bottlenecks in Zhaoqing City. On the one hand, it gives coaches a scientific basis for adjusting training loads and the periodic structure, and provides a practical reference for junior kayak athletes. On the other hand, the study of their training loads enriches the theoretical system of kayak training and provides a theoretical basis for the training of junior kayak athletes. Keywords: juniors, kayak, training programs, full year
Procedia PDF Downloads 588
8547 Facile Synthesis of Copper Based Nanowires Suitable for Lithium Ion Battery Application
Authors: Zeinab Sanaee, Hossein Jafaripour
Abstract:
Copper is an excellent conductive material that is widely used as the current collector in energy devices such as lithium-ion batteries and supercapacitors. Copper oxide nanowires, in turn, have been used in these applications as a potential electrode material. In this paper, copper and copper oxide nanowires are synthesized through a simple, time- and cost-effective approach. Thermally grown copper oxide nanowires are converted into copper nanowires through annealing in a hydrogen atmosphere in a DC-PECVD system. To achieve proper formation of the copper nanostructure, an Au nanolayer was coated on the surface of the copper oxide nanowires. The results show the successful formation of copper nanowires without deformation or cracking. These structures have great potential for lithium-ion batteries and supercapacitors. Keywords: copper, copper oxide, nanowires, hydrogen annealing, lithium ion battery
Procedia PDF Downloads 87
8546 Tolerating Input Faults in Asynchronous Sequential Machines
Authors: Jung-Min Yang
Abstract:
A method of tolerating input faults in input/state asynchronous sequential machines is proposed. A corrective controller is placed in front of the asynchronous machine under consideration to realize model matching with a reference model. The value of the external input transmitted to the closed-loop system may be changed by a fault. We address the existence condition for a controller that can counteract the adverse effects of any input fault while maintaining the objective of model matching. A design procedure for constructing the controller is outlined. The proposed reachability condition for the controller design is validated in an illustrative example. Keywords: asynchronous sequential machines, corrective control, fault tolerance, input faults, model matching
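The corrective-control idea in this abstract can be caricatured with a tiny example: a controller sits between the external input and the machine, detects that a transmitted symbol was corrupted, and substitutes a corrective input so the closed loop still matches the reference model. The two-state machine, the fault model, and the assumption that the controller can recover the intended symbol (e.g. via a redundant channel) are all inventions for illustration; the paper's construction works from state feedback and reachability conditions.

```python
# Toy closed-loop sketch of corrective control under input faults.
# Machine, fault model and "intended symbol known to the controller"
# are illustrative assumptions, not the paper's construction.

MACHINE = {  # stable-state transition table: (state, input) -> next state
    ("a", "0"): "a", ("a", "1"): "b",
    ("b", "0"): "a", ("b", "1"): "b",
}

def reference(state, symbol):
    """Reference model the closed loop must match (here: the machine
    driven by the uncorrupted input)."""
    return MACHINE[(state, symbol)]

def controlled_step(state, intended, received):
    """If the received (possibly faulted) symbol would break model
    matching, the controller feeds the machine a corrective input."""
    if MACHINE[(state, received)] != reference(state, intended):
        received = intended          # corrective action
    return MACHINE[(state, received)]

# a fault corrupts the second symbol; the loop still tracks the reference
state = ref_state = "a"
for intended, received in [("1", "1"), ("0", "1"), ("1", "1")]:
    state = controlled_step(state, intended, received)
    ref_state = reference(ref_state, intended)
```

In the paper the controller cannot simply be handed the intended symbol; the existence condition it derives characterises when state feedback and reachability alone suffice to achieve the same matching.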
Procedia PDF Downloads 424
8545 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow
Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather
Abstract:
The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows with relevance to processes employed in a wide range of applications, such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid-phase behavior in turbulent flow on particle dispersion in a horizontal channel is investigated. The mathematical modeling technique used is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with the flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM), which is used for the prediction of particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three different channel flows with shear Reynolds numbers Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research work has focused on predicting the conditions that favor particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of flow turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase, fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example, air conditioning and refrigeration units, heat exchangers, and oil and gas suction and pressure lines.
The particle sizes, densities, surface energies and volume fractions selected are 45.6, 102 and 150 µm; 250, 1000 and 2159 kg m-3; 50, 500 and 5000 mJ m-2; and 7.84 × 10-6, 2.8 × 10-5 and 1 × 10-4, respectively; such particle properties are associated with particles found in soil, as well as with the metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It has been found that the turbulence structure of the flow dominates the motion of the particles and creates particle-particle interactions, with most of these interactions taking place close to the channel walls and in regions of high turbulence, where agglomeration is aided both by the high levels of turbulence and by the high concentration of particles. A positive relationship between agglomeration and particle surface energy, concentration, size and density was observed. Moreover, the results derived for the three Reynolds numbers considered show that, for high surface energy particles, the rate of agglomeration is strongly influenced by, and increases with, the intensity of the flow turbulence. In contrast, for lower surface energy particles, the rate of agglomeration diminishes with an increase in flow turbulence intensity. Keywords: agglomeration, channel flow, DEM, LES, turbulence
Procedia PDF Downloads 317