Search results for: time series data mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38110

32350 Thermal Effects on Wellbore Stability and Fluid Loss in High-Temperature Geothermal Drilling

Authors: Mubarek Alpkiray, Tan Nguyen, Arild Saasen

Abstract:

Geothermal drilling operations involve numerous challenges that increase well cost and nonproductive time. Fluid loss is one of the most troublesome problems and can lead to well abandonment in geothermal drilling. Lost circulation can occur due to natural fractures, high mud weight, and extremely high formation temperatures. This challenge may cause wellbore stability problems and lead to expensive drilling operations. Wellbore stability is the main domain that should be considered to mitigate or prevent fluid loss into the formation. This paper describes the causes of fluid loss in the Pamukoren geothermal field in Turkey. An integrated geomechanics approach is applied to help understand the fluid loss problems. In geothermal drilling, geomechanics is primarily based on rock properties, in-situ stress characterization, rock temperature, determination of stresses around the wellbore, and rock failure criteria. Since a high temperature difference between the wellbore wall and the drilling fluid is present, the temperature distribution along the wellbore is estimated and incorporated into the wellbore stability analysis. This study reviewed geothermal drilling data to analyze the temperature estimation along the wellbore, the causes of fluid loss, and the stored electric capacity of the reservoir. Our observations demonstrate the significant role of the geomechanical approach in understanding safe drilling operations in high-temperature wells. Fluid loss is encountered due to thermal stress effects around the borehole. This paper provides a wellbore stability analysis for a geothermal drilling operation and discusses the causes of lost circulation resulting in nonproductive time and cost.
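The temperature estimate along the wellbore can be illustrated with a minimal sketch assuming a simple linear geothermal gradient; the gradient, surface temperature, and fluid temperature below are illustrative values, not data from the paper:

```python
def wellbore_temperature(depth_m, surface_temp_c=20.0, gradient_c_per_km=150.0):
    """Undisturbed formation temperature at a given depth, assuming a
    linear geothermal gradient (illustrative values, not field data)."""
    return surface_temp_c + gradient_c_per_km * depth_m / 1000.0

# The temperature difference between the formation and the drilling fluid is
# what drives the thermal stresses around the borehole discussed above.
formation_t = wellbore_temperature(2000.0)   # deg C at 2 km depth
fluid_t = 60.0                               # assumed drilling fluid temperature
delta_t = formation_t - fluid_t
```

In a real analysis the gradient would come from logged temperature data rather than a constant.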

Keywords: geothermal wells, drilling, wellbore stresses, drilling fluid loss, thermal stress

Procedia PDF Downloads 175
32349 Guided Energy Theory of a Particle: Answered Questions Arise from Quantum Foundation

Authors: Desmond Agbolade Ademola

Abstract:

This work aimed to introduce a theory, called the Guided Energy Theory of a particle, that answers questions arising from quantum foundations, quantum mechanics theory, and its interpretation, such as: What is the nature of the wavefunction? Is the mathematical formalism of the wavefunction correct? Does the wavefunction collapse during measurement? Do quantum physical entanglement and many-worlds interpretations really exist? In addition, is there uncertainty in the physical reality of our nature, as concluded in quantum theory? We show by the fundamental analysis presented in this work that the way quantum mechanics theory and its interpretation describe nature is not correlated with physical reality, because we discovered, among other things, that: (1) The Guided Energy Theory of a particle fundamentally provides a complete, physically observable series of quantized measurements of a particle's momentum, force, energy, etc., over a given distance and time. In contrast, the quantum mechanics wavefunction describes nature as having inherently probabilistic and indeterministic physical quantities, resulting in unobservable physical quantities that lead to the many-worlds interpretation. (2) The Guided Energy Theory of a particle fundamentally predicts that it is mathematically possible to determine precise quantized measurements of the position and momentum of a particle simultaneously, because there is no uncertainty in nature; nature naturally guards itself against uncertainty. This is contrary to the conclusion in quantum mechanics theory that it is mathematically impossible to determine the position and the momentum of a particle simultaneously. Furthermore, we show by this theory that it is mathematically possible to determine quantized measurements of the force acting on a particle simultaneously, which is not possible on the premise of quantum mechanics theory.
(3) Our theory shows that guided energy does not collapse; it only describes the lopsided nature of a particle's behavior in motion. This offers insight into the gradual process of engagement (convergence) and disengagement (divergence) of guided energy holders, which further illustrates how wave-like behavior returns to particle-like behavior and vice versa. This further proves that a particle's behavior in motion is oscillatory in nature. The mathematical formalism of the Guided Energy Theory shows that nature is certain, whereas the mathematical formalism of quantum mechanics theory shows that nature is absolutely probabilistic. In addition, the nature of the wavefunction is the guided energy of the wave. In conclusion, the fundamental mathematical formalism of quantum mechanics theory is wrong.

Keywords: momentum, physical entanglement, wavefunction, uncertainty

Procedia PDF Downloads 277
32348 Interrogation of the Role of First Year Student Experiences in Student Success at a University of Technology in South Africa

Authors: Livingstone Makondo

Abstract:

This ongoing research explores what the components of a comprehensive First-Year Student Experience (FYSE) at the Durban University of Technology (DUT) could be, and the preferred implementation modalities. In light of the Siyaphumelela project, this interrogation is premised on the need to glean institutional data that could be used to ascertain the role of FYSE in enhancing student success. The research proceeds by examining prevalent models from other South African universities and beyond in its quest to arrive at a pragmatic, comprehensive FYSE programme for DUT. As DUT is a student-centered institution, and amidst an ever-shrinking economy, this research would aid higher education practitioners in ascertaining whether hard-earned finances are being channelled to a worthy academic venture. This research seeks inputs from: a) students who participated in FYSE and are now in their second and third years at DUT; b) students who are currently participating in FYSE; c) former and present tutors; d) departmental coordinators; e) academics and support staff working with the participating students. This exploratory approach is preferred because, since 2010, DUT has grappled with how to implement an integrated, institution-wide FYSE. The findings of this research could provide the much-needed data to ascertain whether the current FYSE package is pivotal to the attainment of DUT Strategic Focus Area 1: Building sustainable student communities of living and learning. The ideal is for the DUT FYSE programme to become an institution-wide programme that lays the foundation for consolidated and focused student development programmes for subsequent undergraduate and postgraduate levels of study. Also, armed with data from this research, DUT could develop the capacity and systems to ensure that all students get diverse, on-time support to enhance their retention and academic success in their tertiary studies.
In essence, the preferred FYSE curriculum, woven around DUT graduate attributes, should contribute to a reduction in dropout rates among first-year students and, subsequently, in undergraduate studies. Therefore, this ongoing research will feed into the Siyaphumelela project and help position the 2018-2020 FYSE initiatives at DUT.

Keywords: challenges, comprehensive, dropout, transition

Procedia PDF Downloads 147
32347 Physical Verification Flow on Multiple Foundries

Authors: Rohaya Abdul Wahab, Raja Mohd Fuad Tengku Aziz, Nazaliza Othman, Sharifah Saleh, Nabihah Razali, Muhammad Al Baqir Zinal Abidin, Md Hanif Md Nasir

Abstract:

This paper discusses how we optimized the physical verification flow in our IC Design Department, which handles various rule decks from multiple foundries. Our ultimate goal is to achieve faster time to tape-out and avoid schedule delays. Currently, physical verification runtimes and memory usage have increased drastically with the growing number of design rules, design complexity, and the size of the chips to be verified. To manage design violations, we use a number of solutions to reduce the number of violations that physical verification engineers need to check. The most important functions in physical verification are DRC (design rule check), LVS (layout vs. schematic), and XRC (extraction). Since we tape out designs to multiple foundries, we need a flow that improves the overall turnaround time and ease of use of the physical verification process. The demand for fast turnaround time is even more critical since physical design is the last stage before sending the layout to the foundries.
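As a sketch of what a multi-foundry flow can look like, the fragment below assembles DRC/LVS/XRC jobs against per-foundry rule decks. The foundry names, deck paths, and job structure are hypothetical illustrations, not the department's actual runsets:

```python
# Hypothetical mapping from foundry to its rule decks (paths are made up).
RULE_DECKS = {
    "foundry_a": {"DRC": "decks/a/drc.rul", "LVS": "decks/a/lvs.rul", "XRC": "decks/a/xrc.rul"},
    "foundry_b": {"DRC": "decks/b/drc.rul", "LVS": "decks/b/lvs.rul", "XRC": "decks/b/xrc.rul"},
}

def build_jobs(foundry, layout, checks=("DRC", "LVS", "XRC")):
    """Return one verification job per requested check for the target foundry,
    so the same layout can be dispatched against any foundry's decks."""
    decks = RULE_DECKS[foundry]
    return [{"check": c, "deck": decks[c], "layout": layout} for c in checks]

jobs = build_jobs("foundry_a", "chip_top.gds")
```

Centralizing the deck selection this way is one way to keep a single flow usable across foundries.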

Keywords: physical verification, DRC, LVS, XRC, flow, foundry, runset

Procedia PDF Downloads 641
32346 Aerodynamic Optimization of Oblique Biplane by Using Supercritical Airfoil

Authors: Asma Abdullah, Awais Khan, Reem Al-Ghumlasi, Pritam Kumari, Yasir Nawaz

Abstract:

Introduction: This study revisits the potential applications of two oblique wing configurations initiated by German aerodynamicists during WWII. Because of the end of the war the project was never completed, and this research targets the revival of the German oblique biplane configuration. The research draws upon the use of two oblique wings mounted on the top and bottom of the fuselage through a single pivot. The wings are capable of sweeping at different angles, ranging from 0° at takeoff to 60° at cruising altitude. The right half of the top wing behaves like a forward-swept wing, and the left half behaves like a backward-swept wing; the reverse applies to the lower wing. This opposite deflection of the top and lower wings cancels out the rotary moment created by each wing, and the aircraft remains stable. Problem to better understand or solve: The purpose of this research is to investigate the potential of achieving improved aerodynamic performance and flight efficiency over a wide range of sweep angles. This will help determine the most suitable value of the sweep angle at which the aircraft will possess both stability and better aerodynamics. Explaining the methods used: The aircraft configuration is designed using SolidWorks, after which a series of aerodynamic predictions are conducted in both the subsonic and the supersonic flow regimes. Computations are carried out in Ansys Fluent. The results are then compared with theoretical and flight data for supersonic aircraft of the same category (the AD-1) and with a wind tunnel test model at subsonic speed. Results: At zero sweep angle, the aircraft has an excellent lift coefficient, almost double that found for fighter jets. To attain supersonic speed, the sweep angle is increased up to a maximum of 60 degrees, depending on the mission profile.
General findings: The oblique biplane could be a future fighter aircraft because of its high performance in terms of aerodynamics, cost, structural design, and weight.
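The lift coefficient comparison above follows from the standard lift equation; a minimal sketch (the flight-condition numbers are illustrative, not the study's data):

```python
def lift_coefficient(lift_n, rho, v, area):
    """C_L = 2 L / (rho * V^2 * S), inverted from the lift equation
    L = 0.5 * rho * V^2 * S * C_L."""
    return 2.0 * lift_n / (rho * v ** 2 * area)

# Illustrative case: sea-level air density, 100 m/s, 30 m^2 reference area
cl = lift_coefficient(lift_n=183750.0, rho=1.225, v=100.0, area=30.0)
```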

Keywords: biplane, oblique wing, sweep angle, supercritical airfoil

Procedia PDF Downloads 261
32345 Women Empowerment in Cassava Production: A Case Study of Southwest Nigeria

Authors: Adepoju A. A., Olapade-Ogunwole F., Ganiyu M. O.

Abstract:

This study examined women's empowerment in cassava production in southwest Nigeria. It assessed the contributions of five domains, namely decisions about agricultural production, decision-making power over productive resources, control over the use of income, leadership, and time allocation, to women's disempowerment; profiled the women based on their socio-economic characteristics; and determined the factors influencing women's disempowerment. Primary data were collected from women farmers and processors through structured questionnaires. Purposive sampling was used to select the LGAs and villages based on their large numbers of cassava farmers and processors, while cluster sampling was used to select 360 respondents in the study area. Descriptive statistics such as bar charts and percentages, the Women's Empowerment in Agriculture Index (WEAI), and a logit regression model were used to analyze the data collected. The results revealed that 63.88% of the women were disempowered. Lack of decision-making power over productive resources (36.47%) and lack of leadership skills (33.26%) contributed most to the disempowerment of the women. About 85% of the married women were disempowered, while 76.92% of the women who participated in social group activities were more empowered than their disempowered counterparts. The findings showed that women with more years of processing experience were more likely to be disempowered, while those who engage in farming as a primary livelihood activity and participate in social groups, among others, tended to be empowered. In view of this, it is recommended that women be encouraged to farm and to participate in social group activities.
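A logit model of the kind used here maps covariates to a probability through the logistic function; a minimal sketch in which the coefficient values are hypothetical placeholders, not the study's estimates:

```python
import math

def empowerment_probability(features, weights, intercept):
    """Logit model: P(empowered) = 1 / (1 + exp(-(b0 + b.x))).
    Weights and intercept below are illustrative, not fitted values."""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# e.g. farming as primary livelihood = 1, social-group participation = 1
p = empowerment_probability([1, 1], weights=[0.8, 1.2], intercept=-1.0)
```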

Keywords: cassava, production, empowerment, southwest, Nigeria

Procedia PDF Downloads 42
32344 A Bi-Objective Model to Optimize the Total Time and Idle Probability for Facility Location Problem Behaving as M/M/1/K Queues

Authors: Amirhossein Chambari

Abstract:

This article proposes a bi-objective model for the facility location problem subject to congestion (overcrowding), motivated by applications such as locating servers for internet mirror sites, communication networks, and other single-server systems. The model considers situations in which immobile (fixed) service facilities are congested by stochastic demand and behave as M/M/1/K queues. We consider two simultaneous perspectives on this problem: (1) customers, who wish to limit their travel and waiting times, and (2) the service provider, who wishes to limit the average facility idle time. A bi-objective model is set up for the facility location problem with two objective functions: (1) minimizing the sum of expected total traveling and waiting time (customers) and (2) minimizing the average facility idle-time percentage (service provider). The proposed model belongs to the class of mixed-integer nonlinear programming models and to the class of NP-hard problems. To solve the model, a controlled elitist non-dominated sorting genetic algorithm (controlled NSGA-II) and a controlled elitist non-dominated ranking genetic algorithm (NRGA-I) are proposed. Furthermore, the two proposed metaheuristic algorithms are evaluated using standard multi-objective metrics. Finally, the results are analyzed and some conclusions are drawn.
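The two objectives rest on standard M/M/1/K performance measures: the idle probability P0 and the mean time in system via Little's law. A minimal sketch (parameter values in the test are illustrative):

```python
def mm1k_metrics(lam, mu, K):
    """Idle probability P0 and mean time in system W for an M/M/1/K queue
    with arrival rate lam, service rate mu, and capacity K."""
    rho = lam / mu
    if rho == 1.0:
        p = [1.0 / (K + 1)] * (K + 1)          # uniform when rho = 1
    else:
        p0 = (1.0 - rho) / (1.0 - rho ** (K + 1))
        p = [p0 * rho ** n for n in range(K + 1)]
    L = sum(n * pn for n, pn in enumerate(p))   # mean number in system
    lam_eff = lam * (1.0 - p[K])                # arrivals that are not blocked
    W = L / lam_eff                             # Little's law: W = L / lam_eff
    return p[0], W
```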

Keywords: bi-objective, facility location, queueing, controlled NSGA-II, NRGA-I

Procedia PDF Downloads 563
32343 Prediction of Anticancer Potential of Curcumin Nanoparticles by Means of Quasi-Qsar Analysis Using Monte Carlo Method

Authors: Ruchika Goyal, Ashwani Kumar, Sandeep Jain

Abstract:

The experimental data for the anticancer potential of curcumin nanoparticles were compiled from eclectic sources. The optimal descriptors were examined using the Monte Carlo method as implemented in the CORAL SEA software. The statistical quality of the model is as follows: n = 14, R² = 0.6809, Q² = 0.5943, s = 0.175, MAE = 0.114, F = 26 (sub-training set); n = 5, R² = 0.9529, Q² = 0.7982, s = 0.086, MAE = 0.068, F = 61, av. R²m = 0.7601, ∆R²m = 0.0840, k = 0.9856 and kk = 1.0146 (test set); and n = 5, R² = 0.6075 (validation set). These data can be used to build predictive QSAR models for anticancer activity.
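The R² and MAE statistics reported above can be computed for any prediction set with a few lines; a generic sketch (the toy numbers in the test are not the paper's data):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ybar = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - ybar) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```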

Keywords: anticancer potential, curcumin, model, nanoparticles, optimal descriptors, QSAR

Procedia PDF Downloads 305
32342 Investigation of Building Pounding during Earthquake and Calculation of Impact Force between Two Adjacent Structures

Authors: H. Naderpour, R. C. Barros, S. M. Khatami

Abstract:

Seismic excitation naturally causes large horizontal relative displacements, which can produce collisions between two adjacent buildings when the separation distance is insufficient; severe damage occurs due to impact, especially in tall buildings. In this paper, an impact is numerically simulated and two required parameters are calculated: the impact force and the energy absorption. To calculate these parameters, the mathematical study models a virtual link element, assumed to consist of a spring and a dashpot, to determine the lateral displacement and the damping ratio of the impact. To determine the dynamic response of the impact, a new equation of motion is theoretically suggested to evaluate the impact force and energy dissipation. To confirm the derived equation, a series of parametric studies are performed and the accuracy of the formula is confirmed.
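The spring-and-dashpot link element described above corresponds to a Kelvin-Voigt contact model; a minimal sketch in which the stiffness and damping values are illustrative assumptions, not the paper's calibrated parameters:

```python
def impact_force(k, c, delta, delta_dot):
    """Kelvin-Voigt link element: spring (k) and dashpot (c) in parallel.
    Force is produced only while the gap between buildings is closed
    (penetration delta > 0); otherwise the link is inactive."""
    if delta <= 0.0:
        return 0.0
    return k * delta + c * delta_dot

# Illustrative values: 1 MN/m spring, 1 kN.s/m dashpot, 10 mm penetration
f = impact_force(k=1.0e6, c=1.0e3, delta=0.01, delta_dot=0.5)
```

The dashpot term is what dissipates energy over a contact cycle, which is how such models represent the energy absorbed during pounding.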

Keywords: pounding, impact, dissipated energy, coefficient of restitution

Procedia PDF Downloads 343
32341 Low Cost Real Time Robust Identification of Impulsive Signals

Authors: R. Biondi, G. Dys, G. Ferone, T. Renard, M. Zysman

Abstract:

This paper describes an automated, implementable system for impulsive signal detection and recognition. The system uses a digital signal processing device for the detection and identification process, analysing the signals in real time in order to produce a particular response if needed. Detection is achieved by normalizing the inputs and comparing the read signals to a dynamic threshold, thus avoiding detections caused by loud or fluctuating environmental noise. Identification is done through neural network algorithms. During setup, our system can receive signals to “learn” certain patterns. Through “learning”, the system can recognize signals faster, adding flexibility to new patterns similar to those it already knows. Sound is captured through a simple jack input, which could be replaced by an enhanced recording device such as a wide-area recorder. Furthermore, a communication module can be added to the apparatus to send alerts to another interface if needed.
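The dynamic-threshold detection step can be sketched in a few lines: a running noise estimate tracks the background, and samples far above it are flagged. The smoothing factor and threshold multiple are illustrative choices, not the authors' settings:

```python
def detect_impulses(samples, factor=4.0, alpha=0.99):
    """Flag sample indices whose magnitude exceeds `factor` times a running
    background-noise estimate. The estimate is only updated on non-impulsive
    samples, so it tracks slow changes in environmental noise."""
    noise = abs(samples[0])
    hits = []
    for i, s in enumerate(samples):
        if abs(s) > factor * max(noise, 1e-12):
            hits.append(i)
        else:
            noise = alpha * noise + (1.0 - alpha) * abs(s)
    return hits

signal = [0.1] * 50 + [2.0] + [0.1] * 50   # one impulse over a quiet background
events = detect_impulses(signal)
```

In the full system the flagged segments would then be passed to the neural network for identification.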

Keywords: sound detection, impulsive signal, background noise, neural network

Procedia PDF Downloads 304
32340 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered. This is because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical and chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is thus the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or can enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless.
For example, phase-change phenomena require the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each defined by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to get the process model. The most promising process options are then chosen subject to a performance criterion, for example the purity of the product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
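The combine-then-screen step above can be sketched with itertools; the phenomena list is a toy example, and the single screening rule encoded is the one quoted in the text (phase change requires energy transfer):

```python
from itertools import combinations

# Toy phenomena list; real lists come from decomposing the process functions.
PHENOMENA = ["mixing", "reaction", "vapour-liquid_equilibrium",
             "phase_change", "energy_transfer"]

def feasible(combo):
    """Screening rule from the text: phase change needs energy transfer."""
    return "phase_change" not in combo or "energy_transfer" in combo

# Generate every non-empty combination, then discard the meaningless ones.
options = [c for r in range(1, len(PHENOMENA) + 1)
           for c in combinations(PHENOMENA, r) if feasible(c)]
```

The surviving combinations would then be assigned to functions and assembled into the superstructure of process options.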

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 220
32339 Data and Spatial Analysis for Economy and Education of 28 E.U. Member-States for 2014

Authors: Alexiou Dimitra, Fragkaki Maria

Abstract:

The objective of this paper is the study of geographic, economic, and educational variables and their contribution to determining the position of each member state among the EU-28 countries, based on the values of seven variables as given by Eurostat. The data analysis methods of Multiple Factorial Correspondence Analysis (MFCA), Principal Component Analysis, and Factor Analysis have been used. The cross-tabulation tables of data consist of the values of the seven variables for the 28 countries for 2014. The data are manipulated using the CHIC Analysis V 1.1 software package. The results of this program, using MFCA and Ascending Hierarchical Classification, are given in arithmetic and graphical form. For comparison, the Factor procedure of the IBM SPSS 20 statistical package has been applied to the same data. The numerical and graphical results, presented with tables and graphs, demonstrate the agreement between the two methods. The most important result is the study of the relation between the 28 countries and the position of each country in groups or clouds, which are formed according to the values of the corresponding variables.
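A minimal Principal Component Analysis on standardized country-level indicators can be sketched with NumPy (rather than the CHIC or SPSS packages named above); the toy data in the test are not the Eurostat values:

```python
import numpy as np

def principal_components(X):
    """PCA on standardized columns: returns the explained-variance ratios
    (sorted descending) and the component loadings (columns of the second
    return value)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]          # eigh returns ascending order
    vals, vecs = vals[order], vecs[:, order]
    return vals / vals.sum(), vecs
```

Countries can then be plotted on the first two components to visualize the groups or "clouds" described in the abstract.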

Keywords: Multiple Factorial Correspondence Analysis, Principal Component Analysis, Factor Analysis, E.U.-28 countries, Statistical package IBM SPSS 20, CHIC Analysis V 1.1 Software, Eurostat.eu Statistics

Procedia PDF Downloads 498
32338 Developing Open-Air Museum: The Heritage Conservation Effort, Oriented to Geotourism Concept and Education

Authors: Rinaldi Ikhram, R. A. Julia Satriani

Abstract:

Historical objects discovered in Indonesia, especially in the area around Bandung and the Priangan zone in general, were inventoried and recorded by Dutch geologists during the colonial period. Among the artefacts are axes made of chalcedony and quartzite; arrowheads, knives, shrivel, and drill bits, all made from obsidian; grindstones; and even bracelets made of stone. A ceramic mold for smelting bronze or iron was also found. The abundance of artefacts inspired Dr. W. Docters van Leeuwen and his colleagues to initiate the establishment of the Sunda Open-air Museum ("Soenda Openlucht Museum") in 1917, located in the hills of the North Bandung area, the site of pre-historic settlements in need of conservation. Unfortunately, this plan was not implemented because, shortly after, World War II occurred. Heritage conservation is one of our responsibilities as geologists today, and the open-air museum may be one solution for conserving historic sites around the world. In this paper, the study of the development of an open-air museum will focus on the Dago area of North Bandung. The methods used are analysis of field survey data and analysis of the remaining artefacts stored at both the National Museum in Jakarta and the Bandung Museum of Geology. The museum is based on geotourism and further research on pre-historic culture, while its purpose is to give people a common interest and to motivate them to participate in the research and conservation of pre-historic relics. This paper will describe in more detail the concept, form, and management of the geopark and the open-air museum within it.

Keywords: geoparks, heritage conservation, open-air museum, sustainable tourism

Procedia PDF Downloads 329
32337 Investigating the Role of Combined Length Scale Effect on the Mechanical Properties of Ni/Cu Multilayer Structures

Authors: Naresh Radaliyagoda, Nigel M. Jennett, Rong Lan, David Parfitt

Abstract:

A series of length-scale-engineered multilayer materials with temperature-robust mechanical properties is suggested. A range of polycrystalline copper sub-layers, with thicknesses varying from 1 to 25 μm and buried between two nickel layers, was produced using a dual-bath electrodeposition technique. The structure of the multilayers was characterized using Electron Backscatter Diffraction and Scanning Electron Microscopy. The interface effect on the hardness and elastic modulus was tested using nano-indentation. The results of the grain size and layer thickness measurements and of the indentation hardness have been compared. It is found that there is a combined length scale effect that improves the mechanical properties in Ni/Cu multilayer structures.
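Size effects of this kind are often summarized by a Hall-Petch-type scaling, where hardness rises as the controlling length scale (grain size or layer thickness) shrinks; a sketch with illustrative constants that are not the paper's fitted values:

```python
def hall_petch_hardness(h0, k, length_scale_um):
    """Hall-Petch-type scaling H = H0 + k / sqrt(d), where d is the
    controlling length scale (grain size or layer thickness).
    Constants h0 and k are illustrative placeholders."""
    return h0 + k / length_scale_um ** 0.5

h_thick = hall_petch_hardness(1.0, 2.0, 25.0)   # 25 um Cu sub-layer
h_thin = hall_petch_hardness(1.0, 2.0, 1.0)     # 1 um Cu sub-layer
```

A combined length scale effect would show up as deviations from a single scaling of this form when both grain size and layer thickness vary.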

Keywords: nano-indentation, size effect, multilayers, electrodeposition

Procedia PDF Downloads 139
32336 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator

Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya

Abstract:

Bhabha Atomic Research Centre, Trombay, is developing a low energy high intensity proton accelerator (LEHIPA) as a pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as its major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides a longitudinal E-field for the acceleration of charged particles. The RF design of the drift tubes of the DTL was carried out to maximize the shunt impedance; this demands that the diameter of the drift tubes (DTs) be as low as possible. The width of a DT is, however, determined by the particle β and a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets due to the relatively high focusing strength of the former over the latter. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles (PMQs) rather than electromagnetic quadrupoles (EMQs). This provides another advantage, as Joule heating is avoided, which would otherwise have added thermal load in a continuous-cycle accelerator. The beam dynamics require the uniformity of the integral magnetic gradient to be better than ±0.5% of the nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare earth permanent magnets, and discusses the results of the fabrication and qualification of five pre-series prototype permanent magnet quadrupoles and of a full-scale DT developed with embedded PMQs.
The paper discusses the magnetic pole design for optimizing the uniformity of the integral Gdl and the values of the higher-order multipoles. A novel but simple method of tuning the integral Gdl is discussed.
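The ±0.5% integral-gradient specification can be checked from gradient samples taken along the magnet axis; a minimal sketch with made-up measurements (slice length and sample values are illustrative):

```python
def integral_gdl(gradients, dl_m):
    """Integral of the field gradient over the magnet length, approximated
    as a sum over equal slices of length dl_m."""
    return sum(g * dl_m for g in gradients)

def deviation_pct(measured, nominal):
    """Deviation of a measured integral Gdl from the nominal value, in percent."""
    return 100.0 * (measured - nominal) / nominal

# Ten equal slices of a 0.1 m long quadrupole at the nominal gradient
nominal = integral_gdl([2.05] * 10, 0.01)
# A magnet measuring 0.4% high would still meet the +/-0.5% requirement
within_spec = abs(deviation_pct(1.004 * nominal, nominal)) <= 0.5
```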

Keywords: DTL, focusing, PMQ, proton, rare earth magnets

Procedia PDF Downloads 457
32335 Frictional Behavior of Glass Epoxy and Aluminium Particulate Glass Epoxy Composites Sliding against Smooth Stainless Steel Counterface

Authors: Pujan Sarkar

Abstract:

The frictional behavior of glass-epoxy and Al-particulate glass-epoxy composites sliding against mild steel is investigated experimentally under normal atmospheric conditions. Glass epoxy (0 wt% Al) and 5, 10, and 15 wt% Al-particulate-filled glass-epoxy composites are fabricated by the conventional hand lay-up technique followed by a light compression moulding process. A pin-on-disc type friction apparatus is used under dry sliding conditions. Experiments are carried out at normal loads of 5-50 N and sliding speeds of 0.5-5.0 m/s for a fixed duration. The variation of the friction coefficient with sliding time at different loads and speeds is considered for all the samples. The results show that the friction coefficient is influenced by sliding time, normal load, sliding speed, and the wt% of Al content. In general, with respect to time, the friction coefficient initially increases with considerable fluctuation for a certain duration, after which it becomes stable for the rest of the experiment. With increasing normal load, the friction coefficient decreases at all speed levels and for all samples, whereas the friction coefficient increases with increasing sliding speed at all normal loads for glass epoxy and the 5 wt% Al glass-epoxy composite. For the 10 and 15 wt% Al composites, however, the reverse trend of the friction coefficient is recorded at all loads. Under the different tribological conditions, the suitability of the composites with respect to wt% of Al content is noted, and the 5 wt% Al glass-epoxy composite shows the lowest friction at all loads compared to the other samples.
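Extracting a steady-state friction coefficient from a pin-on-disc trace, after discarding the initial fluctuating (running-in) period described above, can be sketched as follows; the trace values and settling fraction are illustrative:

```python
def friction_coefficient(tangential_force_n, normal_load_n):
    """mu = F_t / N from the pin-on-disc load readings."""
    return tangential_force_n / normal_load_n

def steady_state_mu(mu_trace, settle_fraction=0.5):
    """Average the friction trace only after the initial fluctuating
    (running-in) period, here assumed to be the first half of the run."""
    start = int(len(mu_trace) * settle_fraction)
    tail = mu_trace[start:]
    return sum(tail) / len(tail)

# Illustrative trace: running-in fluctuation, then a stable plateau
trace = [0.20, 0.50, 0.40, 0.35, 0.30, 0.30, 0.30, 0.30]
mu_ss = steady_state_mu(trace)
```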

Keywords: Al powder, composite, epoxy, friction, glass fiber

Procedia PDF Downloads 113
32334 Observer-Based Control Design for Double Integrators Systems with Long Sampling Periods and Actuator Uncertainty

Authors: Tomas Menard

Abstract:

The design of control laws for engineering systems has been investigated for many decades. While many results concern continuous systems with continuous output, nowadays many controlled systems have to transmit their output measurements through a network, making the output discrete-time. It is well known that sampling the output of a system whose control law is based on the continuous output may render the system unstable, especially when the sampling period is long compared to the system dynamics. The control design then has to be adapted to cope with this issue. In this paper, we consider systems that can be modeled as double integrators with uncertainty on the input, since many mechanical systems can be put in such a form. We present a control scheme based on an observer that uses only discrete-time measurements yet provides a continuous-time estimate of the state, combined with a continuous control law, which stabilizes a system with second-order dynamics even in the presence of uncertainty. It is further shown that arbitrarily long sampling periods can be dealt with by properly setting the control scheme parameters.
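The core idea, predicting continuously with the model between samples and correcting only when a measurement arrives, can be sketched for an unforced double integrator. The gains, sampling period, and initial conditions below are illustrative, and the input uncertainty treated in the paper is omitted:

```python
def simulate_observer(T_sample=0.5, dt=0.001, t_end=10.0):
    """Continuous-discrete observer for a double integrator (x1' = x2,
    x2' = u, y = x1 sampled every T_sample): predict with the model between
    samples, correct with the measurement at each sampling instant.
    Gains l1, l2 are illustrative choices, not the paper's design."""
    l1, l2 = 1.0, 0.5
    x1, x2 = 1.0, -0.5          # true state
    z1, z2 = 0.0, 0.0           # observer estimate
    u = 0.0                     # no input in this sketch
    t, next_sample = 0.0, 0.0
    while t < t_end:
        if t >= next_sample:    # a sampled measurement y = x1 arrives
            e = x1 - z1
            z1 += l1 * e
            z2 += l2 * e
            next_sample += T_sample
        # Euler integration of the true system and of the observer prediction
        x1, x2 = x1 + dt * x2, x2 + dt * u
        z1, z2 = z1 + dt * z2, z2 + dt * u
        t += dt
    return abs(x1 - z1) + abs(x2 - z2)   # remaining estimation error

err = simulate_observer()
```

With these gains the estimation error contracts geometrically between samples, which is the behavior the paper's scheme is designed to guarantee for arbitrary sampling periods.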

Keywords: dynamical system, control law design, sampled output, observer design

Procedia PDF Downloads 171
32333 Evaluating the Performance of 28 EU Member Countries on Health2020: A Data Envelopment Analysis Evaluation of the Successful Implementation of Policies

Authors: Elias K. Maragos, Petros E. Maravelakis, Apostolos I. Linardis

Abstract:

Health2020 is a promising framework of policies provided by the World Health Organization (WHO), aiming to diminish health and well-being inequalities among citizens of European Union (EU) countries. Major demographic, social, and environmental changes, in addition to the recent economic crisis, prevent the unobstructed and successful implementation of this framework. Unemployment rates and the percentage of people at risk of poverty have increased among the citizens of EU countries. At the same time, the adopted fiscal and economic policies do not help governments serve their social role and mitigate social and health inequalities. Under these circumstances, there is strong pressure to organize all health system resources efficiently and wisely. To provide a unified and value-based framework of valuation, we propose an evaluation framework using data envelopment analysis (DEA) and dynamic DEA. We believe that the adopted methodology provides a robust tool that can capture the degree to which policies have been implemented successfully and can determine which countries developed the required policies efficiently and which have lagged behind. Using the proposed methodology, we evaluated the performance of the 28 EU member countries against the Health2020 peripheral targets. We adopted several versions of the evaluation, measuring the effectiveness and efficiency of EU countries from 2011 to 2016. Our results show stability in technological change and reveal a group of countries that served as benchmarks for the inefficient countries in most years.
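For readers unfamiliar with DEA, the basic building block is one small linear program per country (decision-making unit). A minimal input-oriented CCR sketch follows; it is a generic illustration with toy numbers, not the dynamic DEA or Malmquist machinery of the paper, and it assumes `scipy` is available.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR DEA scores under constant returns to scale.
    X: (m_inputs, n_units), Y: (s_outputs, n_units). Returns n scores in (0, 1]."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):                       # one LP per decision-making unit
        c = np.r_[1.0, np.zeros(n)]          # minimise theta, the input contraction
        A_in = np.c_[-X[:, [o]], X]          # sum_j lam_j x_j <= theta * x_o
        A_out = np.c_[np.zeros((s, 1)), -Y]  # sum_j lam_j y_j >= y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, o]],
                      bounds=[(None, None)] + [(0.0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)
```

A unit with score 1.0 lies on the efficient frontier; a score of 0.5 means the same outputs could, in principle, be produced with half the inputs.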

Keywords: DEA, Health2020, health inequalities, Malmquist index, policies evaluation, well-being

Procedia PDF Downloads 132
32332 Designing Creative Events with Deconstructivism Approach

Authors: Maryam Memarian, Mahmood Naghizadeh

Abstract:

Deconstruction is an approach entirely at odds with traditional prevalent architecture. It attempts to place architecture in sharp contrast with its opposite, attends to the neglected and missing aspects of architecture, and dismantles its stable structures. It also proceeds boldly beyond existing frameworks, intending to create a different and more efficient prospect for space. Deconstructivist architecture aims to satisfy both prospective and retrospective visions and to take into account all present tastes in order to transcend time. Likewise, it ventures to fragment the facts and symbols of the past and to extract from them new concepts that coincide with today's circumstances. Since this approach attempts to surpass the limits of prevalent architecture, it can be employed to design places in which creative events occur and imagination and ambition flourish. Thought-provoking artistic events can grow and mature in such places and be represented to the public in the best way possible. The concept of event proposed in this project grows out of the interaction between space and creation. In addition to triggering surprise and strong impressions, it is a bold journey into the suspended realms of architecture's traditional conflicts: architecture-landscape, interior-exterior, center-margin, product-process, and stability-instability. In this project, recognition and organization first take place through the interpretive-historical research method, examining the inputs and collecting data; the obtained data are then evaluated using deductive reasoning and eventually interpreted.
Given that the research topic is in its infancy, with no similar case in Iran and a limited number of corresponding instances worldwide, the selected topic helps shed light on unrevealed and neglected aspects of architecture. Similarly, criticizing, investigating, and comparing highly prized cases in other countries with the project under study can serve as an introduction to this architectural style.

Keywords: anti-architecture, creativity, deconstruction, event

Procedia PDF Downloads 308
32331 Designing Presentational Writing Assessments for the Advanced Placement World Language and Culture Exams

Authors: Mette Pedersen

Abstract:

This paper outlines the criteria that assessment specialists use when they design the 'Persuasive Essay' task for the four Advanced Placement World Language and Culture Exams (AP French, German, Italian, and Spanish). The 'Persuasive Essay' is a free-response, source-based, standardized measure of presentational writing. Each 'Persuasive Essay' item consists of three sources (an article, a chart, and an audio recording) and a prompt, which is a statement of the topic phrased as an interrogative sentence. Owing to its rich source materials and the amount of time test takers are given to prepare and write their responses (a total of 55 minutes), the 'Persuasive Essay' is the free-response task on the AP World Language and Culture Exams that goes to the greatest lengths to unleash test takers' proficiency potential. The author focuses on the work that goes into designing the 'Persuasive Essay' task, outlining best practices for the selection of topics and sources, the interplay that needs to be present among the sources, and the thinking behind the articulation of prompts. Using released 'Persuasive Essay' items from the AP World Language and Culture Exams and accompanying data on test taker performance, the author shows how different passages, and features of passages, have succeeded (and sometimes not succeeded) in eliciting writing proficiency among test takers over time. Data from approximately 215,000 test takers per year from 2014 to 2017 and approximately 35,000 test takers per year from 2012 to 2013 form the basis of this analysis. The conclusion of the study is that test taker performance improves significantly when the sources presented to test takers express directly opposing viewpoints. Test taker performance also improves when the interrogative prompt is phrased as a yes/no question.
Finally, an analysis of the linguistic difficulty and complexity of the printed sources reveals that test taker performance does not decrease when the complexity of the article increases. This text complexity analysis was performed with the help of the 'ETS TextEvaluator' tool and the 'Complexity Scale for Information Texts (Scale)', two tools which, in combination, provide a rubric and fully automated technology for evaluating nonfiction and informational texts in English translation.

Keywords: advanced placement world language and culture exams, designing presentational writing assessments, large-scale standardized assessments of written language proficiency, source-based language testing

Procedia PDF Downloads 123
32330 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing

Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel

Abstract:

There is strong evidence that the climate has changed significantly, affecting various sectors including public health. The recommended feasible solution is adopting development trajectories that combine both mitigation and adaptation measures for improving resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS), linked with remote sensing (RS) technology and meteorological information, can serve as tools to inform climate-change-resilient development and evaluation for the health sector. A methodological review was conducted whereby a number of studies were content-analyzed to find appropriate indicators and indices for household climate resilience and an approach for their integration. These indicators were critically reviewed, listed, filtered, and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by national stakeholders selected from meetings and conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed, and critically analyzed for possible climate change indicators. Other sources for indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of selected indicators was discussed to produce the methodological approach to be used in a comparative resilience analysis. It was found that the household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS), and Household Exposure Status (HES).
It was also found that DHS data alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remotely sensed Normalized Difference Vegetation Index (NDVI) imagery, and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS datasets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and temporal scales to enhance targeted interventions for climate-resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in comparative resilience analyses among selected regions.
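As a purely illustrative sketch of how three such sub-indices might be combined into a single household score, one common pattern is min-max normalisation followed by a weighted sum. The equal weights and the normalisation below are hypothetical choices, not taken from the study.

```python
import numpy as np

def minmax(v):
    """Rescale a vector of raw indicator values to the [0, 1] range."""
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def resilience_index(hc, hhs, hes, w=(1/3, 1/3, 1/3)):
    """Composite household climate-resilience score: higher adaptive/mitigation
    capacity (HC) raises resilience, while higher health sensitivity (HHS) and
    exposure (HES) lower it. Weights w are illustrative, not from the study."""
    hc, hhs, hes = minmax(hc), minmax(hhs), minmax(hes)
    return w[0] * hc + w[1] * (1 - hhs) + w[2] * (1 - hes)
```

A household with maximal capacity and minimal sensitivity and exposure scores 1; the reverse profile scores 0, giving a comparable scale across survey clusters.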

Keywords: climate change, resilience, remote sensing, demographic and health surveys

Procedia PDF Downloads 149
32329 Suitable Tuning Method Selection for PID Controller Used in Digital Excitation System of Brushless Synchronous Generator

Authors: Deepak M. Sajnekar, S. B. Deshpande, R. M. Mohril

Abstract:

At present, many rotary excitation control systems use analog automatic voltage regulators, which are now being replaced with digital automatic voltage regulators equipped with a PID controller; tuning this PID controller is a challenging task. Where a digital excitation control system is used, tuning of the PID controller is still carried out by the pole placement method. Tuning the PID controller of a static excitation control system is less challenging because it does not involve the exciter time constant. This paper discusses two methods of tuning the PID controller: the pole placement method and the pole-zero cancellation method. A GUI was prepared for both methods on the MATLAB platform. Using this GUI, the performance results and the time required for tuning are compared for both methods. The sensitivity of the methods to parameter variation, such as the loop gain 'K' and the exciter time constant 'te', is also presented.
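The pole-zero cancellation idea can be sketched for a first-order exciter model G(s) = K / (1 + te·s). This is a simplification for illustration; the paper's digital AVR loop is richer than a single first-order lag, and the desired closed-loop time constant below is an assumed design parameter.

```python
def pi_pole_zero_cancellation(K, te, tc):
    """PI tuning by pole-zero cancellation for G(s) = K / (1 + te*s).
    The controller zero is placed on the exciter pole (Ti = te), so the open
    loop reduces to Kp*K/(te*s) and the closed loop is first order with the
    chosen time constant tc."""
    Ti = te                 # integral time cancels the exciter pole
    Kp = te / (K * tc)      # open loop Kp*K/(te*s) -> closed loop 1/(tc*s + 1)
    return Kp, Ti
```

With the pole cancelled, the closed-loop response is a clean first-order step with time constant tc, which is why the method avoids the trial-and-error of placing poles directly.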

Keywords: digital excitation system, automatic voltage regulator, pole placement method, pole zero cancellation method

Procedia PDF Downloads 655
32328 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training

Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li

Abstract:

Remotely sensed imagery with both high spatial and high temporal resolution, which is hard to acquire with current land observation satellites, has been considered a key factor for monitoring environmental changes at both global and local scales. Building on limited high spatial-resolution observations, a line of studies called spatiotemporal fusion has been developed to generate high spatiotemporal-resolution images by employing auxiliary low spatial-resolution data with high-frequency observations. However, a majority of spatiotemporal fusion approaches suffer from restrictive assumptions, empirical but unstable parameters, low accuracy, or inefficient performance. Although spatiotemporal fusion via sparse representation theory has advantages in capturing reflectance changes, stability, and execution efficiency (even more so when overcomplete dictionaries have been pre-trained), obtaining a high-accuracy dictionary and quantifying its effect on fusion results are still open issues. In this paper, we introduce additional image pairs (each pair consisting of a Landsat Operational Land Imager acquisition and a Moderate Resolution Imaging Spectroradiometer acquisition covering part of Baotou, China) into the coupled dictionary training process based on the K-SVD (K-means Singular Value Decomposition) algorithm, and attempt to improve the fusion results of two existing sparse-representation-based fusion models (utilizing one and two available image pairs, respectively). The results show that additional eligible image pairs tend to yield a more accurate overcomplete dictionary, which generally indicates a better image representation and contributes to effective fusion performance, provided the added image pair has seasonal aspects and spatial structure similar to the original pair.
It is, therefore, reasonable to construct a multi-dictionary training pattern for generating a series of high spatial-resolution images from limited acquisitions.
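For readers unfamiliar with the algorithm, a toy K-SVD loop (sparse coding by orthogonal matching pursuit, then per-atom SVD updates) can be sketched on synthetic data. This is a generic single-dictionary illustration, not the coupled Landsat-MODIS patch training of the paper.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y over dictionary D."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))          # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)  # refit on support
        r = y - D[:, idx] @ coef                              # new residual
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, k, n_iter=10, seed=0):
    """Toy K-SVD: alternate sparse coding (OMP) with per-atom rank-1 SVD updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, k) for y in Y.T])
        for j in range(n_atoms):
            users = np.nonzero(X[j])[0]       # signals that use atom j
            if users.size == 0:
                continue
            # residual without atom j's contribution, restricted to its users
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                 # best rank-1 refit of atom j
            X[j, users] = S[0] * Vt[0]
    return D, X
```

On data generated from an exactly sparse model, a handful of iterations drives the reconstruction error down sharply, which is the behaviour the paper exploits when richer training pairs sharpen the dictionary.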

Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning

Procedia PDF Downloads 245
32327 Deployment of Electronic Healthcare Records and Development of Big Data Analytics Capabilities in the Healthcare Industry: A Systematic Literature Review

Authors: Tigabu Dagne Akal

Abstract:

Electronic health records (EHRs) help to store, maintain, and appropriately handle patient histories for proper treatment and decision-making. Merging EHRs with big data analytics (BDA) capabilities enables healthcare stakeholders to provide effective and efficient treatments for chronic diseases. Though huge opportunities and efforts exist in the deployment of EHRs and the development of BDA, there are challenges in securing the resources and organizational capabilities required to achieve the competitive advantage and sustainability of EHRs and BDA. The resource-based view (RBV) and information systems (IS) and non-IS theories should be extended to examine the organizational capabilities and resources required for successful data analytics in the healthcare industry. The main purpose of this study is to develop a conceptual framework for the development of healthcare BDA capabilities based on past work, so that researchers can extend it. A research question was formulated to guide the search strategy, followed by study selection. Based on the selected studies, a conceptual framework for the development of BDA capabilities in healthcare settings was formulated.

Keywords: EHR, EMR, Big data, Big data analytics, resource-based view

Procedia PDF Downloads 120
32326 Semantic Textual Similarity on Contracts: Exploring Multiple Negative Ranking Losses for Sentence Transformers

Authors: Yogendra Sisodia

Abstract:

Researchers are becoming more interested in extracting useful information from legal documents thanks to the development of large-scale language models in natural language processing (NLP), and deep learning has accelerated the creation of powerful text mining models. Legal fields such as contracts benefit greatly from semantic text search, since it makes it quick and easy to find related clauses. Once sentence embeddings have been collected, it is relatively simple to locate sentences with a comparable meaning throughout the entire legal corpus. The author investigated two pre-trained language models for this task, MiniLM and RoBERTa, and further fine-tuned them on legal contracts. Multiple Negatives Ranking Loss was used to train the sentence transformers. The fine-tuned language models and sentence transformers showed promising results.
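The core of Multiple Negatives Ranking Loss is an in-batch softmax over similarity scores: each anchor's paired sentence is its positive, and the other pairs in the batch serve as negatives. A minimal NumPy sketch of the loss computation follows; the actual training used the sentence-transformers library, and the scale factor here is an assumed hyperparameter.

```python
import numpy as np

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    """In-batch-negatives loss: for each anchor i, its positive i must score
    higher than every other positive j in the batch. anchors/positives are
    (n, d) embedding matrices for n sentence pairs."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * a @ p.T                       # (n, n) scaled cosine similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_probs.diagonal().mean()            # correct pairs lie on the diagonal
```

Because every other pair in the batch acts as a free negative, no explicit negative mining is needed, which is part of the loss's appeal for contract-clause search.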

Keywords: legal contracts, multiple negative ranking loss, natural language inference, sentence transformers, semantic textual similarity

Procedia PDF Downloads 89
32325 Views from Shores Past: Palaeogeographic Reconstructions as an Aid for Interpreting the Movement of Early Modern Humans on and between the Islands of Wallacea

Authors: S. Kealy, J. Louys, S. O’Connor

Abstract:

The island archipelago that stretches between the continents of Sunda (Southeast Asia) and Sahul (Australia-New Guinea), comprising much of modern-day Indonesia as well as Timor-Leste, represents the biogeographic region of Wallacea. The islands of Wallacea are significant archaeologically because they have never been connected to the mainlands of either Sunda or Sahul; colonization of these islands, and subsequently of Australia and New Guinea, by early modern humans would therefore have necessitated some form of water crossings. Accurate palaeogeographic reconstructions of the Wallacean Archipelago for this period are important not only for modeling likely routes of colonization but also for reconstructing the landscapes, and hence resources, available to the first colonists. Here we present five digital reconstructions of the coastal outlines of Wallacea and Sahul for the periods 65, 60, 55, 50, and 45,000 years ago, using the latest bathymetric charts and a sea-level model adjusted to account for the average uplift rate known from Wallacea. These data were also used to reconstruct island areal extent as well as topography for each time period. The reconstructions allowed us to determine the distance from the coast and the relative elevation of the earliest archaeological sites on each island where such records exist. This enabled us to approximate how much effort the exploitation of coastal resources would have taken for early colonists, and how important such resources were. The reconstructions also allowed us to estimate visibility for each island in the archipelago and to model how intervisible the islands were during the period of likely human colonisation. We demonstrate how these models provide archaeologists with an important basis for visualising this ancient landscape and for interpreting how it was originally viewed, traversed, and exploited by its earliest modern human inhabitants.
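A first-order intervisibility check of the kind described reduces to comparing horizon distances on a spherical Earth. This is a deliberate simplification for illustration: it ignores atmospheric refraction and intervening terrain, both of which a full analysis would need to model.

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius, m

def horizon_distance(h):
    """Approximate distance (m) to the sea-level horizon from height h (m),
    using the spherical-Earth formula d = sqrt(2*R*h)."""
    return math.sqrt(2.0 * R_EARTH * h)

def intervisible(h1, h2, separation_m):
    """Two peaks of elevation h1 and h2 (m) can see each other when their
    horizon circles overlap, i.e. d(h1) + d(h2) >= the distance between them."""
    return horizon_distance(h1) + horizon_distance(h2) >= separation_m
```

With lowered glacial-period sea levels, effective peak heights rise and inter-island distances shrink, so both terms of this check move in favour of intervisibility.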

Keywords: Wallacea, palaeogeographic reconstructions, islands, intervisibility

Procedia PDF Downloads 186
32324 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of the Fire Dynamics Simulator against the CUTE Experiments of the COST ES1006 Action

Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere

Abstract:

Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, LCPP postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. The CUTE dataset, collected in the city of Hamburg, and its wind tunnel mock-up have been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand and, on the other, with the results of the models involved in the COST Action. The most time-consuming part of creating input data for simulations is the transfer of obstacle geometry information to the format required by FDS; we therefore developed Python codes to convert building and topographic data to the FDS input file automatically. To evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE), and the fraction of predictions within a factor of two of observations (FAC2). Like the other CFD models tested in the COST Action, FDS demonstrates good agreement with measured concentrations; furthermore, the metrics indicate that FB and NMSE fall within acceptable tolerances.
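The three metrics have standard definitions in the dispersion-validation literature (the Chang and Hanna formulations). A minimal sketch follows; the acceptance thresholds often quoted for urban dispersion (|FB| ≤ 0.67, NMSE ≤ 6, FAC2 ≥ 0.3) are community conventions, not values taken from this paper.

```python
import numpy as np

def dispersion_metrics(obs, pred):
    """Fractional bias (FB), normalised mean square error (NMSE), and the
    fraction of predictions within a factor of two of observations (FAC2)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    fb = (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
    return fb, nmse, fac2
```

FB isolates systematic over- or under-prediction, NMSE captures overall scatter, and FAC2 is robust to the occasional outlier, which is why the three are reported together.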

Keywords: numerical simulations, atmospheric dispersion, COST ES1006 Action, CFD model, CUTE experiments, wind tunnel data, numerical results

Procedia PDF Downloads 121
32323 The Next Generation Neutrinoless Double-Beta Decay Experiment nEXO

Authors: Ryan Maclellan

Abstract:

The nEXO Collaboration is designing a very large detector for neutrinoless double beta decay of Xe-136. The nEXO detector is rooted in the current EXO-200 program, which has reached a sensitivity for the half-life of the decay of 1.9x10^25 years with an exposure of 99.8 kg-y. The baseline nEXO design assumes 5 tonnes of liquid xenon, enriched in the mass 136 isotope, within a time projection chamber. The detector is being designed to reach a half-life sensitivity of > 5x10^27 years covering the inverted neutrino mass hierarchy, with 5 years of data. We present the nEXO detector design, the current status of R&D efforts, and the physics case for the experiment.
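The quoted half-life sensitivity translates directly into an expected number of decays via the usual radioactive counting relation. The sketch below is a back-of-envelope calculation that ignores enrichment fraction, detection efficiency, and backgrounds, so it only indicates the order of magnitude involved.

```python
import math

N_A = 6.02214076e23  # Avogadro constant, 1/mol

def expected_decays(mass_kg, molar_mass_kg, t_years, half_life_years):
    """Expected decays for exposure t << T_1/2:
    N = ln(2) * (m * N_A / M) * t / T_1/2."""
    n_atoms = mass_kg * N_A / molar_mass_kg
    return math.log(2.0) * n_atoms * t_years / half_life_years

# ~5 tonnes of Xe-136 (M ~ 0.136 kg/mol) observed for 5 years
# at the target sensitivity T_1/2 = 5e27 years
n = expected_decays(5000.0, 0.136, 5.0, 5e27)
```

The result is on the order of fifteen decays in five years from five tonnes of xenon, which conveys why the experiment demands such extreme radiopurity and background suppression.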

Keywords: double-beta, Majorana, neutrino, neutrinoless

Procedia PDF Downloads 405
32322 Prediction of Compressive Strength of Concrete from Early Age Test Results Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response surface methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates 'finding the flats' on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average, accounting for unknown sources of variation. Dual response plus propagation of error provides a more useful model of overall response variation. In our case, we implemented this technique to predict the 28-day compressive strength of concrete, since waiting 28 days is time consuming while quality control must be ensured promptly. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. Data used for this study were obtained from experimental programs at the University of Benghazi, Civil Engineering Department. A total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete constituents, namely cement, coarse aggregate, fine aggregate, and water, were utilized in all mixes, with different mix proportions and different water-cement ratios. The proposed mathematical models are capable of predicting the required 28-day compressive strength of concrete from early-age results.
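A second-order response surface of the kind RSM fits can be sketched with ordinary least squares. The predictors used below (a 7-day strength and the water-cement ratio) and the synthetic data are hypothetical illustrations, not the Benghazi dataset or the paper's fitted model.

```python
import numpy as np

def fit_rsm(x1, x2, y):
    """Fit the full quadratic response surface
    y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 by least squares."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

def predict(b, x1, x2):
    """Evaluate the fitted quadratic surface at (x1, x2)."""
    return b[0] + b[1]*x1 + b[2]*x2 + b[3]*x1**2 + b[4]*x2**2 + b[5]*x1*x2
```

Once fitted, the surface can be interrogated anywhere in the design region, which is what allows a 28-day strength to be estimated from early-age measurements instead of waiting out the full curing period.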

Keywords: mix proportioning, response surface methodology, compressive strength, optimal design

Procedia PDF Downloads 250
32321 Numerical Simulation of a Three-Dimensional Framework under the Action of Two-Dimensional Moving Loads

Authors: Jia-Jang Wu

Abstract:

The objective of this research is to develop a general technique for predicting the dynamic behaviour of a three-dimensional scale crane model subjected to time-dependent moving point forces by means of conventional finite element packages. To this end, the whole scale crane model is divided into two parts: the stationary framework and the moving substructure. The dynamic responses of the scale crane model can then be predicted from the forced vibration responses of the stationary framework under the four time-dependent moving point forces induced by the moving substructure. Since the magnitudes and positions of the moving point forces depend on the relative positions of the trolley, the moving substructure, and the stationary framework, the numerical results show that the time histories of the moving speeds of the substructure and the trolley are the key factors affecting the dynamic responses of the scale crane model.
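In such analyses, a moving point force enters the stationary framework as consistent nodal loads computed from the element shape functions; for a two-node Euler-Bernoulli beam element this is the classical Hermitian distribution. The sketch below is a generic finite element ingredient, not the paper's specific crane model.

```python
def moving_load_nodal_forces(P, x, L):
    """Consistent nodal loads (F1, M1, F2, M2) for a transverse point force P
    located at distance x along a 2-node Euler-Bernoulli beam element of
    length L, using the cubic Hermitian shape functions."""
    s = x / L
    N1 = 1 - 3 * s**2 + 2 * s**3          # translation at node 1
    N2 = L * (s - 2 * s**2 + s**3)        # rotation at node 1
    N3 = 3 * s**2 - 2 * s**3              # translation at node 2
    N4 = L * (s**3 - s**2)                # rotation at node 2
    return P * N1, P * N2, P * N3, P * N4
```

As the load travels across the element, re-evaluating these four quantities at each time step yields exactly the kind of time-dependent nodal force histories that drive the forced vibration analysis of the stationary framework.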

Keywords: moving load, moving substructure, dynamic responses, forced vibration responses

Procedia PDF Downloads 335