Search results for: threshold point
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5546

4706 Geothermal Energy Evaluation of Lower Benue Trough Using Spectral Analysis of Aeromagnetic Data

Authors: Stella C. Okenu, Stephen O. Adikwu, Martins E. Okoro

Abstract:

The geothermal energy resource potential of the Lower Benue Trough (LBT) in Nigeria was evaluated in this study using spectral analysis of high-resolution aeromagnetic (HRAM) data. The reduced-to-the-equator aeromagnetic data were divided into sixteen (16) overlapping blocks, and each block was analyzed to obtain the radially averaged power spectrum, which enabled the computation of the top and centroid depths to magnetic sources. These values were then used to assess the Curie Point Depth (CPD), geothermal gradients, and heat flow variations in the study area. Results showed that the CPD varies from 7.03 to 18.23 km, with an average of 12.26 km; geothermal gradient values vary between 31.82 and 82.50°C/km, with an average of 51.21°C/km; and heat flow varies from 79.54 to 206.26 mW/m², with an average of 128.02 mW/m². Shallow CPD zones that run from the eastern through the western and southwestern parts of the study area correspond to zones of high geothermal gradients and high subsurface heat flow. These areas signify zones of anomalous subsurface thermal conditions and are therefore recommended for detailed geothermal energy exploration studies.
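The reported end-member values are consistent with the standard spectral-method relations: a Curie temperature of about 580 °C divided by the CPD gives the geothermal gradient, and multiplying by an assumed average crustal conductivity of 2.5 W/(m·K) gives the heat flow. A minimal sketch of that arithmetic (the two constants are conventional assumptions, not values stated in the abstract):

```python
# Standard Curie-point-depth relations used in such studies.
# Assumed constants: Curie temperature 580 degC, conductivity 2.5 W/(m*K).

CURIE_TEMP_C = 580.0   # Curie temperature of magnetite, degC
CONDUCTIVITY = 2.5     # assumed average crustal thermal conductivity, W/(m*K)

def geothermal_gradient(cpd_km: float) -> float:
    """Geothermal gradient (degC/km) from Curie point depth (km)."""
    return CURIE_TEMP_C / cpd_km

def heat_flow(cpd_km: float) -> float:
    """Surface heat flow (mW/m^2): q = k * dT/dz. With dT/dz in degC/km
    and k in W/(m*K), the numeric product is already in mW/m^2."""
    return CONDUCTIVITY * geothermal_gradient(cpd_km)

print(round(geothermal_gradient(7.03), 2))   # shallowest CPD -> highest gradient
print(round(heat_flow(7.03), 2))
print(round(geothermal_gradient(18.23), 2))  # deepest CPD -> lowest gradient
```

Running this reproduces the abstract's extremes: 82.5 °C/km and 206.26 mW/m² for the 7.03 km block, and 31.82 °C/km for the 18.23 km block.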

Keywords: geothermal energy, curie-point depth, geothermal gradient, heat flow, aeromagnetic data, LBT

Procedia PDF Downloads 54
4705 A Large Dataset Imputation Approach Applied to Country Conflict Prediction Data

Authors: Benjamin Leiby, Darryl Ahner

Abstract:

This study demonstrates an alternative stochastic imputation approach for large datasets when preferred commercial packages struggle to iterate due to numerical problems. A large country conflict dataset motivates the search to impute missing values in data whose missingness is well over the common 20% threshold. The methodology capitalizes on correlation while using model residuals to provide the uncertainty in estimating unknown values. Examination of the methodology provides insight toward choosing linear or nonlinear modeling terms. Static tolerances common in most packages are replaced with tailorable tolerances that exploit the residuals to fit each data element. The methodology evaluation includes observing computation time, model fit, and the comparison of known values to replaced values created through imputation. Overall, the country conflict dataset shows promise when modeling first-order interactions, while presenting a need for further refinement that mimics predictive mean matching.
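The residual-based idea can be sketched in a few lines: fit a regression on the observed cases, then fill each gap with the model prediction plus a noise term resampled from the model residuals, so imputed values carry the estimation uncertainty. This is a toy sketch with a simple linear model on synthetic data; the paper's tailorable tolerances and interaction terms are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_impute(x, y):
    """Impute NaNs in y by linear regression on x, adding noise drawn
    from the empirical model residuals to carry estimation uncertainty
    (a simplified sketch of the residual-based approach described)."""
    miss = np.isnan(y)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X[~miss], y[~miss], rcond=None)
    resid = y[~miss] - X[~miss] @ beta
    out = y.copy()
    out[miss] = X[miss] @ beta + rng.choice(resid, size=miss.sum())
    return out

x = np.linspace(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)
y[::5] = np.nan                      # ~20% missingness, as in the abstract
filled = stochastic_impute(x, y)
print(np.isnan(filled).any())        # all gaps imputed
```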

Keywords: correlation, country conflict, imputation, stochastic regression

Procedia PDF Downloads 105
4704 Adaptive Power Control of the City Bus Integrated Photovoltaic System

Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker

Abstract:

This paper presents an adaptive controller that tracks the maximum power point of photovoltaic (PV) modules under rapid irradiation changes on a city-bus roof. Photovoltaic systems have become a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on the roofs of its buses. The solar panels convert solar energy into electric energy used to supply the buses' electric equipment. This decreases the load on the buses' alternators, leading to lower fuel consumption and bringing both economic and ecological benefits. A DC-DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. Besides the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a main role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must maintain high tracking accuracy in order to compensate for the rapid irradiation changes caused by the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. Several control algorithms for maximum power point tracking have been developed in the literature. However, the energy performance of MPPT algorithms has not been clarified for vehicle applications, which involve rapid changes of environmental factors. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle, and some problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel accordingly.
The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the controlled object and its outer environment. The presented algorithm was capable of reaching this aim. The structure of the adaptive controller was simplified on purpose: since even such a simple controller, armed only with an ability to learn, meets the control objective, a more complex algorithm can only improve the result. The presented adaptive control system of the PV system is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of algorithms in a motion loop are presented and discussed, covering fast changes in irradiation and partial shading conditions. The results clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency, outperforming the compared algorithms. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013.
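The scanning technique described above can be illustrated compactly: sweep the duty cycle across its range, measure the delivered power at each step, and lock onto the duty cycle that maximised it. The panel model below is a toy stand-in with a single power maximum, not the authors' hardware or their learning controller:

```python
# Sketch of the scanning MPPT idea: sweep the converter duty cycle,
# record panel power at each step, and settle on the duty that maximised it.

def panel_power(duty, irradiance=1000.0):
    """Toy power curve with a single maximum (an assumption, not a real panel)."""
    v = 40.0 * (1.0 - duty)            # operating voltage set by the duty cycle
    i = irradiance / 1000.0 * 8.0 * (1.0 - (v / 40.0) ** 6)  # crude I-V shape
    return max(v * i, 0.0)

def scan_mpp(measure, steps=100):
    """Scan duty in [0, 1) and return (best_duty, best_power)."""
    return max(((d / steps, measure(d / steps)) for d in range(steps)),
               key=lambda t: t[1])

duty, power = scan_mpp(panel_power)
print(duty, round(power, 1))
```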

Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter

Procedia PDF Downloads 196
4703 The Importance of Teachers' Self-Efficacy in the Field of Education of Socially Disadvantaged Students

Authors: Anna Petr Safrankova, Karla Hrbackova

Abstract:

The education of socially disadvantaged students has long been in the spotlight of pedagogical research in both the Czech and foreign environments. These studies investigate, among other things, the topic from the point of view of individual compensatory measures that try to overcome or remove the social disadvantage. The focus of this study is to highlight the important role of teachers in the education of this specific group of students, among others in terms of teachers' pre-graduate training. The aim of the study is to point out the importance of teachers' self-efficacy. The study is based on the assumption that a teacher's self-efficacy may significantly affect the teacher's perception of a particular group of students and thereby affect the education of those students. The survey involved 245 teachers from two regions in the Czech Republic. The research used the TES questionnaire (with the dimensions personal teaching efficacy – PTE and general teaching efficacy – GTE) by Gibson and Dembo and a semantic differential (containing 12 scales with bipolar adjectives) that investigated the components of teachers' attitudes toward socially disadvantaged students. It was found that teachers' self-efficacy significantly affects their perception of the group of socially disadvantaged students. Based on this finding, we believe that it is necessary to work with this concept (preparing teachers to educate this specific group of students) already during higher education, and especially during pre-graduate teacher training.

Keywords: teachers, socially disadvantaged students, semantic differential, teachers' self-efficacy

Procedia PDF Downloads 409
4702 Identifying Necessary Words for Understanding Academic Articles in English as a Second or a Foreign Language

Authors: Stephen Wagman

Abstract:

This paper identifies three common structures in English sentences that are important for understanding academic texts, regardless of the characteristics or background of the readers or whether they are reading English as a second or a foreign language. Adapting a model from the humanities, the explication of texts used in literary studies, the paper analyses sample sentences to reveal structures that enable the reader not only to decide which words are necessary for understanding the main ideas but to make that decision without knowing the meaning of the words. By their very syntax, noun structures point to the key word for understanding them. As a rule, the key noun is followed by easily identifiable prepositions, relative pronouns, or verbs, and preceded by single adjectives. With few exceptions, the modifiers are unnecessary for understanding the idea of the sentence. In addition, sentences are often structured by lists in which the items frequently consist of parallel groups of words. The principle of a list is that all the items are similar in meaning, so it is not necessary to understand every item to understand the point of the list. This principle is especially important when the items are long or there is more than one list in the same sentence. The similarity in meaning of the items enables readers to reduce sentences that are hard to grasp to an understandable core without excessive use of a dictionary. Finally, the idea of subordination and the identification of subordinate parts of sentences through connecting words make it possible for readers to focus on main ideas without having to sift through the less important and more numerous secondary structures. Sometimes a main idea requires a subordinate one to complete its meaning, but usually subordinate ideas are unnecessary for understanding the main point of the sentence and its part in the development of the argument from sentence to sentence.
Moreover, the connecting words themselves indicate the functions of the subordinate structures, which most frequently show similarity and difference or reasons and results. Recognition of all of these structures can enable students not only to read more efficiently but also to focus their attention on the development of the argument; and this, rather than a multitude of unknown vocabulary items, the repetition in lists, or the subordination in sentences, is the one necessary element for comprehension of academic articles.

Keywords: development of the argument, lists, noun structures, subordination

Procedia PDF Downloads 238
4701 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media

Authors: Jinghui Peng, Shanyu Tang, Jia Li

Abstract:

Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is a pressing demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data are then transferred from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain of original VoIP streams and stego VoIP streams are compared in turn using a t-test, yielding a p-value of 7.5686E-176, far below the significance threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
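The comparison step can be sketched with synthetic frames in place of real VoIP payloads: each frame is moved to the frequency domain with an FFT, its mean spectral power is computed, and a two-sample t-statistic (computed directly here, rather than with a statistics package) compares the cover and stego distributions:

```python
# Sketch of the described pipeline: FFT power spectra of two streams,
# compared with a Welch two-sample t-statistic. Frames are synthetic
# stand-ins for VoIP payloads, not real traffic.
import numpy as np

rng = np.random.default_rng(1)

def mean_spectral_power(frames):
    """Average FFT power per frame (frames: n_frames x frame_len)."""
    return (np.abs(np.fft.rfft(frames, axis=1)) ** 2).mean(axis=1)

cover = rng.normal(0.0, 1.0, size=(200, 256))           # original stream
stego = cover + rng.normal(0.0, 0.3, size=cover.shape)  # embedding perturbation

p_cover = mean_spectral_power(cover)
p_stego = mean_spectral_power(stego)

# Welch t-statistic on per-frame mean power; a large |t| flags embedding.
n = p_cover.size
t = (p_stego.mean() - p_cover.mean()) / np.sqrt(
    p_cover.var(ddof=1) / n + p_stego.var(ddof=1) / n)
print(t > 2.0)
```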

Keywords: steganalysis, security, Fast Fourier Transform, streaming media

Procedia PDF Downloads 128
4700 Creativity in Industrial Design as an Instrument for the Achievement of the Proper and Necessary Balance between Intuition and Reason, Design and Science

Authors: Juan Carlos Quiñones

Abstract:

Time has passed since Victor Papanek charged that industrial design had 'put murder on a mass-production basis.' Industrial design applies methods from different disciplines with a strategic approach to place humans at the center of the design process and to deliver solutions that are meaningful and desirable for users and for the market. This analysis summarizes some of the discussions that occurred at the 6th International Forum of Design as a Process, June 2016, Valencia, whose aim was to find new linkages between systems and design interactions in order to define their social consequences. Through knowledge management, we are able to transform the intangible by using design as a transforming function capable of converting intangible knowledge into tangible solutions (i.e., products and services demanded by society). Industrial designers use knowledge consciously as a starting point for the ideation of the product. The handling of the intangible becomes more and more relevant over time as different methods emerge for knowledge extraction and subsequent organization. The different methodologies applied to the industrial design discipline, and the evolution of the discipline's own methods, underpin the cultural and scientific background knowledge that serves as a starting point of thought in response to needs; the whole comes through the instrument of creativity to achieve the proper and necessary balance between intuition and reason, design and science.

Keywords: creative process, creativity, industrial design, intangible

Procedia PDF Downloads 275
4699 Deconstructing Abraham Maslow’s Hierarchy of Needs: A Comparison of Organizational Behaviour and Branding Perspectives

Authors: Satya Girish Goparaju

Abstract:

It is said that the pyramid of needs is not an invention of Maslow's but only a graphical representation of his theory. It is also interesting to note how business management schools have adopted this interpreted theory in organizational behavior and marketing subjects. Against this background, this article raises the point that the hierarchy of needs proposed by Abraham Maslow need not necessarily be represented as a pyramid; a linear model would be more suitable in the present times. To make this point, the article presents a comparative study of 'self-actualization' (the apex of the pyramid) in organizational behavior and branding contexts, respectively. It tries to shed light on the original theory proposed by Maslow, which stated that self-actualization is attained through living one's life completely and not by satisfying individual needs. From an organizational behavior perspective, it can thus be argued that self-actualization is irrelevant, as an employee's life is not confined to work, and satisfied needs in a workplace will only make the employee perform better. In the same way, a brand does not sell products to satisfy all needs of a consumer and has no direct role in attaining self-actualization. For the purpose of this study, select employees of a branding agency will respond to a questionnaire both as employees of an organization and as consumers of a global smartphone brand. This study aims to deconstruct the interpretations that have been widely accepted by both organizational behavior and branding professionals.

Keywords: branding, marketing, needs, organizational behavior, psychology

Procedia PDF Downloads 214
4698 Applying Unmanned Aerial Vehicle on Agricultural Damage: A Case Study of the Meteorological Disaster on Taiwan Paddy Rice

Authors: Chiling Chen, Chiaoying Chou, Siyang Wu

Abstract:

Taiwan is located in the western Pacific Ocean at the intersection of continental and marine climates. Typhoons frequently strike Taiwan and bring meteorological disasters, i.e., heavy flooding, landslides, loss of life and property, etc. Global climate change brings more extreme meteorological disasters, so techniques that improve disaster prevention and mitigation need to be developed, and improving rescue processes and rehabilitation is important as well. In this study, UAVs (Unmanned Aerial Vehicles) are applied to take instant images for improving disaster investigation and rescue processes. Paddy rice fields in central Taiwan, damaged by heavy rain during the monsoon season in June 2016, are the study area. The UAV images provide high ground resolution (3.5 cm) with 3D point clouds, used to develop image discrimination techniques and a digital surface model (DSM) of rice lodging. Firstly, supervised image classification with the Maximum Likelihood method is used to delineate the area of rice lodging. Secondly, 3D point clouds generated by Pix4D Mapper are used to develop a DSM for classifying the lodging levels of the paddy rice. As results, the discrimination accuracy of rice lodging is 85% by supervised image classification, and the classification accuracy of lodging level is 87% by DSM. Therefore, UAVs not only provide instant images of agricultural damage after a meteorological disaster, but the image discrimination of rice lodging also reaches acceptable accuracy (>85%). In the future, UAV and image discrimination technologies will be applied to different crop fields, and the image discrimination results will be overlapped with the administrative boundaries of paddy rice to establish a GIS-based assistance system for agricultural damage discrimination. The time and labor for damage detection and monitoring would thereby be greatly reduced.
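The per-pixel supervised step can be sketched with one-dimensional Gaussian class models (a simplification: the actual work classifies multiband imagery): fit a mean and variance per class from training samples, then assign each pixel to the class with the highest likelihood. All values below are toy reflectances, not the study's data:

```python
# Minimal sketch of per-pixel maximum likelihood classification,
# the supervised approach applied to delineate lodged rice.
import math

def fit(samples):
    """Fit mean and variance of a 1-D Gaussian class model."""
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return m, v

def log_likelihood(x, params):
    m, v = params
    return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)

def classify(x, classes):
    """Assign x to the class whose Gaussian model gives it highest likelihood."""
    return max(classes, key=lambda c: log_likelihood(x, classes[c]))

classes = {
    "lodged": fit([0.20, 0.25, 0.22, 0.18, 0.24]),    # toy reflectance samples
    "upright": fit([0.60, 0.55, 0.62, 0.58, 0.65]),
}
print(classify(0.21, classes), classify(0.59, classes))
```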

Keywords: monsoon, supervised classification, Pix4D, 3D point clouds, discrimination accuracy

Procedia PDF Downloads 289
4697 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. The ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constructed containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events by using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by grid search and 10-fold cross-validation. The SVMs are designed specifically for each patient by tuning the kernel parameters to obtain the optimal classification performance.
Implementing the developed classification technique on real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based on ECG recordings obtained during ischemia alone is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy computes the average log-likelihood values of ECG segments and compares them with a range of different threshold values. For different discrimination thresholds and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, a comparison between the performances of the SVM- and GMM-based classification showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
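The Neyman-Pearson thresholding step can be sketched as follows, with a single multivariate Gaussian standing in for the paper's GMM and synthetic two-dimensional features standing in for the ST-T features: fix the false-alarm rate by choosing the threshold as a low quantile of the healthy-state log-likelihoods, then count how many ischemic segments fall below it.

```python
# Sketch of Neyman-Pearson outlier detection on log-likelihood scores.
# A one-component Gaussian stands in for the GMM; features are synthetic.
import numpy as np

rng = np.random.default_rng(2)

normal_feats = rng.normal(0.0, 1.0, size=(500, 2))    # toy "non-ischemic"
ischemic_feats = rng.normal(3.0, 1.0, size=(500, 2))  # toy "ischemic"

mu = normal_feats.mean(axis=0)
cov = np.cov(normal_feats.T)
inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def segment_loglik(segments):
    """Gaussian log-likelihood of each segment's feature vector."""
    d = segments - mu
    return -0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet
                   + segments.shape[1] * np.log(2 * np.pi))

scores_n = segment_loglik(normal_feats)
scores_i = segment_loglik(ischemic_feats)

thr = np.quantile(scores_n, 0.05)      # fix false-alarm rate at ~5%
p_detect = (scores_i < thr).mean()     # probability of detection
print(round(p_detect, 2))
```

Sweeping `thr` over a range of quantiles and recording (false-alarm, detection) pairs traces out the ROC curve the abstract refers to.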

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 143
4696 Microbial Diversity Assessment in Household Point-of-Use Water Sources Using Spectroscopic Approach

Authors: Syahidah N. Zulkifli, Herlina A. Rahim, Nurul A. M. Subha

Abstract:

Sustaining water quality is critical in order to avoid harmful health consequences for end-user consumers. The detection of microbial impurities at the household level is the foundation of water security, yet water quality is currently monitored only at water utilities or infrastructure, such as water treatment facilities or reservoirs. This research provides a first-hand scientific understanding of the microbial composition present in Malaysia's household point-of-use (POU) water supply as influenced by seasonal fluctuations, standstill periods, and flow dynamics, using the NIR-Raman spectroscopic technique. According to the findings, 20% of the water samples were contaminated by pathogenic bacteria, namely Legionella and Salmonella cells. A comparison of the spectra reveals significant signature peaks (420 cm⁻¹ to 1800 cm⁻¹), including species-specific bands. This demonstrates the importance of regularly monitoring POU water quality to provide a safe and clean water supply to homeowners. Conventional Raman spectroscopy, however, is not suited to real-time monitoring. Therefore, this study introduces an alternative micro-spectrometer to give a rapid and sustainable way of monitoring POU water quality. Assessing microbiological threats in the water supply becomes more reliable and efficient by leveraging an IoT protocol.
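The signature-peak idea reduces to locating local maxima above a noise floor inside the 420-1800 cm⁻¹ window. A toy sketch on a synthetic spectrum (the band positions and height threshold are illustrative assumptions, not the species-specific bands of the study):

```python
# Sketch of signature-peak detection in a Raman spectrum.
import math

def gaussian(x, center, width, height):
    return height * math.exp(-((x - center) / width) ** 2)

shifts = list(range(300, 2000, 2))                     # Raman shift axis, cm^-1
spectrum = [gaussian(s, 720, 15, 1.0) + gaussian(s, 1450, 20, 0.8) + 0.02
            for s in shifts]                           # synthetic two-band spectrum

def find_peaks(xs, ys, lo=420, hi=1800, min_height=0.3):
    """Local maxima above min_height inside the [lo, hi] shift window."""
    peaks = []
    for i in range(1, len(ys) - 1):
        if lo <= xs[i] <= hi and ys[i] > min_height and ys[i - 1] < ys[i] > ys[i + 1]:
            peaks.append(xs[i])
    return peaks

print(find_peaks(shifts, spectrum))
```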

Keywords: microbial contaminants, water quality, water monitoring, Raman spectroscopy

Procedia PDF Downloads 86
4695 Two-Dimensional Nanostack Based On-Chip Wiring

Authors: Nikhil Jain, Bin Yu

Abstract:

The material behavior of graphene, a single layer of carbon lattice, is extremely sensitive to its dielectric environment. We demonstrate improvement in the electronic performance of graphene nanowire interconnects with full encapsulation by the lattice-matched, chemically inert, 2D layered insulator hexagonal boron nitride (h-BN). A novel layer-based transfer technique is developed to construct the h-BN/MLG/h-BN heterostructures. The encapsulated graphene wires are characterized and compared with wires on SiO2 or h-BN substrates without a passivating h-BN layer. Significant improvements in maximum current-carrying density, breakdown threshold, and power density are observed in the encapsulated graphene wires. These critical improvements are achieved without compromising the carrier transport characteristics in graphene. Furthermore, the encapsulated graphene wires exhibit electrical behavior less sensitive to ambient conditions, as compared with the non-passivated ones. Overall, the h-BN/graphene/h-BN heterostructure presents a robust material platform towards the implementation of high-speed carbon-based interconnects.

Keywords: two-dimensional nanosheet, graphene, hexagonal boron nitride, heterostructure, interconnects

Procedia PDF Downloads 434
4694 Impact Logistic Management to Reduce Costs

Authors: Waleerak Sittisom

Abstract:

The objectives of this research were to analyze transportation route management and to identify potential cost reductions in logistics operations. In-depth interview techniques and small group discussions were utilized with 25 participants from various backgrounds in the area of logistics. The findings revealed four areas in which companies are able to effectively manage logistics cost reduction: managing the space within the transportation vehicles, managing transportation personnel, managing transportation cost, and managing control of transportation. On the other hand, there were four areas that companies were unable to manage effectively: the working process of transportation, the route planning of transportation, service point management, and technology management. Cost reduction is feasible in five areas: personnel management, the working process, map planning, service point planning, and technology implementation. To reduce costs, transportation companies should suggest that customers use a file system to save truck space. The companies also need to adopt new technology to manage their information systems so that packages can be reached easily, safely, and quickly. Staff need to be trained regularly to increase knowledge and skills, and teamwork is required to effectively reduce costs.

Keywords: cost reduction, management, logistics, transportation

Procedia PDF Downloads 486
4693 Open Circuit MPPT Control Implemented for PV Water Pumping System

Authors: Rabiaa Gammoudi, Najet Rebei, Othman Hasnaoui

Abstract:

Photovoltaic systems use different techniques for tracking the Maximum Power Point (MPP) to provide the highest possible power to the load regardless of variations in climatic conditions. In this paper, the proposed method is the Open Circuit (OC) method under sudden and random variations of insolation. The simulation results of the water pumping system controlled by the OC method are validated experimentally in real time using a test bench composed of a centrifugal pump powered by a PV generator (PVG) via a boost chopper for the adaptation between the source and the load. The output of the DC/DC converter supplies a LOWARA motor pump through a DC/AC inverter. The control part is provided by a computer incorporating a dSPACE DS1104 card running the Matlab/Simulink environment for visualization and data acquisition. The results clearly show the effectiveness of the proposed control, with very good performance, and demonstrate the usefulness of the developed algorithm in countering the degradation of PVG performance under varying climatic factors, with a very good yield.
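The OC method itself is compact: periodically disconnect the array, sample the open-circuit voltage, and command an operating voltage equal to a fixed fraction of it. The fraction, typically in the 0.7-0.8 range, is panel-dependent; the value below is an assumption:

```python
# Sketch of the open-circuit (fractional V_oc) MPPT method.

K_FRACTION = 0.76   # assumed empirical V_mpp / V_oc ratio, panel-dependent

def open_circuit_mppt(sample_voc, set_voltage):
    """One OC-MPPT update: measure V_oc, command V_ref = k * V_oc."""
    voc = sample_voc()
    v_ref = K_FRACTION * voc
    set_voltage(v_ref)
    return v_ref

# Toy usage: as V_oc drops with falling insolation, the reference tracks it.
log = []
for voc in (21.0, 19.5, 18.0):
    open_circuit_mppt(lambda v=voc: v, log.append)
print([round(v, 2) for v in log])
```

The trade-off, which motivates the sudden-insolation tests in the paper, is that the array delivers no power while V_oc is being sampled.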

Keywords: PVWPS (PV Water Pumping System), maximum power point tracking (MPPT), open circuit method (OC), boost converter, DC/AC inverter

Procedia PDF Downloads 435
4692 Application of Artificial Neural Network in Initiating Cleaning of Photovoltaic Solar Panels

Authors: Mohamed Mokhtar, Mostafa F. Shaaban

Abstract:

Among the challenges facing solar photovoltaic (PV) systems in the United Arab Emirates (UAE), dust accumulation on solar panels is considered the most severe problem facing the growth of solar power plants. The accumulation of dust significantly degrades the output from these panels; hence, solar PV panels have to be cleaned manually or using costly automated cleaning methods. This paper focuses on initiating cleaning actions only when required, to reduce maintenance costs. The cleaning actions are triggered only when the dust level exceeds a threshold value. The amount of dust accumulated on the PV panels is estimated using an artificial neural network (ANN). Experiments are conducted to collect the required data, which are used in the training of the ANN model. The trained ANN model is then fed the output power from the solar panels, the ambient temperature, and the solar irradiance, from which it estimates the amount of dust accumulated on the panels under these conditions. The model was tested on different case studies to confirm its accuracy.
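The trigger logic can be sketched as follows; the tiny hand-set network is only a stand-in for the trained ANN, and the weights, normalisations, and threshold are all illustrative assumptions:

```python
# Sketch of the cleaning trigger: an estimator maps (power, temperature,
# irradiance) to a dust level in [0, 1]; cleaning fires above a threshold.
import math

W1 = [-2.0, 0.1, 0.9]   # illustrative weights, not trained values
B1 = 0.2

def estimate_dust(power_kw, temp_c, irrad_kw_m2):
    """Single-neuron stand-in for the trained ANN dust estimator."""
    x = (power_kw / 10.0, temp_c / 50.0, irrad_kw_m2)  # crude normalisation
    h = sum(w * xi for w, xi in zip(W1, x)) + B1
    return 1.0 / (1.0 + math.exp(-h))                  # dust level in [0, 1]

def needs_cleaning(power_kw, temp_c, irrad_kw_m2, threshold=0.6):
    return estimate_dust(power_kw, temp_c, irrad_kw_m2) > threshold

# High irradiance but low output suggests heavy soiling -> trigger.
print(needs_cleaning(2.0, 35.0, 1.0))   # True
print(needs_cleaning(9.0, 35.0, 1.0))   # False
```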

Keywords: machine learning, dust, PV panels, renewable energy

Procedia PDF Downloads 127
4691 Numerical Response of Planar HPGe Detector for 241Am Contamination of Various Shapes

Authors: M. Manohari, Himanshu Gupta, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Injection is one of the potential routes of intake in a radioactive facility. The internal dose due to this intake is monitored at the Radiation Emergency Medical Centre, IGCAR, using a portable planar HPGe detector. A contaminated wound may have different shapes, and in a reprocessing facility the potential for wound contamination with actinides is higher. Detection efficiency is one of the input parameters for the estimation of internal dose, and estimating these efficiencies experimentally would be tedious and cumbersome; numerical estimation can supplement experiment. As an initial step, this study considers 241Am contamination of different shapes. The portable planar HPGe detector was modeled using the Monte Carlo code FLUKA, and the effects of parameters such as the distance of the contamination from the detector and the radius of a circular contamination were studied. Efficiency values for point and surface contamination located at different distances were estimated. The effect of the surface source radius on efficiency was more pronounced when the source was at 1 cm than at 10 cm from the detector: at 1 cm the efficiency decreased quadratically as the radius increased, while at 10 cm it decreased linearly. The point source efficiency varied exponentially with source-to-detector distance.
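The reported trends (quadratic efficiency decrease with radius at 1 cm, exponential decrease with distance for a point source) are straightforward to verify by curve fitting once efficiency values are available; the data points below are synthetic illustrations, not the study's FLUKA results:

```python
# Sketch of fitting the two reported efficiency trends on synthetic data.
import numpy as np

# Point-source efficiency vs distance: fit log(eff) = a + b*d (exponential).
dist = np.array([1.0, 2.0, 5.0, 10.0])            # source-detector distance, cm
eff = 0.12 * np.exp(-0.25 * dist)                 # synthetic exponential data
b, a = np.polyfit(dist, np.log(eff), 1)
print(round(b, 3), round(np.exp(a), 3))           # recovered rate and scale

# Surface-source efficiency vs radius at 1 cm: quadratic fit.
radius = np.array([0.0, 0.5, 1.0, 1.5, 2.0])      # source radius, cm
eff_r = 0.10 - 0.015 * radius ** 2                # synthetic quadratic decrease
c2 = np.polyfit(radius, eff_r, 2)[0]              # leading (quadratic) coefficient
print(round(c2, 3))
```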

Keywords: planar HPGe, efficiency value, injection, surface source

Procedia PDF Downloads 25
4690 Humanity's Still Sub-Quantum Core-Self Intelligence

Authors: Andrew Shugyo Daijo Bonnici

Abstract:

Core-Self Intelligence (CSI) is an absolutely still, non-verbal, non-cerebral intelligence. Our still core-self intelligence is felt at our body's center point of gravity, just an inch below our navel, deep within our lower abdomen. The still sub-quantum depth of core-Self remains untouched by the conditioning influences of family, society, culture, religion, and spiritual views that shape our personalities and ego-self identities. As core-Self intelligence is inborn and unconditioned, it exists within all human beings regardless of age, race, color, creed, mental acuity, or national origin. Our core-self intelligence functions as a wise and compassionate guide that advances our health and well-being, our mental clarity and emotional resiliency, our fearless peace and behavioral wisdom, and our ever-deepening compassion for self and others. Although our core-Self, with its absolutely still non-judgmental intelligence, operates far beneath the functioning of our ego-self identity and our thinking mind, it effectively coexists with our passing thoughts, all of our figuring and thinking, our logical and rational way of knowing, the ebb and flow of our feelings, and the natural or triggered emergence of our emotions. When we allow our whole inner somatic awareness to gently sink into the intelligent center point of gravity within our lower abdomen, the felt arising of our core- Self’s inborn stillness has a serene and relaxing effect on our ego-self and thinking mind. It naturally slows down the speedy passage of our involuntary thoughts, diminishes our ego-self's defensive and reactive functioning, and decreases narcissistic reflections on I, me, and mine. All of these healthy cognitive benefits advance our innate wisdom and compassion, facilitate our personal and interpersonal growth, and liberate the ever-fresh wonder and curiosity of our beginner's heartmind. 
In conclusion, by studying, exploring, and researching our core-Self intelligence, psychologists and psychotherapists can unlock new avenues for advancing the farther reaches of our mental, emotional, and spiritual health and well-being, our innate behavioral wisdom and boundless empathy, our lucid compassion for self and others, and our unwavering confidence in the still guiding light of our core-Self that exists at the abdominal center point of all human beings.

Keywords: intelligence, transpersonal, beginner’s heartmind, compassionate wisdom

Procedia PDF Downloads 48
4689 Digital Game Fostering Spatial Abilities for Children with Special Needs

Authors: Pedro Barros, Ana Breda, Eugenio Rocha, M. Isabel Santos

Abstract:

As visual and spatial awareness develops, children's apprehension of the concepts of direction, (relative) distance, and (relative) location materializes. Here we present ORIESPA, an educational inclusive digital game under development by the Thematic Line Geometrix for children aged between 6 and 10 years old, aiming at the improvement of their visual and spatial awareness. Visual-spatial abilities are of crucial importance for success in many everyday life tasks. Unavoidable in the technological age we live in, they are essential in many fields of study, for instance, mathematics. The game, set in a 2D/3D environment, focuses on tasks/challenges in the following categories: (1) static orientation of the subject and object, requiring an understanding of the notions of up-down, left-right, front-back, higher-lower, or nearer-farther; (2) interpretation of perspectives of three-dimensional objects, requiring the understanding of 2D and 3D representations of three-dimensional objects; and (3) orientation of the subject in real space, requiring the reading and interpretation of itineraries. In ORIESPA, simpler tasks are based on a quadrangular grid, where the front-back and left-right directions and the rotations of 90º, 180º, and 270º are the main requirements. The more complex ones are set on a cubic grid, adding up and down movements. At the first levels, the game mechanics for reading and interpreting maps (from point A to point B) are based on map routes following a given set of instructions. At higher levels, the player must produce a list of instructions taking the game character to the desired destination while avoiding obstacles. Being an inclusive game, ORIESPA lets the user interact through the mouse (point and click with a single button), the keyboard (a small set of well-recognized keys), or a Kinect device (using simple gesture moves). Character control requires acting on buttons corresponding to movements in the 2D and 3D environments. Buttons and instructions are also complemented with text, sound, and sign language.
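The higher-level gameplay described above, where the player composes a list of instructions that steers the character from point A to point B while avoiding obstacles, can be sketched as a small interpreter. The grid size, instruction names, and collision rule below are illustrative assumptions, not the game's actual implementation:

```python
def run_route(start, heading, instructions, obstacles, grid_size):
    """Execute a list of instructions; return the final position,
    or None if the character hits an obstacle or leaves the grid."""
    # Headings as (dx, dy); "right" rotates 90 degrees clockwise.
    headings = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    x, y = start
    h = heading
    for step in instructions:
        if step == "left":
            h = (h - 1) % 4
        elif step == "right":
            h = (h + 1) % 4
        elif step == "forward":
            dx, dy = headings[h]
            x, y = x + dx, y + dy
            if (x, y) in obstacles or not (0 <= x < grid_size and 0 <= y < grid_size):
                return None  # collision: the route is invalid
    return (x, y)

# A route from (0, 0) facing north to (2, 2), skirting an obstacle at (1, 1):
route = ["forward", "forward", "right", "forward", "forward"]
print(run_route((0, 0), 0, route, {(1, 1)}, 5))  # → (2, 2)
```

A level then amounts to checking whether the player's instruction list reaches the goal cell without returning None.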

Keywords: digital game, inclusion, itinerary, spatial ability

Procedia PDF Downloads 164
4688 Geomechanical Technologies for Assessing Three-Dimensional Stability of Underground Excavations Utilizing Remote-Sensing, Finite Element Analysis, and Scientific Visualization

Authors: Kwang Chun, John Kemeny

Abstract:

Light detection and ranging (LiDAR) has become a prevalent remote-sensing technology in the geological fields due to its high precision and ease of use. One of its major applications is to use the detailed geometrical information of underground structures as the basis for generating a three-dimensional numerical model that can be used in a geotechnical stability analysis such as FEM or DEM. To date, however, straightforward techniques for reconstructing the numerical model from scanned data of underground structures have not been well established or tested. In this paper, we propose a comprehensive approach integrating all the various processes, from LiDAR scanning to finite element numerical analysis. The study focuses on converting LiDAR 3D point clouds of geologic structures containing complex surface geometries into a finite element model. This methodology has been applied to Kartchner Caverns in Arizona, where detailed underground and surface point clouds can be used for the analysis of underground stability. Numerical simulations were performed using the finite element code Abaqus and visualized with the 3D scientific visualization tool ParaView. The results are useful in studying the stability of all types of underground excavations, including underground mining and tunneling.
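Raw LiDAR point clouds are typically decimated before meshing and FEM export. As a minimal, library-free illustration of one common preprocessing step (not the authors' pipeline), the sketch below performs voxel-grid downsampling, replacing all points falling in each voxel by their centroid:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Bucket (x, y, z) points into cubic voxels of side voxel_size
    and return one centroid per occupied voxel."""
    bins = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        bins[key].append((x, y, z))
    centroids = []
    for pts in bins.values():
        n = len(pts)
        centroids.append((sum(p[0] for p in pts) / n,
                          sum(p[1] for p in pts) / n,
                          sum(p[2] for p in pts) / n))
    return centroids

# Two nearby points collapse into one centroid; the distant point survives:
cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.5, 1.5, 1.5)]
print(len(voxel_downsample(cloud, 1.0)))  # → 2
```

Dedicated tools perform this and the subsequent surface reconstruction far more robustly; the sketch only shows the idea of reducing millions of scan points to a tractable set before building the finite element mesh.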

Keywords: finite element analysis, LiDAR, remote-sensing, scientific visualization, underground stability

Procedia PDF Downloads 150
4687 Point-Mutation in a Rationally Engineered Esterase Inverts its Enantioselectivity

Authors: Yasser Gaber, Mohamed Ismail, Serena Bisagni, Mohamad Takwa, Rajni Hatti-Kaul

Abstract:

Enzymes are safe and selective catalysts. They skillfully catalyze chemical reactions; however, the native form is not usually suitable for industrial applications. Enzymes are therefore engineered by several techniques to meet the required catalytic task. Clopidogrel was among the five best-selling pharmaceuticals in 2010 under the brand name Plavix. The commonly used route for producing the drug on an industrial scale is synthesis of the racemic mixture followed by diastereomeric resolution to obtain the pure S isomer. The process consumes a lot of solvents and chemicals. We have evaluated a cleaner biocatalytic approach for the asymmetric hydrolysis of racemic clopidogrel. Initial screening of a selected number of hydrolases showed only one enzyme, EST, to exhibit activity and selectivity towards the desired stereoisomer. As crude EST is a mixture of several isoenzymes, a homology model of EST-1 was used in molecular dynamics simulations to study the interaction of the enzyme with the R and S isomers of clopidogrel. Analysis of the geometric hindrances of the tetrahedral intermediates revealed a potential site for mutagenesis to improve the activity and selectivity. A single point mutation produced a dramatic increase in activity and an inversion of the enantioselectivity (a 400-fold change in E value).
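The E value quoted above is the enantiomeric ratio, conventionally computed from conversion and enantiomeric excess using the Chen et al. relation for irreversible kinetic resolutions. A sketch with illustrative numbers (not the paper's data):

```python
import math

def e_value(c, ee_p):
    """Enantiomeric ratio E from fractional conversion c and product
    enantiomeric excess ee_p (Chen et al. relation, irreversible case)."""
    return math.log(1 - c * (1 + ee_p)) / math.log(1 - c * (1 - ee_p))

# E.g., 40% conversion with 90% ee of product corresponds to a moderate E:
print(round(e_value(0.40, 0.90), 1))  # → 35.0
```

A 400-fold change in E, as reported for the engineered esterase, thus corresponds to a drastic shift in which enantiomer is hydrolyzed preferentially.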

Keywords: biocatalysis, biotechnology, enzyme, protein engineering, molecular modeling

Procedia PDF Downloads 427
4686 Objective vs. Perceived Quality in the Cereal Industry

Authors: Albena Ivanova, Jill Kurp, Austin Hampe

Abstract:

Cereal products in the US carry rich information on the front of the package (FOP) as well as point-of-purchase (POP) summaries provided by the store. These summaries are frequently confusing and misleading to the consumer. This study explores the relationship between perceived quality, objective quality, price, and value in the cold cereal industry. A total of 270 cold cereal products were analyzed, and the price, quality, and value for different summaries were compared using ANOVA tests. The results provide evidence that the United States Department of Agriculture Organic FOP/POP claims are related to higher objective quality and higher price, but not to higher value. Whole-grain FOP/POP claims are related to higher objective quality, lower or similar price, and higher value. Heart-healthy POP claims are related to higher objective quality, similar price, and higher value. Gluten-free FOP/POP claims are related to lower objective quality, higher price, and lower value. Kids' cereals were of lower objective quality, the same price, and lower value compared to those for the family and adult markets. The findings point to a disturbing tendency of companies to continue to produce lower-quality products for the kids' market while pricing them the same as high-quality products. The paper outlines strategies that marketers and policymakers can use to contribute to increased objective quality and value of breakfast cereal products in the United States.
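The ANOVA comparisons reported above reduce to a standard one-way F statistic (between-group over within-group mean square). A dependency-free sketch with made-up price data, not the study's 270 products:

```python
def one_way_f(groups):
    """One-way ANOVA F statistic for several groups of observations."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical prices (USD) for organic vs. conventional cereals:
organic = [4.99, 5.49, 5.19, 5.89]
conventional = [3.49, 3.99, 3.59, 3.79]
print(one_way_f([organic, conventional]) > 1)  # → True (clearly separated groups)
```

A large F relative to the critical value of the F distribution for (k-1, n-k) degrees of freedom indicates a significant price difference between label categories.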

Keywords: cereals, certifications, front-of-package claims, consumer health.

Procedia PDF Downloads 110
4685 Conscious Intention-based Processes Impact the Neural Activities Prior to Voluntary Action on Reinforcement Learning Schedules

Authors: Xiaosheng Chen, Jingjing Chen, Phil Reed, Dan Zhang

Abstract:

Conscious intention can be a promising entry point for grasping consciousness and orienting voluntary action. The current study adopted random ratio (RR) and yoked random interval (RI) reinforcement learning schedules instead of the highly repeatable, single-decision-point paradigms used previously, aiming to induce voluntary action with a conscious intention that evolves from the interaction between short-range and long-range intention. Readiness potential (RP)-like EEG amplitude and inter-trial EEG variability decreased significantly prior to voluntary action compared to cued action; for inter-trial EEG variability, the effect was mainly featured during the earlier stage of neural activities. Notably, RP-like EEG amplitudes decreased significantly prior to responses under higher RI reward rates, in which participants formed a higher plane of conscious intention. The present study suggests a possible contribution of conscious intention-based processes to neural activities from the earlier stage prior to voluntary action on reinforcement learning schedules.
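The two quantities compared above, RP-like amplitude and inter-trial variability, correspond respectively to the per-timepoint mean and standard deviation computed across trial epochs. A schematic computation with toy epochs (not the study's data or preprocessing):

```python
from statistics import mean, stdev

def grand_average(trials):
    """Per-timepoint mean across trials: the RP-like averaged waveform."""
    return [mean(samples) for samples in zip(*trials)]

def intertrial_variability(trials):
    """Per-timepoint standard deviation across trials."""
    return [stdev(samples) for samples in zip(*trials)]

# Three toy single-trial epochs (same length, arbitrary units), showing a
# slow negative drift toward movement onset, as in an RP:
trials = [[0.0, -1.0, -2.5], [0.2, -0.8, -2.1], [-0.2, -1.2, -2.9]]
print(grand_average(trials))
print(intertrial_variability(trials))
```

Real analyses would baseline-correct, filter, and time-lock the epochs to movement onset first; the sketch only shows how the two measures relate.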

Keywords: reinforcement learning schedule, voluntary action, EEG, conscious intention, readiness potential

Procedia PDF Downloads 61
4684 Preparation and Properties of PP/EPDM Reinforced with Graphene

Authors: M. Haghnegahdar, G. Naderi, M. H. R. Ghoreishy

Abstract:

Polypropylene (PP)/ethylene propylene diene monomer (EPDM) samples (80/20) containing 0, 0.5, 1, 1.5, 2, 2.5, and 3 (mass fraction, %) graphene were prepared using the melt compounding method to investigate the microstructure, mechanical properties, and thermal stability as well as the electrical resistance of the samples. X-ray diffraction data confirmed that the graphene platelets are well dispersed in PP/EPDM. Mechanical properties such as tensile strength, impact strength, and hardness showed an increasing trend with graphene loading, which exemplifies the substantial reinforcing nature of this kind of nanofiller and its good interaction with the polymer chains. At the same time, it was found that the thermo-oxidative degradation of the PP/EPDM nanocomposites is noticeably retarded with increasing graphene content. The electrical surface resistivity of the nanocomposite changed dramatically on crossing the electrical percolation threshold, shifting the electrical behavior from insulator to semiconductor. Furthermore, these results were confirmed by scanning electron microscopy (SEM), dynamic mechanical thermal analysis (DMTA), and transmission electron microscopy (TEM).
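The insulator-to-semiconductor transition mentioned above is commonly described by the percolation power law σ = σ₀(φ − φ_c)^t for filler fractions φ above the threshold φ_c. A log-log least-squares fit of that law, shown here on synthetic data rather than the paper's measurements:

```python
import math

def fit_percolation(phis, sigmas, phi_c):
    """Fit log(sigma) = log(sigma0) + t*log(phi - phi_c) by least squares.
    Returns (sigma0, t); phi_c is assumed known for this sketch."""
    xs = [math.log(p - phi_c) for p in phis]
    ys = [math.log(s) for s in sigmas]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    t = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - t * mx), t

# Synthetic conductivity data generated with sigma0 = 2.0, t = 1.8, phi_c = 0.01:
phis = [0.015, 0.02, 0.03, 0.05]
sigmas = [2.0 * (p - 0.01) ** 1.8 for p in phis]
sigma0, t = fit_percolation(phis, sigmas, 0.01)
```

In practice, φ_c itself is unknown and is found by scanning candidate values for the best linear fit; the exponent t then indicates whether the conductive network is effectively two- or three-dimensional.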

Keywords: nanocomposite, graphene, microstructure, mechanical properties

Procedia PDF Downloads 315
4683 Implication of Multi-Walled Carbon Nanotubes on Polymer/MXene Nanocomposites

Authors: Mathias Aakyiir, Qunhui Zheng, Sherif Araby, Jun Ma

Abstract:

MXene nanosheets stack in polymer matrices, while multi-walled carbon nanotubes (MWCNTs) entangle themselves when used to form composites. These challenges are addressed in this work by forming MXene/MWCNT hybrid nanofillers via electrostatic self-assembly and developing elastomer/MXene/MWCNT nanocomposites using a latex compounding method. In the three-phase nanocomposite, MWCNTs serve as bridges between MXene nanosheets, leading to nanocomposites with well-dispersed nanofillers. The high aspect ratio of the MWCNTs and the interconnecting role of MXene give the hybrid-filler nanocomposites a lower electrical-conductivity percolation threshold than the two-phase composites containing either MXene or MWCNTs only. This study focuses on discussing in detail the interfacial interaction between the nanofillers and the elastomer matrix, and the outstanding mechanical and functional properties of the resulting nanocomposites. The developed nanocomposites have potential applications in the automotive and aerospace industries.

Keywords: elastomers, multi-walled carbon nanotubes, MXenes, nanocomposites

Procedia PDF Downloads 147
4682 Evaluation of the Rheological Properties of Bituminous Binders Modified with Biochars Obtained from Various Biomasses by Pyrolysis Method

Authors: Muhammed Ertuğrul Çeloğlu, Mehmet Yılmaz

Abstract:

In this study, apricot seed shell, walnut shell, and sawdust were chosen as biomass sources. The materials were sieved using a No. 50 sieve, and the sieved materials were subjected to pyrolysis at 400 °C, resulting in three different biochar products. The resulting biochar products were added to bitumen at three different rates (5%, 10%, and 15%), producing modified bitumens. Penetration, softening point, rotational viscometer, and dynamic shear rheometer (DSR) tests were conducted on the modified binders. Thus, the modified bitumens obtained with the three additive rates of biochar produced at 400 °C from the three biomass sources were compared, and the effects of biomass source and additive rate were evaluated. As a result of the tests, it was determined that the rheology of the pure bitumen improved significantly upon modification with the biochar. Additionally, with the biochar additive, the rutting parameter values obtained from the softening point, viscometer, and DSR tests increased, while the penetration and phase angle values decreased. It was also observed that the most effective biomass was sawdust, while the least effective was the ground apricot seed shell.
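The rutting parameter referred to above is the Superpave quantity G*/sin δ from the DSR test, where higher values indicate better rutting resistance; a lower phase angle δ (a more elastic binder) raises it, consistent with the trend reported. A trivial sketch with illustrative numbers, not the study's measurements:

```python
import math

def rutting_parameter(g_star_kpa, delta_deg):
    """Superpave rutting parameter G*/sin(delta).
    g_star_kpa: complex shear modulus |G*| in kPa; delta_deg: phase angle in degrees."""
    return g_star_kpa / math.sin(math.radians(delta_deg))

# Same |G*|, but the binder with the lower phase angle scores higher:
print(rutting_parameter(1.2, 85.0) < rutting_parameter(1.2, 75.0))  # → True
```

Binder specifications set minimum G*/sin δ values (e.g., for unaged binder at the high pavement design temperature), which is why both the modulus increase and the phase-angle decrease reported above improve the rutting rating.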

Keywords: rheology, biomass, pyrolysis, biochar

Procedia PDF Downloads 158
4681 Globally Attractive Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type

Authors: Jorge Gonzalez Camus, Carlos Lizama

Abstract:

In this work, the existence of at least one globally attractive mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type involving the fractional derivative in the Caputo sense. An almost sectorial operator on a Banach space X and a kernel belonging to a large class appear in the equation, which covers many relevant cases from physics applications, in particular, the important case of time-fractional evolution equations of neutral type. The main tools used in this work were the Hausdorff measure of noncompactness and fixed point theorems, specifically of Darbo type. Initially, the equation is posed as a Cauchy problem involving a fractional derivative in the Caputo sense. The equivalent integral version is then formulated and, by defining a convenient functional using the analytic integral resolvent operator and verifying the hypotheses of the Darbo-type fixed point theorem, the existence of a mild solution to the initial problem is obtained. Furthermore, each mild solution is globally attractive, a property that is desirable in the asymptotic behavior of such solutions.
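For reference, the Caputo fractional derivative of order α ∈ (0, 1) that appears in such problems is conventionally defined as (a standard definition, not taken from the paper itself):

```latex
{}^{C}\!D_t^{\alpha} u(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, u'(s)\, ds,
\qquad 0 < \alpha < 1 .
```

Unlike the Riemann-Liouville derivative, the Caputo form acts on u′, so the Cauchy problem can be posed with ordinary initial conditions u(0) = u₀, which is what makes the equivalent integral (mild-solution) formulation used above natural.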

Keywords: attractive mild solutions, integral Volterra equations, neutral type equations, non-local in time equations

Procedia PDF Downloads 137
4680 GPRS Based Automatic Metering System

Authors: Constant Akama, Frank Kulor, Frederick Agyemang

Abstract:

All over the world, due to increasing population, electric power distribution companies are looking for more efficient ways of reading electricity meters. In Ghana, the prepaid metering system was introduced in 2007 to replace the manual reading system, which was fraught with inefficiencies. However, the prepaid system in Ghana is not capable of integration with online systems such as e-commerce platforms and remote monitoring systems. In this paper, we present a design framework for an automatic metering system that can be integrated with e-commerce platforms and remote monitoring systems. The meter was designed using the ADE7755 energy-metering IC, which reads the energy consumption; the reading is processed by a microcontroller connected to a SIM900 General Packet Radio Service (GPRS) module containing a GSM chip provisioned with an Access Point Name. The system also has a billing server and a management server located at the premises of the utility company, which communicate with the meter over a Virtual Private Network and GPRS. With this system, customers can buy credit online, and the credit is transferred securely to the meter. Also, when a fault is reported, the utility company can log into the meter remotely through the management server to troubleshoot the problem.

Keywords: access point name, general packet radio service, GSM, virtual private network

Procedia PDF Downloads 288
4679 Prospective Future of Frame Fire Tests

Authors: Chung-Hao Wu, Tung-Dju Lin, Ming-Chin Ho, Minehiro Nishiyama

Abstract:

This paper discusses reported fire tests of concrete beams and columns, future fire tests of beam/column frames, and an innovative concept for designing a beam/column furnace. The proposed furnace could be designed to maximize the efficiency of fire test procedures and minimize the cost of furnace construction and fuel consumption. The ASTM E119 and ISO 834 standards were drafted based on prescriptive codes and have several weaknesses. The first involves a provision allowing the support regions of a test element to be protected from fire exposure. The second deals with the L/30 deflection end point, adopted instead of the structural end point (collapse) in order to protect the hydraulic rams from fire damage. Furthermore, designers commonly use the measured fire endurances of interior columns to assess the fire ratings of edge and corner columns of the same building. The validity of such an engineering practice is theoretically unsound. Performance-Based Codes (PBC) require verification tests of structural frames, including the beam/column joints, to overcome these weaknesses, but allow the use of element test data as reference only. In the last 30 years, PBC have gained global popularity because of their innovative design and flexibility in achieving an ultimate performance goal.

Keywords: fire resistance, concrete structure, beam/column frame, fire tests

Procedia PDF Downloads 315
4678 Semi-Automatic Method to Assist Expert for Association Rules Validation

Authors: Amdouni Hamida, Gammoudi Mohamed Mohsen

Abstract:

In order to help the expert validate association rules extracted from data, several quality measures have been proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the quality of the data from which the rules are extracted. The second consists of providing the expert with tools to explore and visualize rules during the evaluation step. However, the number of extracted rules to validate remains high, so manually reviewing the rules is a very hard task. To solve this problem, we propose, in this paper, a semi-automatic method to assist the expert during association rule validation. Our method uses rule-based classification as follows: (i) we transform association rules into classification rules (classifiers); (ii) we use the generated classifiers for data classification; (iii) we visualize the association rules together with their classification quality to give the expert an overview and to assist him during the validation process.
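Step (i) and (ii) above can be sketched as follows: a rule whose consequent is a class label fires on a transaction when its antecedent itemset is contained in the transaction, and rules are tried in decreasing order of confidence. The rule format and tie-breaking policy here are illustrative assumptions, not the paper's exact method:

```python
def to_classifier(rules):
    """Turn association rules (antecedent itemset -> class label, with a
    confidence score) into a first-match rule-based classifier."""
    ordered = sorted(rules, key=lambda r: r["confidence"], reverse=True)

    def classify(transaction):
        for rule in ordered:
            if rule["antecedent"] <= transaction:  # antecedent is a subset
                return rule["label"]
        return None  # no rule fires: left for the expert to inspect

    return classify

rules = [
    {"antecedent": {"bread", "butter"}, "label": "buys_milk", "confidence": 0.9},
    {"antecedent": {"beer"}, "label": "buys_chips", "confidence": 0.7},
]
classify = to_classifier(rules)
print(classify({"bread", "butter", "eggs"}))  # → buys_milk
```

Classifying a held-out dataset this way yields per-rule accuracy figures, which is the "classification quality" that step (iii) visualizes for the expert.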

Keywords: association rules, rule-based classification, classification quality, validation

Procedia PDF Downloads 418
4677 A Deletion-Cost Based Fast Compression Algorithm for Linear Vector Data

Authors: Qiuxiao Chen, Yan Hou, Ning Wu

Abstract:

As the classic Douglas-Peucker Algorithm (DPA) has deficiencies such as a high risk of deleting key nodes by mistake, high complexity, time consumption, and relatively slow execution speed, a new Deletion-Cost Based Compression Algorithm (DCA) for linear vector data was proposed. For each curve (the basic element of linear vector data), the deletion costs of all its middle nodes were calculated, and the minimum deletion cost was compared with a pre-defined threshold. If the former was greater than or equal to the latter, all remaining nodes were retained and the curve's compression was finished. Otherwise, the node with the minimal deletion cost was deleted, the deletion costs of its two neighbors were updated, and the same loop was repeated on the compressed curve until termination. Through several comparative experiments using different types of linear vector data, DPA and DCA were compared in terms of compression quality and computing efficiency. The experimental results showed that DCA outperformed DPA in both compression accuracy and execution efficiency.
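The loop described above can be sketched compactly. The deletion cost used here is the area of the triangle a node forms with its two neighbors (a Visvalingam-Whyatt-style cost chosen for illustration; the paper's exact cost function may differ), and for brevity all costs are recomputed each pass instead of updating only the two neighbors:

```python
def tri_area(a, b, c):
    """Candidate deletion cost of node b: area of the triangle (a, b, c)."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2

def dca_simplify(curve, threshold):
    """Repeatedly delete the cheapest middle node until the minimum
    deletion cost reaches the threshold (DCA-style loop)."""
    pts = list(curve)
    while len(pts) > 2:
        costs = [tri_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        min_cost = min(costs)
        if min_cost >= threshold:
            break  # every remaining node is significant; stop compressing
        del pts[costs.index(min_cost) + 1]  # drop the cheapest node
    return pts

# Collinear middle nodes are all cheap, so the curve collapses to its endpoints:
print(dca_simplify([(0, 0), (1, 0), (2, 0), (3, 0)], 0.5))  # → [(0, 0), (3, 0)]
```

A near-flat jitter node is removed while a genuine spike survives, which is exactly the "key node" behavior the cost-based termination is meant to protect, in contrast to DPA's distance-to-chord recursion.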

Keywords: Douglas-Peucker algorithm, linear vector data, compression, deletion cost

Procedia PDF Downloads 228