Search results for: statistics of sales of small wind turbines in the United States
2019 Analyzing the Ancient Islamic Architectural Theories: Role of Geometric Proportionality as a Principle of Islamic Design
Authors: Vamsi G.
Abstract:
The majority of modern-day structures have little aesthetic value, meeting only the minimum requirements set by foreign tribes. Numerous elements of traditional architecture can be incorporated into modern designs using appropriate principles to improve the functionality, aesthetics, and usability of any space. This paper reviews the diminishing ancient values of traditional Islamic architecture. By introducing them into modern-day structures such as commercial, residential, and recreational spaces, at least in the Islamic states, the functionality of those spaces can be improved. To this end, aspects such as space planning, aesthetics, scale, hierarchy, value, and patterns are to be tested in modern-day structures. Case studies of a few ancient Islamic architectural marvels are presented to elaborate the approach. A brief analysis of materials and execution strategies is also part of this paper. The analysis is formulated so that spaces can be designed or redesigned using traditional Islamic principles and elements of design, improving the quality of modern-day architecture through the study of ancient Islamic architectural theories. For this, sources on the history and evolution of this architecture have been studied, and elements and principles of design from case studies of various mosques, forts, tombs, and palaces have been tabulated. The accumulated data will help revive the elements shaped by ancient principles in functional and aesthetic ways. In this way, one of the most astonishing architectural styles can be conserved, reinstated in modern-day buildings, and remembered.
Keywords: ancient architecture, architectural history, Islamic architecture, principles and elements
Procedia PDF Downloads 213
2018 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning
Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga
Abstract:
Architects and urban planners, when designing and renewing cities, face a complex set of problems, including noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations); the same process can be seen in industrial plants as well. What has to be investigated, and is the topic of this paper, is whether there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to obtain input for both weekdays and weekend days; in this way, it will be possible to see how the situation changes during the week.
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to fit a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions for the area under analysis, because it makes it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter
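As an illustration of the comparison step described above, the sketch below rescales a noise series and an NO₂ series to a common 0–100% scale and computes their Pearson correlation. The readings and units are hypothetical, not data from the study.

```python
import statistics

def to_percent_scale(values):
    """Rescale raw sensor readings to a 0-100 % scale (peak = 100 %)."""
    peak = max(values)
    return [100.0 * v / peak for v in values]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical hourly readings from one coupled sensor pair
noise_db = [55, 62, 70, 68, 58, 52]   # OpeNoise, dB(A)
no2_ugm3 = [18, 30, 45, 41, 22, 15]   # AirMonitor, ug/m3

noise_pct = to_percent_scale(noise_db)
no2_pct = to_percent_scale(no2_ugm3)
r = pearson(noise_pct, no2_pct)       # strength of the co-variation
```

Note that Pearson correlation is scale-invariant, so the percentage conversion matters for the maps and graphs rather than for the correlation value itself.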
Procedia PDF Downloads 212
2017 Fault Tolerant Control System Using a Multiple Time Scale SMC Technique and a Geometric Approach
Authors: Ghodbane Azeddine, Saad Maarouf, Boland Jean-Francois, Thibeault Claude
Abstract:
This paper proposes a new design of an active fault-tolerant flight control system against abrupt actuator faults. The overall system combines a multiple time scale sliding mode controller for fault compensation and a geometric approach for fault detection and diagnosis. The proposed control system is able to accommodate several kinds of partial and total actuator failures by using the available healthy redundant actuators. The overall system first estimates the correct fault information using the geometric approach. Based on that, a new reconfigurable control law is then designed using the multiple time scale sliding mode technique to compensate on-line for the effect of such faults. This approach takes advantage of the fact that there is a significant difference between the time scales of aircraft states with slow dynamics and those with fast dynamics. The closed-loop stability of the overall system is proved using the Lyapunov technique. A case study of the non-linear model of the F-16 fighter, subject to total loss of rudder control, confirms the effectiveness of the proposed approach.
Keywords: actuator faults, fault detection and diagnosis, fault tolerant flight control, sliding mode control, multiple time scale approximation, geometric approach for fault reconstruction, Lyapunov stability
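To make the sliding mode idea concrete, the following minimal sketch (not the paper's multiple time scale controller) drives a scalar plant with a bounded disturbance, standing in for a fault effect, to its reference using a discontinuous switching law. The plant, gain, and disturbance bound are all assumptions for illustration.

```python
import math

# Scalar plant: x_dot = u + d(t), with |d| <= d_max.
# Sliding variable s = x - x_ref; control u = -k * sign(s) with k > d_max
# forces s toward zero despite the bounded disturbance.

def sign(v):
    return (v > 0) - (v < 0)

dt, k, d_max = 1e-3, 2.0, 1.0
x, x_ref = 1.0, 0.0                  # start with a tracking error of 1
for i in range(5000):
    t = i * dt
    d = d_max * math.sin(5 * t)      # bounded "fault" disturbance
    s = x - x_ref
    u = -k * sign(s)                 # x_ref constant, so x_ref_dot = 0
    x += (u + d) * dt                # explicit Euler integration
```

Since k exceeds the disturbance bound, the error reaches the sliding surface in finite time and then chatters in a band whose width scales with the time step.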
Procedia PDF Downloads 370
2016 Tensor Deep Stacking Neural Networks and Bilinear Mapping Based Speech Emotion Classification Using Facial Electromyography
Authors: P. S. Jagadeesh Kumar, Yang Yung, Wenli Hu
Abstract:
Speech emotion classification is a dominant research field aimed at finding a robust and fast classifier suitable for different real-life applications. This work focuses on classifying different emotions from speech signals using features related to pitch, formants, energy contours, jitter, shimmer, and spectral, perceptual, and temporal characteristics. Tensor deep stacking neural networks were employed to examine the factors that influence the classification success rate. Facial electromyography signals were collected from several subjects in a controlled atmosphere by means of audio-visual stimuli. The facial electromyography signals were pre-processed using a moving average filter, and a set of statistical features was extracted. The extracted features were mapped onto the corresponding emotions using bilinear mapping. With facial electromyography signals, a database comprising diverse emotions can be built with suitable fine-tuning of features and training data. A success rate of 92% can be attained without increasing the system complexity or the computation time for sorting diverse emotional states.
Keywords: speech emotion classification, tensor deep stacking neural networks, facial electromyography, bilinear mapping, audio-visual stimuli
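The moving average pre-processing step can be sketched as follows; the window length and the toy EMG trace are illustrative assumptions, not values from the study.

```python
def moving_average(signal, window=5):
    """Centered moving-average smoothing of the kind commonly used to
    pre-process facial EMG traces; edges use a shrunken window."""
    if window < 1 or window > len(signal):
        raise ValueError("window must be in [1, len(signal)]")
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

raw = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]   # noisy toy EMG trace
smooth = moving_average(raw, window=3)       # high-frequency jitter damped
```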
Procedia PDF Downloads 254
2015 [Keynote Speaker]: Enhancing the Performance of a Photovoltaic Module Using Different Cooling Methods
Authors: Ahmed Amine Hachicha
Abstract:
The temperature effect on the performance of a photovoltaic module is one of the main concerns facing this renewable energy technology, especially in hot arid regions, e.g., the United Arab Emirates. Overheating of PV modules reduces the open circuit voltage and the efficiency of the modules dramatically. In this work, water cooling is developed to enhance the performance of PV modules. Different scenarios are tested under UAE weather conditions: front, back, and double cooling. A spraying system is used for the front cooling, whereas a direct-contact water system is used for the back cooling. The experimental results are compared to a non-cooled module, and the performance of the PV module is determined for the different configurations. A mathematical model is presented to estimate the theoretical performance and validate the experimental results with and without cooling. The experimental results show that front cooling is more effective than back cooling and may decrease the temperature of the PV module significantly.
Keywords: PV cooling, solar energy, cooling methods, electrical efficiency, temperature effect
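The temperature effect described above is often approximated by a linear efficiency model; the sketch below uses that standard relation with illustrative crystalline-silicon coefficients (eta_ref, beta), which are assumptions rather than values measured in this work.

```python
def pv_efficiency(t_cell, eta_ref=0.18, beta=0.0045, t_ref=25.0):
    """Linear temperature model of PV electrical efficiency:
    eta = eta_ref * (1 - beta * (T_cell - T_ref)).
    eta_ref and beta are illustrative crystalline-silicon values."""
    return eta_ref * (1.0 - beta * (t_cell - t_ref))

hot = pv_efficiency(65.0)     # uncooled module on a hot day
cooled = pv_efficiency(40.0)  # with water cooling holding the cell cooler
gain = (cooled - hot) / hot   # relative efficiency gain from cooling
```

With these assumed coefficients, a 25 °C reduction in cell temperature yields a relative efficiency gain of roughly 14%, illustrating why cooling pays off in hot arid climates.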
Procedia PDF Downloads 497
2014 The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth bounds on the extremes of the system responses are obtained. The Random Set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the Random Set method in the reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables, and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined by considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations.
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
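The combination-and-bounding procedure can be sketched in miniature as follows; the two input variables, their focal intervals, the probability assignments, and the toy response function standing in for the finite element model are all hypothetical.

```python
from itertools import product

# Each input variable has two focal intervals (lo, hi) with basic
# probability assignments; all values here are illustrative.
phi = [((30.0, 35.0), 0.6), ((28.0, 38.0), 0.4)]   # friction angle, deg
E   = [((20.0, 30.0), 0.5), ((15.0, 40.0), 0.5)]   # soil modulus, MPa

def wall_displacement(phi_deg, e_mpa):
    """Toy monotone response: horizontal top displacement (mm) falls
    with stiffer, stronger soil (a stand-in for the FE model)."""
    return 2000.0 / (e_mpa * phi_deg ** 0.5)

threshold = 20.0   # mm, hypothetical serviceability limit
belief = plausibility = 0.0
for (phi_iv, m1), (e_iv, m2) in product(phi, E):
    mass = m1 * m2
    # Response interval from the 4 corner combinations of the bounds
    # (valid because the toy response is monotone in each argument)
    corners = [wall_displacement(p, e) for p in phi_iv for e in e_iv]
    lo, hi = min(corners), max(corners)
    if hi <= threshold:          # whole interval satisfies the limit
        belief += mass
    if lo <= threshold:          # interval can satisfy the limit
        plausibility += mass
```

Belief and plausibility bracket the true probability of satisfactory performance, which is the pair of lower and upper bounds the abstract refers to.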
Procedia PDF Downloads 268
2013 Theoretical Investigation of the Origin of Interfacial Ferromagnetism of (LaNiO₃)n/(CaMnO₃)m Superlattices
Authors: Jiwuer Jilili, Iogann Tolbatov, Mousumi U. Kahaly
Abstract:
The metal to insulator transition and interfacial magnetism of LaNiO₃-based superlattices are of main interest due to their thickness-dependent electronic response and tunable magnetic behavior. We investigate the structural, electronic, and magnetic properties of recently experimentally synthesized (LaNiO₃)n/(CaMnO₃)m superlattices with varying LaNiO₃ thickness using density functional theory. The effect of the on-site Coulomb interaction is discussed. On switching from zero to a finite U value for Ni atoms, LaNiO₃ shows a transition from half-metallic to metallic character, while the spin ordering changes from paramagnetic to ferromagnetic (FM). For CaMnO₃, U < 3 eV on Mn atoms results in G-type anti-FM spin ordering, whereas increasing the U value yields FM ordering. In the superlattices, a metal to insulator transition was achieved with a reduction of the LaNiO₃ thickness. The system with one layer of LaNiO₃ yields insulating character. Increasing LaNiO₃ to two layers and above results in the onset of metallic character, with a major contribution from the Ni and Mn 3d e_g states. Our results for interfacial ferromagnetism, induced Ni magnetic moments, and novel antiferromagnetically coupled Ni atoms are consistent with recent experimental findings. The possible origin of the emergent magnetism is proposed in terms of the exchange interaction and Anderson localization.
Keywords: density functional theory, interfacial magnetism, metal-insulator transition, Ni magnetism
Procedia PDF Downloads 230
2012 Explanation of Sentinel-1 Sigma 0 by Sentinel-2 Products in Terms of Crop Water Stress Monitoring
Authors: Katerina Krizova, Inigo Molina
Abstract:
The ongoing climate change affects various natural processes, resulting in significant changes in human life. Since there is a still-growing human population on the planet with more or less limited resources, agricultural production has become an issue, and a satisfactory amount of food has to be assured. To achieve this, agriculture is being studied in a very wide context. The main aim is to increase primary production per spatial unit while consuming as few resources as possible. In Europe, nowadays, the main issue comes from significant changes in the spatial and temporal distribution of precipitation. Recent growing seasons have been considerably affected by long drought periods that have led to quantitative as well as qualitative yield losses. To cope with such conditions, new techniques and technologies are being implemented in current practice. However, behind choosing the right management, there is always a set of necessary information about plot properties that needs to be acquired. Remotely sensed data have gained attention in recent decades since they provide spatial information about the studied surface based on its spectral behavior. A number of space platforms have been launched carrying various types of sensors. Spectral indices based on calculations with reflectance in the visible and NIR bands are nowadays quite commonly used to describe crop status. However, this kind of data still has a key limitation: cloudiness. The relatively frequent revisits of modern satellites cannot be fully utilized since the information is hidden under the clouds. Therefore, microwave remote sensing, which can penetrate the atmosphere, is on the rise today. The scientific literature describes the potential of radar data to estimate key soil (roughness, moisture) and vegetation (LAI, biomass, height) properties.
Although all of these are highly demanded in agricultural monitoring, crop moisture content is the most important parameter for agricultural drought monitoring. The idea behind this study was to exploit the unique combination of SAR (Sentinel-1) and optical (Sentinel-2) data from one provider (ESA) to describe potential crop water stress during the dry cropping season of 2019 at six winter wheat plots in the central Czech Republic. For the period of January to August, Sentinel-1 and Sentinel-2 images were obtained and processed. Sentinel-1 imagery carries information about the C-band backscatter in two polarisations (VV, VH). Sentinel-2 was used to derive vegetation properties (LAI, FCV, NDWI, and SAVI) in support of the Sentinel-1 results. For each term and plot, summary statistics were computed, including precipitation data and soil moisture content obtained through data loggers. Results were presented as summary layouts of the VV and VH polarisations and related plots describing the other properties. All plots behaved in accordance with the basic SAR backscatter equation. Considering the needs of practical applications, vegetation moisture content may be assessed using SAR data to predict the drought impact on final product quality and yields independently of cloud cover over the studied scene.
Keywords: precision agriculture, remote sensing, Sentinel-1, SAR, water content
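Two of the Sentinel-2 indices named above can be computed directly from band reflectances. In the sketch below the reflectance values are invented for illustration, and pairing the NIR band (B8) with the SWIR band (B11) for Gao's NDWI is one common convention.

```python
def ndwi(nir, swir):
    """Gao's NDWI from NIR and SWIR surface reflectances
    (e.g., Sentinel-2 B8 and B11); higher values = wetter vegetation."""
    return (nir - swir) / (nir + swir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil brightness factor L."""
    return (nir - red) * (1.0 + L) / (nir + red + L)

# Illustrative surface reflectances for a winter wheat canopy
moist = ndwi(nir=0.45, swir=0.20)   # well-watered canopy
dry   = ndwi(nir=0.40, swir=0.32)   # water-stressed canopy
vigor = savi(nir=0.45, red=0.08)
```

The drop in NDWI between the moist and dry cases mirrors the loss of leaf water that the SAR backscatter is also expected to respond to.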
Procedia PDF Downloads 125
2011 Analysis of Impact of Air Pollution over Megacity Delhi Due to Agricultural Biomass Burning in the Neighbouring States
Authors: Ankur P. Sati, Manju Mohan
Abstract:
Smog, the hazardous combination of smoke and pollutant gases, is harmful to health. There is strong evidence that agricultural waste burning (AWB) in Northern India leads to adverse air quality in Delhi and its surrounding regions. A severe smog episode was observed over Delhi, India during November 2012, which resulted in very low visibility and various respiratory problems. Very high pollutant values (PM10 as high as 989 µg m-3, PM2.5 as high as 585 µg m-3, and NO2 as high as 540 µg m-3) were measured all over Delhi during the smog episode. The Ultraviolet Aerosol Index (UVAI) from the Aura satellite and the Aerosol Optical Depth (AOD) are used in the present study, along with output trajectories from the HYSPLIT model and in-situ data. Satellite data also reveal that AOD and UVAI are always at their highest during the farm fire season in the Punjab region of India, and that the extent of these farm fires may be increasing. It is observed that during the smog episode, the AOD, UVAI, PM2.5, and PM10 values all surpassed those of the Diwali period (one of the most polluted events in the city) by a considerable amount at all stations across Delhi. The parameters derived from the remote sensing data and the ground-based observations at various stations across Delhi agree well on the intensity of the smog episode. The analysis clearly shows that, under adverse meteorological conditions, regional pollution can contribute more to deteriorating air quality than local sources.
Keywords: smog, farm fires, AOD, remote sensing
Procedia PDF Downloads 245
2010 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs
Authors: S. Lertsakulpasuk, S. Athichanagorn
Abstract:
As depletion-drive gas reservoirs are abandoned when the production rate becomes insufficient due to pressure depletion, waterflooding has been proposed to increase the reservoir pressure in order to prolong gas production. Due to its high cost, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a promising new approach to increase gas recovery by maintaining reservoir pressure at much lower cost than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs, commonly found in the Gulf of Thailand, was conducted to demonstrate the advantage of the proposed method and to determine the most suitable operational parameters for reservoirs having different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed in order to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after the gas production rate started to decline due to the continuous reduction in reservoir pressure. The dumpflood well was used to flow water from the aquifer to increase the pressure of the gas reservoir in order to drive gas towards the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for various systems having different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. This simulation study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters.
For systems having a large aquifer and a large distance between wells, it is best to start water dumpflood while the gas rate is still high, since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems having a small or moderate aquifer size and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better, because water is more likely to break through early when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel topic: the idea of using water dumpflood to increase gas recovery has been mentioned in the literature but has never been investigated. This detailed study will help a practicing engineer understand the benefits of the method and implement it with minimum cost and risk.
Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs
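As a hedged back-of-the-envelope for the recovery figures, the volumetric p/z material balance below gives the depletion baseline, and the study's reported incremental recovery (up to 12% of OGIP) is added on top. The p/z values themselves are invented, not taken from the simulation model.

```python
def recovery_factor(p_over_z, p_over_z_initial):
    """Volumetric depletion-drive gas material balance:
    Gp/G = 1 - (p/z) / (pi/zi)."""
    return 1.0 - p_over_z / p_over_z_initial

# Hypothetical p/z values in psia (illustrative, not from the paper)
rf_depletion = recovery_factor(500.0, 3000.0)   # recovery at abandonment p/z
# The study reports that dumpflood pressure support can add up to
# 12% of OGIP on top of the depletion baseline:
rf_with_dumpflood = min(1.0, rf_depletion + 0.12)
```

The straight-line p/z relation only holds for pure volumetric depletion; once dumpflood water influx supports the pressure, the point of the method is precisely to depart from that line and sustain rates above the economic limit for longer.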
Procedia PDF Downloads 444
2009 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Comparing the present to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in the first step by re-using current surveys conducted by different researchers within the population of U.S. citizens residing abroad during the last years: surveys questioning the personal situation in the context of tax, compliance, citizenship, and likelihood to repatriate to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error: the degree to which observable variables describe latent variables.
Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered highly correlated indicators, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
Keywords: expatriation of U.S. citizens, SEM, structural equation modeling, validating
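The indicator collinearity problem reported above can be illustrated with a small correlation screen of the kind run before fitting a measurement model; the Likert-scale item responses are entirely hypothetical stand-ins for the re-used survey data.

```python
import statistics

def corr(x, y):
    """Pearson correlation between two equal-length item vectors."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical Likert items (1-5) for a latent "compliance burden" construct
tax_burden  = [4, 5, 3, 5, 4, 2, 5, 4]
filing_cost = [4, 5, 3, 5, 4, 2, 5, 5]
bank_access = [3, 5, 3, 4, 4, 2, 5, 4]

items = {"tax": tax_burden, "cost": filing_cost, "bank": bank_access}
names = list(items)
# Pairs so collinear that their distinct loadings on the latent
# variable cannot be separated (threshold 0.9 is a common rule of thumb)
collinear = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
             if corr(items[a], items[b]) > 0.9]
```

When such pairs appear, the outer model is under-identified in practice, which is exactly why the abstract concludes that additional surveys are needed to validate it.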
Procedia PDF Downloads 221
2008 Depictions of Human Cannibalism and the Challenge They Pose to the Understanding of Animal Rights
Authors: Desmond F. Bellamy
Abstract:
Discourses about animal rights usually assume an ontological abyss between human and animal. This supposition of non-animality allows us to utilise and exploit non-humans, particularly those with commercial value, with little regard for their rights or interests. We can and do confine them, inflict painful treatments such as castration and branding, and slaughter them at an age determined only by financial considerations. This paper explores the way images and texts depicting human cannibalism reflect this deprivation of rights back onto our species and examines how this offers new perspectives on our granting or withholding of rights to farmed animals. The animals we eat – sheep, pigs, cows, chickens and a small handful of other species – are during processing de-animalised, turned into commodities, and made unrecognisable as formerly living beings. To do the same to a human requires the cannibal to enact another step – humans must first be considered as animals before they can be commodified or de-animalised. Different iterations of cannibalism in a selection of fiction and non-fiction texts will be considered: survivalism (necessitated by catastrophe or dystopian social collapse), the primitive savage of colonial discourses, and the inhuman psychopath. Each type of cannibalism shows alternative ways humans can be animalised and thereby dispossessed of both their human and animal rights. Human rights, summarised in the UN Universal Declaration of Human Rights as ‘life, liberty, and security of person’ are stubbornly denied to many humans, and are refused to virtually all farmed non-humans. How might this paradigm be transformed by seeing the animal victim replaced by an animalised human? People are fascinated as well as repulsed by cannibalism, as demonstrated by the upsurge of films on the subject in the last few decades. Cannibalism is, at its most basic, about envisaging and treating humans as objects: meat. 
It is on the dinner plate that the abyss between human and ‘animal’ is most challenged. We grasp at a conscious level that we are a species of animal and may become, if in the wrong place (e.g., shark-infested water), ‘just food’. Culturally, however, strong traditions insist that humans are much more than ‘just meat’ and deserve a better fate than torment and death. The billions of animals on death row awaiting human consumption would ask the same if they could. Depictions of cannibalism demonstrate in graphic ways that humans are animals, made of meat, and that we can also be butchered and eaten. These depictions of us as having the same fleshiness as non-human animals remind us that they have the same capacities for pain and pleasure as we do. Depictions of cannibalism, therefore, unconsciously aid in deconstructing the human/animal binary and give a unique glimpse into the often unnoticed repudiation of animal rights.
Keywords: animal rights, cannibalism, human/animal binary, objectification
Procedia PDF Downloads 138
2007 Fano-Resonance-Based Wideband Acoustic Metamaterials with Highly Efficient Ventilation
Authors: Xi-Wen Xiao, Tzy-Rong Lin, Chien-Hao Liu
Abstract:
Ventilated acoustic metamaterials have attracted considerable research attention due to their low-frequency absorption and efficient fluid ventilation. In this research, a wideband acoustic metamaterial with auditory filtering ability and efficient ventilation capacity is proposed. In contrast to a conventional Fano-like resonator, the design is composed of a resonant unit and two nonresonant units with a large opening area of 68% for fluid passage. In addition, a coupling mechanism is included to improve the narrow bandwidths of conventional Fano-resonance-based metamaterials. With a suitable design, the output sound waves of the resonant and nonresonant states are out of phase, achieving sound absorption in the far field. Three-element and five-element coupled Fano-like metamaterials were therefore designed and simulated with the help of finite element software, obtaining filtering fractional bandwidths of 42.5% and 61.8%, respectively. The proposed approach can be extended to multiple coupled resonators to obtain ultra-wide bandwidths and can be implemented with 3D printing for practical applications. The research results are expected to be beneficial for sound filtering and noise reduction in duct applications and limited-volume spaces.
Keywords: Fano resonance, noise reduction, resonant coupling, sound filtering, ventilated acoustic metamaterial
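The characteristic asymmetric response underlying such resonators can be evaluated from the standard Fano lineshape formula; the asymmetry parameter q below is an arbitrary illustrative value, not one fitted to this design.

```python
def fano_lineshape(eps, q):
    """Normalized Fano profile F = (q + eps)^2 / (1 + eps^2), where
    eps = 2*(f - f0)/gamma is the reduced detuning from resonance and
    q is the asymmetry parameter."""
    return (q + eps) ** 2 / (1.0 + eps ** 2)

q = 1.5                             # illustrative asymmetry parameter
dip = fano_lineshape(-q, q)         # exact zero of the profile at eps = -q
peak = fano_lineshape(1.0 / q, q)   # maximum, with value 1 + q^2
```

The sharp transition from the zero at eps = -q to the nearby maximum is what produces the steep filtering edge that coupled Fano-like units exploit.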
Procedia PDF Downloads 115
2006 An Assessment of Trace Heavy Metal Contamination of Some Edible Oils Regularly Marketed in Benue and Taraba States of Nigeria
Authors: Raphael Odoh, Obida J. Oko, Mary S. Dauda
Abstract:
The determination of Cd, Cr, Cu, Fe, Mn, Ni, Pb and Zn contents in edible oils (palm oil, ground-nut oil and soybean oil) bought from various markets of Benue and Taraba states was carried out with the flame atomic absorption spectrophotometric technique. Method 3031 (acid digestion of oils for metal analysis by atomic absorption or ICP spectrometry) was used in the preparation of the edible oil samples for the determination of total metal content. The overall results (µg/g) in the palm oil samples ranged from 0.028-0.076, 0.035-0.092, 1.011-1.955, 2.101-4.892, 0.666-0.922, 0.054-0.095, 0.031-0.068 and 1.987-2.971 for Cd, Cr, Cu, Fe, Mn, Ni, Pb and Zn respectively, while in ground-nut oil the overall results ranged from 0.011-0.042, 0.011-0.052, 0.133-0.788, 1.789-2.511, 0.078-0.765, 0.045-0.092, 0.011-0.028 and 1.098-1.997 for Cd, Cr, Cu, Fe, Mn, Ni, Pb and Zn respectively. Of the heavy metals considered, Cd and Ni showed the highest contamination in the soybean oil samples. The overall results in soybean oil samples ranged from 0.011-0.015, 0.017-0.032, 0.453-0.987, 1.789-2.511, 0.089-0.321, 0.011-0.016, 0.012-0.065 and 1.011-1.997 for Cd, Cr, Cu, Fe, Mn, Ni, Pb and Zn respectively. The concentration of Pb was the highest. The degree of contamination by each metal was estimated by the transfer factor. The transfer factors obtained for Cd, Cr, Cu, Fe, Mn, Ni, Pb and Zn were 10.800, 16.500, 16.000, 18.813, 15.115, 14.230, 23.000 and 9.418 respectively in palm oil, and 7.000, 12.500, 8.880, 11.333, 7.708, 10.833, 15.00 and 6.608 respectively in ground-nut oil, while for soybean oil the transfer factors were 13.000, 11.000, 7.642, 11.578, 4.486, 13.00, 12.333 and 4.412 respectively. The inter-element correlation among the metals in the edible oil samples was found using Pearson's correlation coefficient.
There were positive and negative correlations among the metals determined. All metals determined showed some degree of contamination, but at concentrations lower than the USP specifications.
Keywords: Benue State, contamination, edible oils, heavy metals, markets, Taraba State
Procedia PDF Downloads 323

2005 Analysis of Brownfield Soil Contamination Using Local Government Planning Data
Authors: Emma E. Hellawell, Susan J. Hughes
Abstract:
Brownfield sites are currently being redeveloped for residential use. Information on soil contamination on these former industrial sites is collected as part of the planning process by the local government. This research project analyses this untapped resource of environmental data, using site investigation data submitted to a local Borough Council in Surrey, UK. Over 150 site investigation reports were collected and interrogated to extract relevant information. This study involved three phases. Phase 1 was the development of a database for soil contamination information from local government reports. This database contained information on the source, history, and quality of the data, together with the chemical information on the soil that was sampled. Phase 2 involved obtaining site investigation reports for development within the study area and extracting the required information for the database. Phase 3 was the data analysis and interpretation, to evaluate typical levels of key contaminants and their distribution within the study area, and to relate these results to current guideline levels of risk for future site users. Preliminary results for a pilot study using a sample of the dataset have been obtained. This pilot study showed there is some inconsistency in the quality of the reports and measured data, and careful interpretation of the data is required. Analysis of the information has found high levels of lead in shallow soil samples, with mean and median levels exceeding the current guidance for residential use. The data also showed elevated (but below guidance) levels of potentially carcinogenic polyaromatic hydrocarbons. Of particular concern from the data was the high detection rate for asbestos fibers. These were found at low concentrations in 25% of the soil samples tested (however, the sample set was small). Contamination levels of the remaining chemicals tested were all below the guidance level for residential site use.
These preliminary pilot study results will be expanded, and results for the whole local government area will be presented at the conference. The pilot study has demonstrated the potential for this extensive dataset to provide greater information on local contamination levels. This can help inform regulators and developers, leading to more targeted site investigations, improved risk assessments, and better brownfield development.
Keywords: brownfield development, contaminated land, local government planning data, site investigation
Procedia PDF Downloads 139

2004 Causes and Effects of the 2012 Flood Disaster on Affected Communities in Nigeria
Authors: Abdulquadri Ade Bilau, Richard Ajayi Jimoh, Adejoh Amodu Adaji
Abstract:
Increasing exposure to natural hazards has continued to severely impact the built environment, causing huge fatalities and mass damage to and destruction of housing and civil infrastructure, while leaving psychosocial impacts on affected communities. The 2012 flood disaster in Nigeria, which affected over 7 million inhabitants in 30 of the 36 states, resulted in 363 recorded fatalities, with about 600,000 houses and numerous items of civil infrastructure damaged or destroyed. In Kogi State, over 500 thousand people were displaced in 9 of the 21 local government areas affected, with Ibaji and Lokoja local governments worst hit. This study identifies the causes of the 2012 flood disaster and its effects on housing and livelihood. Personal observation and a questionnaire survey were the instruments used in carrying out the study, and the data collected were analysed using descriptive statistical tools. Findings show that the 2012 flood disaster was aided by gaps in hydrological data, sudden dam failure, and inadequate drainage capacity to reduce flood risk. The study recommends that communities residing along the river banks in Lokoja and Ibaji LGAs be adequately educated on their exposure to flood hazard and on mitigation, and that risk reduction measures such as adequate drainage channels be constructed in affected communities.
Keywords: flood, hazards, housing, risk reduction, vulnerability
Procedia PDF Downloads 264

2003 Structural Performance of Composite Steel and Concrete Beams
Authors: Jakub Bartus
Abstract:
In general, composite steel and concrete structures present an effective structural solution utilizing the full potential of both materials. As they have numerous advantages on the construction side, they can greatly reduce the overall cost of construction, which has been the main objective of the last decade, highlighted by the current economic and social crisis. The study not only presents an analysis of the behaviour of composite beams with web openings but also emphasizes the influence of these openings on the total strain distribution at the level of the steel bottom flange. The major investigation focused on the change in structural performance with respect to various layouts of openings. In examining this structural modification, improving the load carrying capacity of composite beams was a prime objective. The study is divided into an analytical and a numerical part. The analytical part served as an initial step in the design process of the composite beam samples, in which optimal dimensions and specific levels of utilization in individual stress states were taken into account. The numerical part covered the description of the studied structural issue in the form of a finite element model (FEM) using strut and shell elements and accounting for material non-linearities. As an outcome, a number of conclusions were drawn describing and explaining the effect of web opening presence on the structural performance of composite beams.
Keywords: composite beam, web opening, steel flange, total strain, finite element analysis
Procedia PDF Downloads 68

2002 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict the future, or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where a specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics, such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently.
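The co-occurrence idea behind such a platform can be sketched minimally as follows; the item names, records, and ranking are invented for illustration and are not the authors' implementation.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(transactions):
    """Count how often each unordered pair of items appears together in a row."""
    pair_counts = Counter()
    for basket in transactions:
        # Every pair of items in a row is "fully connected" (an arc).
        for pair in combinations(sorted(set(basket)), 2):
            pair_counts[pair] += 1
    return pair_counts

# Illustrative records: each row is a set of binary predictors observed together.
rows = [
    {"smoker", "low_activity", "metabolic_syndrome"},
    {"smoker", "low_activity"},
    {"low_activity", "metabolic_syndrome"},
    {"smoker", "metabolic_syndrome"},
]
pairs = cooccurrence_counts(rows)
# Ranking pairs by frequency surfaces candidate hypotheses (rules) to test later.
ranked = pairs.most_common()
```

Frequently co-occurring groups of predictors correspond to the clusters described above, and their ranked list plays the role of automatically generated plausible hypotheses.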
For the demonstration, a total of 13,254 metabolic syndrome training observations were plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, those associated with sociodemographics, habits, and activities. Some predictors, such as cancer examination, house type, and vaccination, were intentionally included to gain insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules were then validated with an external testing dataset of 4,090 observations. The results, a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. By contrast, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of the observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
Procedia PDF Downloads 276

2001 The Analysis of Computer Crimes Act 1997 in the Circumvention and Prevention of Computer Crimes in Malaysia
Authors: Nooraneda Mutalip Laidey
Abstract:
The Computer Crimes Act 1997 (CCA 1997) was passed by Malaysia's legislative body in 1997, and the Act was enforced in June 2000. The purpose of CCA 1997 is to provide for offences related to the misuse of computers, such as hacking, cracking and phishing. CCA 1997 was modelled after the United Kingdom's Computer Misuse Act 1990 as a response to emerging computer crimes. The legislation is divided into three parts and 12 sections. The first part outlines preliminary matters, including the short title and relevant definitions; the second part provides for the offences related to the misuse of computers and specifies penalties for each offence; and the last part deals with ancillary provisions such as jurisdictional and investigational issues of cybercrime. The main objective of this paper is to discuss the development of computer crimes and their deterrence in Malaysia. Specific sections of CCA 1997 will be analysed in detail, and a detailed assessment of the prevention and prosecution of computer crimes in Malaysia will be made to determine whether CCA 1997 is thus far adequate in preventing computer crimes in Malaysia.
Keywords: computer, computer crimes, CCA 1997, circumvention, deterrence
Procedia PDF Downloads 344

2000 Visualization of Chinese Genealogies with Digital Technology: A Case of Genealogy of Wu Clan in the Village of Gaoqian
Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu
Abstract:
Recording history is a tradition in ancient China. A record of a dynasty makes a dynastic history; a record of a locality makes a chorography; and a record of a clan makes a genealogy. The three combined depict a complete national history of China both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to the family trees or pedigrees of early and medieval historical times. After the Song Dynasty, civil society gradually emerged, and the Emperor had to allow people from the same clan to live together and hold ancestor worship activities; thereafter, the compilation of genealogies became popular in society. Since then, genealogies, regarded even today as being as important as ancestral and religious temples in a traditional village, have played a primary role in the identification of a clan and in maintaining local social order. Chinese genealogies are rich in their documentary materials. Take the Genealogy of Wu Clan in Gaoqian as an example. Gaoqian is a small village in Xianju County of Zhejiang Province. The Genealogy of Wu Clan in Gaoqian is composed of a whole set of materials, from the Foreword to Family Trees, Family Rules, Family Rituals, Family Graces and Glories, Ode to an Ancestor's Portrait, the Manual for the Ancestor Temple, documents on great men of the clan, works written by learned men of the clan, contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information on every aspect recorded in stylistic rules, is indeed the carrier of the entire culture of a clan. However, due to their scarcity in number and difficulties in reading, genealogies seldom fall within the horizons of common people. This paper, focusing on the case of the Genealogy of Wu Clan in the Village of Gaoqian, intends to reproduce a digital genealogy by use of ICTs, through an in-depth interpretation of the literature and field investigation in Gaoqian Village.
Based on this, the paper goes further to explore general methods of transferring physical genealogies to digital ones and ways of visualizing the clan culture embedded in the genealogies with a combination of digital technologies such as family-tree software, multimedia narratives, animation design, GIS applications and e-book creators.
Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS
Procedia PDF Downloads 222

1999 Turkey-Syria Relations between 2002-2011 from the Perspective of Social Construction
Authors: Didem Aslantaş
Abstract:
This study analyses, from the perspective of social constructivist theory, the reforms carried out by the Justice and Development Party, which came to power in 2002, and how the foreign policy understanding it transformed was reflected in relations with Syria. Against the backdrop of states' increasing security concerns after the September 11 attacks, the main problem of the research is how relations between Syria and Turkey developed and progressed in non-security dimensions. To find an answer to this question, the basic assumptions of constructivist theory will be used. Since there are only a limited number of studies in the literature, a comparative analysis of the Adana Consensus, the Cooperation Agreement between the Republic of Turkey and the Syrian Arab Republic, and the Joint Cooperation Agreement Against Terrorism and Terrorist Organizations will be included. To answer the main problem of the research and to support its arguments, document and archive analysis methods from among qualitative research methods will be used. The first part of the study explains what social constructivist theory is and its basic assumptions, while the second part covers Turkey-Syria relations between 2002 and 2011. The third and last part attempts to read the relations between the two countries through social constructivism, with reference to the foreign policy features of the AK Party period.
Keywords: Social Constructivist Theory, foreign policy analysis, Justice and Development Party, Syria
Procedia PDF Downloads 83

1998 Tempo-Spatial Pattern of Progress and Disparity in Child Health in Uttar Pradesh, India
Authors: Gudakesh Yadav
Abstract:
Uttar Pradesh is one of the poorest performing states of India in terms of child health. Using data from the three rounds of the NFHS and two rounds of the DLHS, this paper attempts to examine tempo-spatial change in child health and care practices in Uttar Pradesh and its regions. Rate-ratio, CI, multivariate, and decomposition analyses were used for the study. Findings demonstrate that child health care practices have improved over time in all regions of the state. However, the western and southern regions registered the lowest progress in child immunization. Moreover, there was no decline in the prevalence of diarrhea and ARI over the period, and it remains critically high in the western and southern regions. These regions also performed poorly in giving ORS and in diarrhea and ARI treatment. Public health services are the least preferred for diarrhea and ARI treatment. Results from the decomposition analysis reveal that rural residence, mother's illiteracy and wealth contributed most to the low utilization of child health care practices consistently over the period. The study calls for targeted intervention for vulnerable children to accelerate child health care service utilization. Poor performing regions should be targeted and routinely monitored on poor child health indicators.
Keywords: Acute Respiratory Infection (ARI), decomposition, diarrhea, inequality, immunization
Procedia PDF Downloads 300

1997 Continued Usage of Wearable Fitness Technology: An Extended UTAUT2 Model Perspective
Authors: Rasha Elsawy
Abstract:
Aside from the rapid growth of global information technology and the Internet, another key trend is the swift proliferation of wearable technologies. The future of wearable technologies is very bright as an emerging revolution in this technological world. Beyond this, individual continuance intention toward IT is an important area that has drawn academics' and practitioners' attention. The literature shows that continuance usage is an important concern that needs to be addressed for any technology to be advantageous and for consumers to succeed. However, consumers noticeably abandon their wearable devices soon after purchase, losing all subsequent benefits that can only be achieved through continued usage. Purpose: This thesis aims to develop an integrated model designed to explain and predict consumers' behavioural intention (BI) and continued use (CU) of wearable fitness technology (WFT), in order to identify the determinants of the CU of technology. In light of this, the question arises as to whether there are differences between technology adoption and post-adoption (CU) factors. Design/methodology/approach: The study employs the unified theory of acceptance and use of technology 2 (UTAUT2), which has the best explanatory power, as an underpinning framework, extending it with further factors along with user-specific personal characteristics as moderators. All items will be adapted from previous literature and slightly modified according to the WFT/SW context. A longitudinal investigation will be carried out to examine the research model, wherein a survey will include the constructs involved in the conceptual model. A quantitative approach based on a questionnaire survey will collect data from existing wearable technology users. Data will be analysed using the structural equation modelling (SEM) method based on IBM SPSS Statistics and AMOS 28.0.
Findings: The research findings will provide unique perspectives on user behaviour, intention, and actual continuance usage when accepting WFT. Originality/value: Unlike previous works, the current thesis comprehensively explores the factors that affect consumers' decisions to continue using wearable technology, which are influenced by technological/utilitarian, affective, emotional, psychological, and social factors, along with the role of the proposed moderators. A novel research framework is proposed by extending the UTAUT2 model with additional contextual variables classified into Performance Expectancy, Effort Expectancy, Social Influence (societal pressure regarding body image), Facilitating Conditions, Hedonic Motivation (split into two concepts: perceived enjoyment and perceived device annoyance), Price Value, and Habit-forming techniques, and by adding technology upgradability as a determinant of consumers' behavioural intention and continuance usage of Information Technology (IT). Further, drawing on personality traits theory, relevant user-specific personal characteristics (openness to technological innovativeness, conscientiousness in health, extraversion, neuroticism, and agreeableness) are proposed as moderators of the research model. Thus, the present thesis aims at a more convincing explanation that is expected to provide theoretical foundations for future research on emerging IT (such as wearable fitness devices) from a behavioural perspective.
Keywords: wearable technology, wearable fitness devices/smartwatches, continuance use, behavioural intention, upgradability, longitudinal study
Procedia PDF Downloads 113

1996 Emerging Film Makers in Tamil Cinema Liberated by Digital Media
Authors: Valarmathi Subramaniam
Abstract:
Ever since the first Indian feature film was produced and released by Shri Dada Saheb Phalke in the year 1913, the Indian film industry has grown by leaps and bounds. The Indian film industry stands as the largest film industry in the world, producing more than a thousand films every year with investments and revenues worth several billion rupees. As per the official report published by UNESCO on its website in 2017, India produced one thousand nine hundred and seven feature films using digital technology in 2015. Not only has cinema adapted to digital technologies, but digital technologies have also opened up avenues for talents to enter the cinema industry. This paper explores such talents who have emerged in the film industry without any background, neither academic nor familial, but holding digital media as their weapon. The research involves two variants of filmmaking technology: celluloid and digital. The study used a selective sampling of films released from 2020 to 2022. The sample was organized to identify established and fresh talents in the editing phase of filmmaking. There were 48 editors, of whom 12 were not well known and 6 were fresh to film without any background. Interview methods were used to collect data on what helped them get into the industry directly. The study found that digital media and digital technology enabled them to enter the film industry.
Keywords: digital media, digital in cinema, digital era talents, emerging new talents
Procedia PDF Downloads 117

1995 Dielectric Study of Lead-Free Double Perovskite Structured Polycrystalline BaFe0.5Nb0.5O3 Material
Authors: Vijay Khopkar, Balaram Sahoo
Abstract:
Materials with a high dielectric constant have applications in electronic devices. Existing lead-based materials have issues such as toxicity and problems with the synthesis procedure. Double perovskite structured barium iron niobate (BaFe0.5Nb0.5O3, BFN) is a lead-free material showing a high dielectric constant. The origin of the high dielectric constant in BFN is not clear. We studied the dielectric behaviour of a polycrystalline BFN sample over a wide temperature and frequency range. The BFN sample was synthesized by the conventional solid-state reaction method, and phase-pure dense pellets were used for the dielectric study. SEM and TEM studies show the presence of grain and grain boundary regions. The dielectric measurements were made between 40 Hz and 5 MHz at temperatures between 20 K and 500 K. At 500 K and low frequencies, a high dielectric constant was observed, which decreases with increasing frequency. The dipolar relaxation follows non-Debye-type polarization, with a relaxation strength of 3560 at room temperature (300 K). The activation energies calculated from the dielectric and modulus formalisms were found to be 17.26 meV and 2.74 meV, corresponding to the energy required for the motion of Fe3+ and Nb5+ ions within the oxygen octahedra. Our study shows that BFN is an order-disorder-type ferroelectric material.
Keywords: barium iron niobate, dielectric, ferroelectric, non-Debye
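One common way to extract an activation energy from dielectric relaxation data is an Arrhenius fit of relaxation times, τ = τ0·exp(Ea/kBT), where Ea comes from the slope of ln τ versus 1/T. The sketch below uses synthetic data generated with an assumed Ea of 17 meV (of the same order as the values reported above), not the measured BFN dataset.

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy(temperatures_K, taus):
    """Least-squares slope of ln(tau) vs 1/T, times k_B, gives Ea in eV."""
    x = [1.0 / t for t in temperatures_K]
    y = [math.log(tau) for tau in taus]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope * K_B_EV

# Synthetic relaxation times following tau = tau0 * exp(Ea / (k_B * T)).
Ea_true = 0.017  # eV (17 meV), an illustrative assumption
temps = [200.0, 250.0, 300.0, 350.0]
taus = [1e-6 * math.exp(Ea_true / (K_B_EV * t)) for t in temps]
Ea_fit = activation_energy(temps, taus)  # recovers ~0.017 eV
```

Since ln τ is exactly linear in 1/T for Arrhenius behaviour, the fitted slope recovers the assumed activation energy; with real data the scatter of the points around the line indicates how well the relaxation follows Arrhenius kinetics.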
Procedia PDF Downloads 137

1994 Socio-Economic Determinants of Physical Activity of Non-Manual Workers, Including the Early Senior Group, from the City of Wroclaw in Poland
Authors: Daniel Puciato, Piotr Oleśniewicz, Julita Markiewicz-Patkowska, Krzysztof Widawski, Michał Rozpara, Władysław Mynarski, Agnieszka Gawlik, Małgorzata Dębska, Soňa Jandová
Abstract:
Physical activity as a part of people's everyday life reduces the risk of many diseases, including those induced by lifestyle, e.g. obesity, type 2 diabetes, osteoporosis, coronary heart disease, degenerative arthritis, and certain types of cancer. This refers particularly to professionally active people, including the early senior group working in non-manual positions. The aim of the study is to evaluate the relationship between physical activity and the socio-economic status of non-manual workers from Wroclaw, one of the biggest cities in Poland and a model setting for such investigations in this part of Europe. The crucial problem in the research is to find out the percentage of respondents who meet the health-related recommendations of the World Health Organization (WHO) concerning the volume, frequency, and intensity of physical activity, and to establish whether the most important socio-economic factors, such as gender, age, education, marital status, per capita income, savings and debt, determine compliance with the WHO physical activity recommendations. During the research, conducted in 2013, 1,170 people (611 women and 559 men) aged 21–60 years were examined. A diagnostic survey method was applied to collect the data. Physical activity was measured with the short form of the International Physical Activity Questionnaire, extended with socio-demographic questions concerning gender, age, education, marital status, income, and savings or debts. To evaluate the relationship between physical activity and the selected socio-economic factors, logistic regression was used (odds ratio statistics). Statistical inference was conducted at the adopted ex ante probability level of p<0.05. The majority of respondents met the volume of physical effort recommended for health benefits. This was particularly noticeable in the case of the examined men.
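The odds-ratio statistic underlying the logistic regression can be illustrated with a simple 2x2 computation; the counts below are hypothetical, not the survey's data.

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio for meeting a criterion (yes/no) given exposure to a factor."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical counts: workers aged 21-30 vs. older workers,
# meeting / not meeting the WHO physical activity recommendations.
or_young = odds_ratio(120, 80, 150, 200)  # = (120/80) / (150/200) = 2.0
# or_young > 1: the younger group has higher odds of compliance.
```

A full logistic regression generalizes this idea, estimating one adjusted odds ratio per socio-economic factor while controlling for the others.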
The probability of compliance with the WHO physical activity recommendations was highest for workers aged 21–30 years with secondary or higher education who were single, received the highest incomes, and had savings. The results indicate relations between physical activity and socio-economic status in the examined women and men. People with lower socio-economic status (e.g. manual workers) are physically active primarily at work, whereas those better educated and wealthier undertake physical effort primarily in their leisure time. Among the investigated subjects, the youngest group of non-manual workers has the best chance of meeting the WHO standards of physical activity. The study also confirms that secondary education has a positive effect on public awareness of the role of physical activity in human life. In general, the analysis indicates that there is a relationship between physical activity and some socio-economic factors of the respondents, such as gender, age, education, marital status, income per capita, and the possession of savings. Although the obtained results cannot be generalized to the whole population, they show some important trends that will be verified in subsequent studies conducted by the authors of the paper.
Keywords: IPAQ, non-manual workers, physical activity, socioeconomic factors, WHO
Procedia PDF Downloads 535

1993 Women's Pathways to Prison in Thailand
Authors: Samantha Jeffries, Chontit Chuenurah
Abstract:
Thailand incarcerates the largest number of women and has the highest female incarceration rate in South East Asia. Since the 1990s, there has been a substantial increase in the number, rate and proportion of women imprisoned. Thailand places a high priority on the gender specific contexts out of which offending arises and the different needs of women in the criminal justice system. This is manifested in work undertaken to guide the development of the United Nations Rules for the Treatment of Women Prisoners and Non-Custodial Measures for Women Offenders (the Bangkok Rules); adopted by the United Nations General Assembly in 2010. The Bangkok Rules make a strong statement about Thailand’s recognition of and commitment to the fair and equitable treatment of women throughout their contact with the criminal justice system including at sentencing and in prison. This makes the comparatively high use of imprisonment for women in Thailand particularly concerning and raises questions about the relationship between gender, crime and criminal justice. While there is an extensive body of research in Western jurisdictions exploring women’s pathways to prison, there is a relative dearth of methodologically robust research examining the possible gendered circumstances leading to imprisonment in Thailand. In this presentation, we will report preliminary findings from a qualitative study of women’s pathways to prison in Thailand. Our research aims were to ascertain: 1) the type, frequency, and context of criminal behavior that led to women’s incarceration, 2) women’s experiences of the criminal justice system, 3) the broader life experiences and circumstances that led women to prison in Thailand. In-depth life history interviews (n=77) were utilized to gain a comprehensive understanding of women’s journeys into prison. The interview schedule was open-ended consisting of prisoner responses to broad discussion topics. 
This approach provided women with the opportunity to describe significant experiences in their lives, to bring together distinct chronologies of events, and to analyze links between their varied life experiences, offending, and incarceration. Analyses showed that women's journeys to prison take one of eight pathways, which we tentatively labelled as follows: 1) the harmed and harming pathway, 2) the domestic/family violence victimization pathway, 3) the drug-connected pathway, 4) the street woman pathway, 5) the economically motivated pathway, 6) the jealousy, anger and/or revenge pathway, 7) the naivety pathway, and 8) the unjust and/or corrupted criminal justice pathway. Each will be discussed fully during the presentation. This research is significant because it is the first in-depth, methodologically robust exploration of women's journeys to prison in Thailand and one of only a few studies to explore gendered pathways outside of Western contexts. Understanding women's pathways into Thailand's prisons is crucial to the development of effective planning, policy, and program responses, not only while women are in prison but also post-release. To best meet women's needs in prison and effectively support their reintegration, we must have a comprehensive understanding of who these women are, what offenses they commit, the reasons that trigger their confrontations with the criminal justice system, and the impact of the criminal justice system on them.
Keywords: pathways, prison, women, Thailand
Procedia PDF Downloads 246

1992 In vitro and in vivo Infectivity of Coxiella burnetii Strains from French Livestock
Authors: Joulié Aurélien, Jourdain Elsa, Bailly Xavier, Gasqui Patrick, Yang Elise, Leblond Agnès, Rousset Elodie, Sidi-Boumedine Karim
Abstract:
Q fever is a worldwide zoonosis caused by the gram-negative obligate intracellular bacterium Coxiella burnetii. Following the recent outbreaks in the Netherlands, a hypervirulent clone was found to be the cause of severe human cases of Q fever. In livestock, the clinical manifestations of Q fever are mainly abortions. Although abortion rates differ between ruminant species, C. burnetii's virulence remains understudied, especially in enzootic areas. In this study, the infectious potential of three C. burnetii isolates collected from French farms of small ruminants was compared to the reference strain Nine Mile (in phase II and in an intermediate phase) using an in vivo (CD1 mouse) model. Mice were challenged with 10^5 live bacteria, discriminated by propidium monoazide-qPCR targeting the icd gene. After footpad inoculation, the spleen and popliteal lymph node were harvested at 10 days post-inoculation (p.i.). Strain invasiveness in the spleen and popliteal nodes was assessed by qPCR assays targeting the icd gene. Preliminary results showed that the avirulent strains (in phase II) failed to pass the popliteal barrier and thus to colonize the spleen. This model allowed significant differentiation of strain invasiveness in a biological host and therefore the identification of distinct virulence profiles. In view of these results, we plan to go further by testing fifteen additional C. burnetii isolates from French farms of sheep, goats, and cattle using the above-mentioned in vivo model. All 15 strains display distant MLVA (multiple-locus variable-number tandem repeat analysis) genotypic profiles. Five of the fifteen isolates will also be tested in vitro on ovine and bovine macrophage cells. Cells and supernatants will be harvested at days 1, 2, 3, and 6 p.i. to assess the in vitro multiplication kinetics of the strains.
In conclusion, our findings might help implement surveillance of virulent strains and ultimately allow prophylaxis measures in livestock farms to be adapted.
Keywords: Q fever, invasiveness, ruminant, virulence
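As a rough illustration of the quantification step behind the invasiveness comparison described above, the sketch below converts qPCR Ct values into icd-gene copy numbers through a linear standard curve. The slope, intercept, and Ct values are illustrative placeholders, not the study's actual calibration or data.

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Estimate target-gene copy number from a qPCR Ct value using a
    linear standard curve: Ct = slope * log10(copies) + intercept.
    The default slope corresponds to ~100% amplification efficiency;
    both defaults are illustrative, not measured values."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical spleen Ct values at 10 days p.i. for two strains:
spleen_ct = {"field_isolate": 24.5, "nine_mile_phase_II": 33.0}
loads = {strain: copies_from_ct(ct) for strain, ct in spleen_ct.items()}

# A lower Ct means a higher bacterial load, i.e. greater invasiveness.
ratio = loads["field_isolate"] / loads["nine_mile_phase_II"]
```

With these placeholder values, the field isolate would carry a spleen load several hundred times that of the avirulent phase II strain, which is the kind of contrast the in vivo model is designed to detect.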
Procedia PDF Downloads 361
1991 For a Poetic Clinic: Experimentations at Risk on the Images in Performances
Authors: Juliana Bom-Tempo
Abstract:
The proposed composition occurs between images, performances, clinics, and philosophies. For this enterprise we depart from what is not known beforehand, with a question as a compass: "would the creation, production, and execution of images in a performance constitute a 'when' for the event of a poetic clinic?" In light of this, to think a 'when' of the event of a poetic clinic, there are images in performances created, produced, and executed in partnership with the author of this text. Faced with this composition, we built four indicators to find spatio-temporal coordinates that would locate that "when", namely: risk zones; the mobilization of signs; the figuring of the flesh; and an education of the affections. We dealt with the images in performances Crútero; Flesh; Karyogamy and the risk of abortion; Egg white; Egg-mouth; Islands, threads, words ... germs; and Egg-Mouth-Debris, taken as case studies: by engendering risk zones to promote individuations, which never actualize thoroughly, so that something always remains pre-individual while also individuating an environment; by mobilizing the signs territorialized by the ordinary, causing them to vary the language and the words of order dictated by the everyday into other compositions of sense, other machinations; by generating a figure of flesh, disarranging the bodies, isolating them in the production of a ground force that causes the body to leak out and undo the functionalities of the organs; and, finally, by producing an education of affections, placing the perceptions in becoming and disconnecting the visible in the production of small deserts that call for the creation of a people yet to come. The performance proceeds as a problematization of the images fixed by the ordinary, producing gestures that precipitate the individuation of images in performance, strange to the configurations that gather bodies and spaces in what we call common.
Lawrence proposes to think of "people" who continually use umbrellas to protect themselves from chaos. These have the function of wrapping up the chaos in visions that create houses, forms, and stabilities; they paint a sky at the bottom of the umbrella, where people march and die. A chaos, where people live and wither. To pierce the umbrella out of a desire for chaos: a poet sets himself against convention, to be able to have an image of chaos and a little sun that burns his skin. The images in performances presented thereby moved in search of the power to produce a spatio-temporal "when", putting territories in risk zones, mobilizing the signs that format the day-to-day, opening the bodies to a disorganization, and producing an education of affections for the event of a poetic clinic.
Keywords: experimentations, images in performances, poetic clinic, risk
Procedia PDF Downloads 114
1990 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This report summarizes a simulation-based approach to estimating the current interruption behavior of a fuse element protecting battery banks in a DC network under different stresses. Because of the internal resistance of the batteries, the short-circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multi-physics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation starts with the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, the heat transfer between the metallic strip and the adjacent materials melts and evaporates the filler and housing before the aluminum strip is evaporated; the current flow in the evaporated strip is then cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation = 6.6 m/s). The final stage of the simulation concerns the arc itself and its interaction with the external circuitry. Because of the strong ablation of the filler material and the venting of the arc caused by the melting and evaporation of the filler and housing before arc initiation, the arc is assumed to burn in almost pure ablated material. To model this arc precisely, one further step was necessary: deriving the transport coefficients of the plasma in ablated urethane. The results indicate that current interruption will not, in this case, be achieved within the first tens of milliseconds. In a further study considering two series elements, the arc was interrupted within a few milliseconds.
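The elongation rate quoted above follows directly from the simulated arc length and the time after initiation; a minimal consistency check, using only the two figures given in the abstract:

```python
# Consistency check of the reported arc elongation rate:
# a ~20 mm arc is formed within ~3 ms after arc initiation.
arc_length_m = 20e-3       # final arc length from the simulation
elongation_time_s = 3e-3   # elapsed time after arc initiation

v_elongation = arc_length_m / elongation_time_s  # metres per second
# ~6.7 m/s, consistent with the reported v_elongation = 6.6 m/s
```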
A very important aspect in this context is the potential impact of the many broken strips parallel to the one where the arc occurs. The generated arcing voltage is also applied to the other broken strips connected in parallel with the arcing path. As the gaps in the other strips are very small, the large voltage of a few hundred volts generated during current interruption may eventually lead to the breakdown of another gap. Since two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will again be carried by a single arc. This process may repeat several times if the generated voltage is very large; the ultimate result is that current interruption may be delayed.
Keywords: DC network, high-current/low-SC fuses, FEM simulation, parallel fuses
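The re-ignition mechanism described above amounts to comparing the electric field across each small residual gap with a breakdown threshold. The sketch below makes that comparison explicit; the 3 MV/m threshold and the gap widths are illustrative order-of-magnitude assumptions (roughly that of air at atmospheric pressure), not properties of the ablated filler plasma computed in the study.

```python
def gap_breaks_down(voltage_v, gap_m, breakdown_field_v_per_m=3e6):
    """Crude uniform-field criterion: a gap re-ignites when the mean
    electric field across it exceeds the breakdown field. Threshold
    is an assumed order-of-magnitude value, not a measured one."""
    return voltage_v / gap_m > breakdown_field_v_per_m

# A few hundred volts of arcing voltage across residual gaps of
# different (hypothetical) widths in the parallel broken strips:
arcing_voltage_v = 300.0
small_gap = gap_breaks_down(arcing_voltage_v, 0.05e-3)  # 6 MV/m field
wide_gap = gap_breaks_down(arcing_voltage_v, 0.2e-3)    # 1.5 MV/m field
```

Under these assumptions the sub-0.1 mm gap re-ignites while the wider one holds off, which is the repeated hand-over between parallel arcing paths that the abstract identifies as the cause of delayed interruption.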
Procedia PDF Downloads 66