Search results for: cardiac output
310 Quantum Information Scrambling and Quantum Chaos in Silicon-Based Fermi-Hubbard Quantum Dot Arrays
Authors: Nikolaos Petropoulos, Elena Blokhina, Andrii Sokolov, Andrii Semenov, Panagiotis Giounanlis, Xutong Wu, Dmytro Mishagli, Eugene Koskin, Robert Bogdan Staszewski, Dirk Leipold
Abstract:
We investigate entanglement and quantum information scrambling (QIS) using the example of a many-body extended and spinless effective Fermi-Hubbard model (EFHM and e-FHM, respectively) that describes a special type of quantum dot array provided by Equal1 Labs' silicon-based quantum computer. The concept of QIS is used in the framework of quantum information processing by quantum circuits and quantum channels. In general, QIS manifests as the delocalization of quantum information over the entire quantum system; more compactly, information about the input cannot be obtained by local measurements of the output of the quantum system. In our work, we first introduce the concept of quantum information scrambling and its connection with the 4-point out-of-time-order (OTO) correlators. To obtain a quantitative measure of QIS, we use the tripartite mutual information, along similar lines to previous works, which measures the mutual information between 4 different spacetime partitions of the system, and study the Transverse Field Ising (TFI) model; this is used to quantify the dynamical spreading of quantum entanglement and information in the system. Then, we investigate scrambling in the quantum many-body extended Hubbard model with external magnetic field Bz and spin-spin coupling J for both uniform and thermal quantum channel inputs and show that it scrambles for specific external tuning parameters (e.g., tunneling amplitudes, on-site potentials, magnetic field). In addition, we compare different Hilbert space sizes (different numbers of qubits) and show the qualitative and quantitative differences in quantum scrambling as we increase the number of quantum degrees of freedom in the system. Moreover, we find a "scrambling phase transition" at a threshold temperature in the thermal case, that is, the temperature of the model at which the channel starts to scramble quantum information. Finally, we make comparisons to the TFI model, highlight the key physical differences between the two systems, and mention some future directions of research.
Keywords: condensed matter physics, quantum computing, quantum information theory, quantum physics
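As an illustration of the 4-point OTO correlator the abstract builds on, the following minimal sketch (not the authors' code; system size, couplings and operator placement are assumptions) computes the infinite-temperature OTOC F(t) for a small transverse-field Ising chain, the benchmark model used in the paper:

```python
# Minimal OTOC sketch for a small TFI chain; all parameters are illustrative.
import numpy as np
from scipy.linalg import expm

N = 4                      # number of spins (assumption)
J, h = 1.0, 1.05           # Ising coupling and transverse field (assumption)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on_site(op, site):
    """Embed a single-site operator into the full 2^N Hilbert space."""
    mats = [op if k == site else I2 for k in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# H = -J sum Z_i Z_{i+1} - h sum X_i (open chain)
H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N - 1):
    H -= J * op_on_site(Z, i) @ op_on_site(Z, i + 1)
for i in range(N):
    H -= h * op_on_site(X, i)

W0 = op_on_site(X, 0)       # "butterfly" operator on the first site
V = op_on_site(Z, N - 1)    # probe operator on the last site

for t in np.linspace(0.0, 5.0, 11):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W0 @ U
    # infinite-temperature 4-point OTOC; its decay signals scrambling
    F = np.trace(Wt @ V @ Wt @ V).real / 2**N
    print(f"t = {t:4.1f}   F(t) = {F:+.4f}")
```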
Procedia PDF Downloads 99
309 Performance Evaluation of Routing Protocols in Vehicular Adhoc Networks
Authors: Salman Naseer, Usman Zafar, Iqra Zafar
Abstract:
This study explores the implications of the Vehicular Adhoc Network (VANET), a domain of the Mobile Adhoc Network (MANET), in rural and urban scenarios. VANET provides wireless communication between vehicles and also with roadside units. The Federal Communications Commission of the United States of America has allocated 75 MHz of spectrum in the 5.9 GHz frequency band for dedicated short-range communications (DSRC), specifically designed to enhance road safety applications and entertainment/information applications. Several vehicular projects, viz. California PATH, the Car 2 Car Communication Consortium, ETSI, and the IEEE 1609 working group, have already been conducted to improve overall road safety or traffic management. After a critical literature review, the routing protocols were selected and their performance was examined in the urban and rural scenarios. Numerous routing protocols for VANET were applied to carry out the current research. Their evaluation was conducted through simulation using the performance metrics of throughput and packet drop. Excel and Google graph API tools were used for plotting the graphs from the simulation results in order to compare the selected routing protocols with each other. In addition, the sum of the output from each scenario was computed to clearly present the divergence in results. The findings of the current study show that DSR gives enhanced performance, with lower packet drop and higher throughput, compared to AODV and DSDV in urban congested areas and in rural environments. On the other hand, in low-density areas, AODV gives better results than DSR. The worth of the current study may be judged by the fact that the information exchanged between vehicles is useful for comfort, safety, and entertainment. Furthermore, the communication system performance depends on the way routing is done in the network, and the routing of the data is based on the protocols implemented in the network. The above-presented results lead to policy implications and develop our understanding of the broader spectrum of VANET.
Keywords: AODV, DSDV, DSR, Adhoc network
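For readers unfamiliar with the two metrics used to rank the protocols, the short sketch below computes throughput and packet-drop ratio from sent/received counts; all numbers are hypothetical stand-ins, not the study's simulation data:

```python
# Hypothetical trace counts for one simulation run (assumptions throughout).
received_bits = 4_820_000      # bits delivered during the run
sim_time_s = 200.0             # simulated duration in seconds
packets_sent, packets_received = 6_000, 5_430

throughput_kbps = received_bits / sim_time_s / 1000.0
drop_ratio = (packets_sent - packets_received) / packets_sent

print(f"throughput  : {throughput_kbps:.1f} kbps")
print(f"packet drop : {100 * drop_ratio:.1f} %")
```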
Procedia PDF Downloads 286
308 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting
Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu
Abstract:
Large lecture theatres cannot be covered by a single camera but rather require a multicamera setup because of their size, shape, and seating arrangements, although an ordinary classroom can be captured with a single camera. Therefore, the design and implementation of a multicamera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on the academic performance of students. However, the traditional method of taking attendance is below standard, especially for large lecture theatres, because of the student population, the time required, the sophistication and exhaustiveness involved, and manipulative influence. An automated large classroom attendance system is, therefore, imperative. The common approach in such systems is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face database updates due to constant changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face localization-based approach to detect student faces in classroom images captured using a multicamera setup. A selected Haar-like feature cascade face detector, trained with an asymmetric goal to minimize the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was applied on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between them for automatic adjustment during training. An evaluation of the proposed approach and the conventional AdaBoost on classroom datasets shows an improvement of 8% in TPR (a consequence of the low FRR) and a 7% reduction of the FRR. The average learning speed of the proposed approach was improved, with 1.19 s execution time per image compared to 2.38 s for the improved AdaBoost. Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
Keywords: automatic attendance, face detection, haar-like cascade, manual attendance
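The counting step itself can be illustrated with OpenCV's stock Haar cascade; the detector file, image path and tuning parameters below are stand-in assumptions, since the paper trains its own asymmetric cascade:

```python
# Face-counting sketch with OpenCV's pre-trained frontal-face Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("classroom.jpg")          # hypothetical classroom frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = detector.detectMultiScale(
    gray,
    scaleFactor=1.1,       # image-pyramid step (assumption)
    minNeighbors=5,        # detection-merging threshold (assumption)
    minSize=(24, 24),      # smallest face in pixels (assumption)
)

# attendance = number of localized faces; crops can be saved for audit
print(f"attendance count: {len(faces)}")
for i, (x, y, w, h) in enumerate(faces):
    cv2.imwrite(f"face_{i:03d}.png", image[y:y + h, x:x + w])
```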
Procedia PDF Downloads 71
307 Comparison of Propofol versus Ketamine-Propofol Combination as an Anesthetic Agent in Supratentorial Tumors: A Randomized Controlled Study
Authors: Jakkireddy Sravani
Abstract:
Introduction: The maintenance of hemodynamic stability is of pivotal importance in supratentorial surgeries. Anesthesia for supratentorial tumors requires an understanding of localized or generalized rising ICP, regulation and maintenance of intracerebral perfusion, and avoidance of secondary systemic ischemic insults. We aimed to compare the effects of the combination of ketamine and propofol with propofol alone when used as induction and maintenance anesthetic agents during surgery for supratentorial tumors. Methodology: This prospective, randomized, double-blinded controlled study was conducted at AIIMS Raipur after obtaining Institute Ethics Committee approval (1212/IEC-AIIMSRPR/2022 dated 15/10/2022), CTRI/2023/01/049298 registration, and written informed consent. Fifty-two supratentorial tumor patients posted for craniotomy and excision were included in the study. The patients were randomized into two groups. One group received a combination of ketamine and propofol, and the other group received propofol for induction and maintenance of anesthesia. Intraoperative hemodynamic stability and quality of brain relaxation were studied in both groups. Statistical analysis and technique: An MS Excel spreadsheet program was used to code and record the data. Data analysis was done using IBM Corp SPSS v23. The independent-sample t-test was applied for continuously distributed data when two groups were compared, the chi-square test for categorical data, and the Wilcoxon test for non-normally distributed data. Results: The patients were comparable in terms of demographic profile, duration of surgery, and intraoperative input-output status. The trends in BIS over time were similar between the two groups (p-value = 1.00). Intraoperative hemodynamics (SBP, DBP, MAP) were better maintained in the ketamine and propofol combination group during induction and maintenance (p-value < 0.01). The quality of brain relaxation was comparable between the two groups (p-value = 0.364). Conclusion: The ketamine and propofol combination for induction and maintenance of anesthesia was associated with superior hemodynamic stability, required fewer vasopressors during excision of supratentorial tumors, provided adequate brain relaxation, and conferred some degree of neuroprotection compared to propofol alone.
Keywords: supratentorial tumors, hemodynamic stability, brain relaxation, ketamine, propofol
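A minimal sketch of the three named tests in Python/SciPy (the study itself used SPSS v23); all numbers are made-up illustrations, not the trial data:

```python
# Hypothetical example of the study's statistical toolkit: independent t-test,
# chi-square on a contingency table, and the Wilcoxon signed-rank test.
import numpy as np
from scipy import stats

# mean arterial pressure (mmHg) in the two groups, hypothetical values
map_ketofol = np.array([78, 82, 80, 76, 84, 79, 81, 77])
map_propofol = np.array([70, 68, 74, 66, 72, 69, 71, 67])
t_stat, p_t = stats.ttest_ind(map_ketofol, map_propofol)

# categorical outcome (e.g., vasopressor needed yes/no), hypothetical table
contingency = np.array([[3, 23], [11, 15]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)

# paired non-normal scores (e.g., relaxation ratings), hypothetical values
relax_a = np.array([2, 3, 2, 4, 3, 2, 3, 4])
relax_b = np.array([3, 4, 1, 5, 4, 3, 4, 2])
w_stat, p_w = stats.wilcoxon(relax_a, relax_b)

print(f"t-test      p = {p_t:.3f}")
print(f"chi-square  p = {p_chi2:.3f}")
print(f"Wilcoxon    p = {p_w:.3f}")
```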
Procedia PDF Downloads 25
306 Real-Time Radiological Monitoring of the Atmosphere Using an Autonomous Aerosol Sampler
Authors: Miroslav Hyza, Petr Rulik, Vojtech Bednar, Jan Sury
Abstract:
An early and reliable detection of an increased radioactivity level in the atmosphere is one of the key aspects of atmospheric radiological monitoring. Although standard laboratory procedures provide detection limits as low as a few µBq/m³, their major drawback is delayed result reporting: typically a few days. This issue is the main objective of the HAMRAD project, which gave rise to a prototype of an autonomous monitoring device. It is based on the idea of sequential aerosol sampling using a carousel sample changer combined with a gamma-ray spectrometer. In our hardware configuration, the air is drawn through a filter positioned on the carousel so that it can be rotated into the measuring position after a preset sampling interval. Filter analysis is performed via a 50% HPGe detector inside 8.5 cm lead shielding. The spectrometer output signal is then analyzed using DSP electronics and Gamwin software with preset nuclide libraries and other analysis parameters. After the counting, the filter is placed into a storage bin with a capacity of 250 filters so that the device can run autonomously for several months, depending on the preset sampling frequency. The device is connected to a central server via GPRS/GSM, where the user can view monitoring data, including raw spectra and technological data describing the state of the device. All operating parameters can be remotely adjusted through a simple GUI. The flow rate is continuously adjustable up to 10 m³/h. The main challenge in spectrum analysis is natural background subtraction. As detection limits are heavily influenced by the deposited activity of radon decay products and the measurement time is fixed, there must exist an optimal sample decay time (delayed spectrum acquisition). To solve this problem, we adopted a simple procedure based on sequential spectrum acquisition and an optimal partial spectral sum with respect to the detection limits for a particular radionuclide. The prototype device proved able to detect atmospheric contamination at the level of mBq/m³ per 8 h of sampling.
Keywords: aerosols, atmosphere, atmospheric radioactivity monitoring, autonomous sampler
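The trade-off behind the optimal decay time can be sketched as follows: the radon-progeny background decays away, so the Currie detection limit for a fixed counting window improves with delay. The effective half-life, count rate and window length below are illustrative assumptions, not the project's calibration:

```python
# Detection limit vs. sample decay delay, under a single-exponential
# radon-progeny background model (all parameters are assumptions).
import numpy as np

HALF_LIFE_MIN = 40.0        # effective radon-progeny half-life, minutes
LAM = np.log(2) / HALF_LIFE_MIN
R0 = 50.0                   # initial background rate in the peak region, counts/s
WINDOW_MIN = 120.0          # fixed acquisition window, minutes

def background_counts(delay_min):
    """Integral of the decaying background over the counting window."""
    t0, t1 = delay_min, delay_min + WINDOW_MIN
    return (R0 / LAM) * (np.exp(-LAM * t0) - np.exp(-LAM * t1)) * 60.0

for delay in (0, 30, 60, 120, 240):
    b = background_counts(delay)
    # Currie detection limit (counts) for a well-characterized background
    ld = 2.71 + 4.65 * np.sqrt(b)
    print(f"delay {delay:4d} min  background {b:10.0f}  L_D {ld:8.0f} counts")
```

In the real device the delay also competes with the decay of the radionuclide of interest and the fixed sampling cycle, which is why the abstract optimizes a partial spectral sum per nuclide rather than a single delay.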
Procedia PDF Downloads 148
305 Study and Simulation of a Severe Dust Storm over West and South West of Iran
Authors: Saeed Farhadypour, Majid Azadi, Habibolla Sayyari, Mahmood Mosavi, Shahram Irani, Aliakbar Bidokhti, Omid Alizadeh Choobari, Ziba Hamidi
Abstract:
In recent decades, the frequency of dust events has increased significantly in the west and south-west of Iran. First, a survey of the dust events during the period 1990-2013 was carried out using historical dust data collected at 6 weather stations scattered over the west and south-west of Iran. After statistical analysis of the observational data, one of the most severe dust storm events, which occurred in the region from 3rd to 6th July 2009, was selected and analyzed. The WRF-Chem model was used to simulate the amount of PM10 and how it is transported to these areas. The initial and lateral boundary conditions for the model were obtained from GFS data with 0.5°×0.5° spatial resolution. In the simulation, two aerosol schemes (GOCART and MADE/SORGAM) with 3 options (chem_opt=106, 300 and 303) were evaluated. The results of the statistical analysis of the historical data showed that the south-west of Iran has a high frequency of dust events, such that Bushehr station has the highest frequency among the stations and Urmia station the lowest. Also, over the period 1990 to 2013, the years 2009 and 1998, with 3221 and 100 events respectively, had the highest and lowest numbers of dust events; according to the monthly variation, June and July had the highest frequency of dust events and December the lowest. Besides, the model results showed that the MADE/SORGAM scheme predicted the values and trends of PM10 better than the other schemes and showed better performance in comparison with the observations. Finally, the distribution of PM10 and the surface wind maps obtained from the numerical modeling showed the formation of dust plumes in Iraq and Syria and their transport to the west and south-west of Iran. In addition, comparing the MODIS satellite image acquired on 4th July 2009 with the model output at the same time showed the good ability of WRF-Chem in simulating the spatial distribution of dust.
Keywords: dust storm, MADE/SORGAM scheme, PM10, WRF-Chem
Procedia PDF Downloads 270
304 Sustainable Integrated Waste Management System
Authors: Lidia Lombardi
Abstract:
Waste management in Europe and North America is evolving towards sustainable materials management, intended as a systemic approach to using and reusing materials more productively over their entire life cycles. Various waste management strategies are prioritized and ranked from the most to the least environmentally preferred, placing emphasis on reducing, reusing, and recycling as keys to sustainable materials management. However, non-recyclable materials must also be appropriately addressed, and waste-to-energy (WtE) offers a solution to manage them, especially when a WtE plant is integrated within a complex system of waste and wastewater treatment plants and potential users of the output flows. To evaluate the environmental effects of such system integration, Life Cycle Assessment (LCA) is a helpful and powerful tool. LCA has been widely applied to the waste management sector, dating back to the late 1990s, producing a large number of theoretical studies and applications to the real world in support of waste management planning. However, LCA still has a fundamental role in helping the development of waste management systems by supporting decisions. Thus, LCA was applied to evaluate the environmental performance of a Municipal Solid Waste (MSW) management system, with improved separate material collection and recycling and an integrated network of treatment plants, including WtE, anaerobic digestion (AD) and also a wastewater treatment plant (WWTP), for a reference study-case area. The proposed system was compared to the actual situation, characterized by poor recycling, extensive landfilling and the absence of WtE. The LCA results showed that increased recycling significantly improves the environmental performance, but there is still room for improvement through the introduction of energy recovery (especially by WtE) and through its use within the system, for instance, by feeding the heat to the AD and to sludge recovery processes and by supporting water reuse. WtE offers a solution to manage non-recyclable MSW and allows saving important resources (such as landfill volumes and non-renewable energy), reducing the contribution to global warming, and providing an essential contribution to fulfilling the goals of truly sustainable waste management.
Keywords: anaerobic digestion, life cycle assessment, waste-to-energy, municipal solid waste
Procedia PDF Downloads 60
303 An Ecofriendly Approach for the Management of Aedes aegypti L (Diptera: Culicidae) by Ocimum sanctum
Authors: Mohd Shazad, Kamal Kumar Gupta
Abstract:
Aedes aegypti (Diptera: Culicidae), commonly known as the tiger mosquito, is the vector of dengue fever, yellow fever, chikungunya and Zika virus. In the absence of any effective vaccine against these diseases, controlling the mosquito population is the only promising means of preventing them. Currently used chemical insecticides cause environmental contamination, high mammalian toxicity, hazards to non-target organisms, insecticide resistance and vector resurgence. The present research work aimed to explore the potential of the phytochemicals present in Ocimum sanctum for management of mosquito populations. The leaves of Ocimum were extracted with ethanol by the 'cold extraction' method. Fourth instar larvae of Aedes aegypti, 0-24 h old, were treated with the extract at concentrations of 50 ppm, 100 ppm, 200 ppm and 400 ppm for 24 h. Survival, growth and development of the treated larvae were evaluated. The adults emerging from the treated larvae were used for reproductive fitness studies. Our results indicate 77.2% mortality in the larvae exposed to 400 ppm. At lower doses, although there was no significant reduction in survival after 24 h, it decreased during subsequent days of observation. In the control experiments, no mortality was observed. It was also observed that the larvae surviving treatment showed severe growth and developmental abnormalities. There was a significant increase in larval duration. In the control, fourth instars moulted into pupae after 3 days, while larvae treated with 400 ppm extract moulted after 4.6 days. Larva-pupa intermediates and pupa-adult intermediates were observed in many cases. The adults emerging from the treated larvae showed impaired mating and oviposition behaviour. The females exhibited a longer preoviposition period, reduced oviposition rate and decreased egg output. GC-MS analysis of the ethanol extract revealed the presence of JH mimics and intermediates of the JH biosynthetic pathway. The potential of Ocimum sanctum in an integrated vector management programme for Aedes aegypti is discussed.
Keywords: Aedes aegypti, Ocimum sanctum, oviposition, survival
Procedia PDF Downloads 183
302 Concepts of Instrumentation Scheme for Thought Transfer
Authors: Rai Sachindra Prasad
Abstract:
Thought is a physical force. This has been well recognized but hardly translated, visually or otherwise, in the sense of its transfer from one individual to another. In the present world of chaos and disorder, with yawning gaps between right- and wrong-thinking individuals, if it were possible to transfer right thoughts to replace wrong ones, it would indeed be a great achievement in a world torn by the violence of individuals with dangerous thoughts. Moreover, such a possibility would completely remove the barrier of language between two persons, which at times proves to be a great obstacle in realizing a desired purpose. If a proper instrumentation scheme containing appropriate transducers and electronics were designed and implemented to realize this thought transfer phenomenon, it would prove extremely useful when properly used. Considering the advancements already made in recording nerve impulses in the brain, which are electrical events of very short duration that move along the axon, it is conceivable that these may be used to good effect in implementing the scheme. In such a proposition one should consider the roles played by the pineal body, the pituitary gland and the 'association' areas. Pioneering students of the brain thought that associations or connections between sensory input and motor output were made in these areas. It is currently believed that, rather than being regions of simple sensory-motor connections, the association areas process and integrate sensory information relayed to them from the primary sensory areas of the cortex and from the thalamus; after the information has been processed, it may be sent to motor areas to be acted upon. Again, even though the role played by the pineal body is not fully known to neurologists, its interconnection with the pituitary gland is a matter of great significance to the 'Rishis' and 'Seers' described in the Vedas and Puranas, the ancient holy books of the Hindus. If the pineal body is activated through meditation, it would control the pituitary gland and thereby the individual's thoughts and acts. Thus, if thoughts can be picked up by special transducers, these can be connected to suitable electronic circuitry to amplify the signals. These signals, in the form of electromagnetic waves, can then be transmitted using modems for long-distance transmission and eventually received by, or passed on to, a subject of interest through another set of electronic circuits and devices.
Keywords: modems, pituitary gland, pineal body, thought transfer
Procedia PDF Downloads 372
301 Cost-Effective and Optimal Control Analysis for Mitigation Strategy to Chocolate Spot Disease of Faba Bean
Authors: Haileyesus Tessema Alemneh, Abiyu Enyew Molla, Oluwole Daniel Makinde
Abstract:
Introduction: The faba bean is one of the most important plants grown worldwide for humans and animals. Several biotic and abiotic factors have limited the output of faba beans, notwithstanding their diverse significance. Many faba bean pathogens have been reported so far, of which the most important yield-limiting disease is chocolate spot disease (Botrytis fabae). The dynamics of disease transmission and the decision-making processes for intervention programs for disease control are now better understood through the use of mathematical modeling, and many mathematical modeling researchers are currently interested in plant disease modeling. Objective: In this paper, a deterministic mathematical model for chocolate spot disease (CSD) on the faba bean plant, together with an optimal control model, was developed and analyzed to examine the best strategy for controlling CSD. Methodology: Three control interventions, prevention (u1), quarantine (u2), and chemical control (u3), are employed to establish the optimal control model. The optimality system, the characterization of the controls, the adjoint variables, and the Hamiltonian are all generated employing Pontryagin's maximum principle. A cost-effective approach is chosen from a set of possible integrated strategies using the incremental cost-effectiveness ratio (ICER). The forward-backward sweep iterative approach is used to run numerical simulations. Results: The Hamiltonian, the optimality system, the characterization of the controls, and the adjoint variables were established. The numerical results demonstrate that each integrated strategy can reduce the disease within the specified period. However, due to limited resources, an integrated strategy of prevention and uprooting was found to be the most cost-effective strategy to combat CSD. Conclusion: Therefore, attention should be given to this integrated, cost-effective and environmentally friendly strategy by stakeholders and policymakers to control CSD, and the integrated intervention should be disseminated to farmers in order to fight the spread of CSD in the faba bean population and produce the expected yield from the field.
Keywords: CSD, optimal control theory, Pontryagin's maximum principle, numerical simulation, cost-effectiveness analysis
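The ICER used to rank strategies is simply the extra cost per extra unit of effect between successively more effective strategies. A minimal sketch, with made-up costs and infections averted rather than the paper's results:

```python
# ICER ranking sketch; strategy names, costs and effects are hypothetical.
def icer(cost_hi, effect_hi, cost_lo, effect_lo):
    """Extra cost per extra unit of effect of the costlier strategy."""
    return (cost_hi - cost_lo) / (effect_hi - effect_lo)

# strategies sorted by effect (infections averted); first row is compared
# against doing nothing (cost 0, effect 0)
strategies = [
    ("prevention only",        1200.0, 3500.0),
    ("prevention + uprooting", 2100.0, 5200.0),
    ("prevention + chemical",  4800.0, 5600.0),
]

prev_cost, prev_eff = 0.0, 0.0
for name, cost, effect in strategies:
    print(f"{name:24s} ICER = {icer(cost, effect, prev_cost, prev_eff):8.3f}")
    prev_cost, prev_eff = cost, effect
```

Strategies with a markedly higher ICER than the next-best alternative are dropped, which is how the prevention-plus-uprooting combination emerges as the cost-effective choice in the paper.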
Procedia PDF Downloads 86
300 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections
Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang
Abstract:
Open-section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance, as well as economic and architectural advantages. However, the use of this configuration in construction is limited by the difficulty of connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcome this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between concrete, bolt, and steel section. In recent years, research in structural performance has moved to a more sophisticated and efficient approach consisting of machine learning algorithms that generate metamodels. This method reduces the need to develop complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data-fusion approach use numerical and experimental results, combining multiple models to capture the dependency between the simulation design variables and connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection will transform high-rise and multistorey construction by introducing design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels, generated by means of artificial neural network algorithms, which predict the connection stress and stiffness based on the design parameters when using Extended Hollo-Bolt blind bolts. It also considers the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving blind-bolted rigid connections when using the blind fastener.
Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling
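A minimal sketch of such an ANN metamodel, using scikit-learn on synthetic data; the design parameters, training set and network size are illustrative assumptions, not the study's model:

```python
# ANN metamodel sketch: map design parameters to a predicted stiffness.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# design variables: bolt diameter (mm), tube wall thickness (mm),
# concrete strength (MPa); synthetic stand-ins for FE/experimental data
X = rng.uniform([12, 5, 20], [24, 12, 60], size=(200, 3))
# synthetic "stiffness" response with noise, for demonstration only
y = 3.0 * X[:, 0] + 8.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 2.0, 200)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                     random_state=0).fit(scaler.transform(X), y)

new_design = np.array([[20.0, 8.0, 40.0]])   # hypothetical candidate design
print(f"predicted stiffness: "
      f"{model.predict(scaler.transform(new_design))[0]:.1f}")
```

Once trained, such a surrogate can be queried thousands of times in a design search at negligible cost compared to rerunning the finite element model.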
Procedia PDF Downloads 158
299 Signal Processing Techniques for Adaptive Beamforming with Robustness
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
Adaptive beamforming using an antenna array of sensors is useful for adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. For conventional adaptive array beamforming, we require prior information on either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to make a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer deteriorates under any steering angle error encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem of steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information and the received array data are utilized to iteratively estimate the actual direction vector of the desired signal. The estimated direction vector of the desired signal is then used to appropriately find the quiescent weight vector. The other projection matrix is set to be the signal-blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with existing robust techniques.
Keywords: adaptive beamforming, robustness, signal blocking, steering angle error
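To make the weight computation concrete, here is a minimal steered-beam (MVDR-type) beamformer in NumPy under a deliberate 2° steering mismatch; the array geometry, angles and signal levels are assumptions, and the paper's projection-based direction estimation is not reproduced here:

```python
# Steered-beam adaptive beamformer sketch with a presumed (mismatched)
# steering vector; all scenario parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
M, K = 8, 500                      # sensors, snapshots (assumptions)

def steering(theta_deg):
    """Uniform linear array, half-wavelength element spacing."""
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# desired signal at 0 deg, strong interferer at 40 deg, plus noise
s = steering(0.0)[:, None] * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
i = 3.0 * steering(40.0)[:, None] * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
n = 0.5 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
x = s + i + n

R = x @ x.conj().T / K             # sample covariance matrix
a = steering(2.0)                  # presumed steering vector (2 deg error)

Rinv_a = np.linalg.solve(R, a)
w = Rinv_a / (a.conj() @ Rinv_a)   # unit response in the presumed direction

y = w.conj() @ x                   # beamformer output
print(f"output power: {np.mean(np.abs(y) ** 2):.3f}")
```

With the 2° error, the minimum-power criterion starts to suppress the desired signal itself, which is exactly the degradation the proposed robust technique is designed to avoid.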
Procedia PDF Downloads 123
298 Performance and Specific Emissions of an SI Engine Using Anhydrous Ethanol–Gasoline Blends in the City of Bogota
Authors: Alexander García Mariaca, Rodrigo Morillo Castaño, Juan Rolón Ríos
Abstract:
The government of Colombia has promoted the use of biofuels over the last 20 years through laws and resolutions that regulate their use, with the objective of improving atmospheric air quality and promoting the Colombian agricultural industry. However, despite the use of blends of biofuels with fossil fuels, the air quality in large cities has not improved. This deterioration is mainly caused by mobile sources working with spark-ignition internal combustion engines (SI-ICE) operating with a blend, by volume, of 90% gasoline and 10% ethanol, called E10, which for the case of Bogota represents 84% of the fleet. Another problem is that Colombia has big cities located above 2200 masl, and there are no accurate studies on the impact that the E10 mixture could have on the emissions and performance of SI-ICE. This study aims to establish the optimal blend of gasoline and ethanol at which an SI engine operates most efficiently in urban centres located at 2600 masl. The tests were performed on a four-stroke, single-cylinder, naturally aspirated, carburetted SI engine using blends of gasoline and anhydrous ethanol in different ratios: E10, E15, E20, E40, E60, E85 and E100. These tests were conducted in the city of Bogota, which is located at 2600 masl, with the engine operating at 3600 rpm and at 25, 50, 75 and 100% of load. The results show that performance variables such as engine brake torque, brake power and brake thermal efficiency decrease, while brake-specific fuel consumption increases, as the percentage of ethanol in the mixture rises. On the other hand, the specific emissions of CO2 and NOx increase, while the specific emissions of CO and HC decrease, compared to those produced by gasoline. From the tests, it is concluded that the SI-ICE worked most efficiently with the E40 mixture, where an increase in brake power of 8.81% and a reduction in brake-specific fuel consumption of 2.5% were obtained, coupled with reductions in the specific emissions of CO2, HC and CO of 9.72, 52.88 and 76.66%, respectively, compared to the results obtained with the E10 blend. This behaviour is because the E40 mixture provides the appropriate amount of oxygen for the combustion process, which leads to better utilization of the available energy in this process, thus generating a power output comparable to the E10 mixture and producing lower CO and HC emissions than the other test blends. Nevertheless, the emission of NOx increases by 106.25%.
Keywords: emissions, ethanol, gasoline, engine, performance
Procedia PDF Downloads 323
297 Focus Group Study Exploring Researchers' Perspective on Open Science Policy
Authors: E. T. Svahn
Abstract:
Knowledge about the factors that influence the exchange between research and society is of the utmost importance for developing collaboration between different actors, especially in future science policy development and the creation of support structures for researchers. Among other things, this includes how researchers view the surrounding open science policy environment and what conditions and attitudes they have for interacting with it. This paper examines Finnish researchers' attitudes towards open science policies in 2020. Open science is an integrated part of researchers' daily lives and supports not only the effectiveness of research outputs but also the quality of research. In the ideal situation, open science policy is seen as a supporting structure that enables the exchange between research and society, but in other situations it can end up being red tape, generating obstacles and hindering the possibility of doing science in an efficient way. The data for this study were collected through focus group interviews. This qualitative research method was selected because it aims at understanding the phenomenon under study. In addition, focus group interviews produce diverse and rich material that would not be available with other research methods. Focus group interviews have well-established applications in social science, especially in understanding the perspectives and experiences of research subjects. In this study, focus groups were used to study the mindset and actions of researchers. Each group's size was between 4 and 10 people, and the aim was to bring out different perspectives on the subject. The interviewer enabled the presentation of different perceptions and opinions, and the focus group interviews were recorded and transcribed. The material was analysed using the grounded theory method. The results are presented as thematic areas, a theoretical model, and direct quotations. Attitudes towards open science policy can vary greatly depending on the research area. This study shows that open science policy demands in medicine, technology, and the natural sciences vary somewhat compared to the social sciences, educational sciences, and the humanities. The variation in attitudes between different research areas can thus be largely explained by the fact that research outputs and ethical codes vary significantly between subjects. This study aims to increase understanding of the nuances of the extent to which open science policies should be tailored for different disciplines and research areas.
Keywords: focus group interview, grounded theory, open science policy, science policy
Procedia PDF Downloads 155
296 Simon Says: What Should I Study?
Authors: Fonteyne Lot
Abstract:
SIMON (Study capacities and Interest Monitor) is a freely accessible online self-assessment tool that allows secondary education pupils to evaluate their interests and capacities in order to choose a post-secondary major that maximally suits their potential. The tool consists of two broad domains that correspond to two general questions pupils ask: 'What study fields interest me?' and 'Am I capable of succeeding in this field of study?'. The first question is addressed by a RIASEC-type interest inventory that links personal interests to post-secondary majors. Pupils are provided with a personal profile and an overview of majors with their degree of congruence. The output is dynamic: respondents can manipulate their scores and compare their results to the profiles of all fields of study. In that way, they are stimulated to explore the broad range of majors. To answer whether pupils are capable of succeeding in a preferred major, a battery of tests is provided. This battery comprises a range of factors that are predictive of academic success. Traditional predictors such as (educational) background and cognitive variables (mathematical and verbal skills) are included. Moreover, non-cognitive predictors of academic success (such as motivation, test anxiety, academic self-efficacy and study skills) are assessed. These non-cognitive factors are generally not included in admission decisions, although research shows they are incrementally predictive of success and are less discriminating. These tests inform pupils on potential causes of success and failure. More importantly, pupils receive their personal chances of success per major. These differential probabilities are validated through the underlying research on the academic success of students. For example, the research has shown that we can identify 22% of the failing students in psychology and educational sciences; in this group, our prediction is 95% accurate. SIMON leads more students to a suitable major, which in turn improves student success and retention. Apart from these benefits, the instrument grants insight into risk factors of academic failure. It also supports and fosters the development of evidence-based remedial interventions and therefore gives way to a more efficient use of resources.
Keywords: academic success, online self-assessment, student retention, vocational choice
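The "personal chances of success" idea can be sketched with a simple logistic model on cognitive and non-cognitive scores; the predictors, data and coefficients below are made-up illustrations, not the validated SIMON models:

```python
# Success-probability sketch on synthetic pupil data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 300
# columns: math skill, verbal skill, self-efficacy, test anxiety (synthetic)
X = rng.normal(size=(n, 4))
# synthetic pass/fail labels with assumed effect directions
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2] - 0.7 * X[:, 3]
passed = (logit + rng.logistic(size=n)) > 0

clf = LogisticRegression().fit(X, passed)
pupil = np.array([[0.4, 1.1, -0.2, 0.9]])    # hypothetical pupil profile
print(f"chance of success: {clf.predict_proba(pupil)[0, 1]:.0%}")
```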
Procedia PDF Downloads 403
295 Non-intrusive Hand Control of Drone Using an Inexpensive and Streamlined Convolutional Neural Network Approach
Authors: Evan Lowhorn, Rocio Alba-Flores
Abstract:
The purpose of this work is to develop a method for classifying hand signals and using the output in a drone control algorithm. To achieve this, methods based on Convolutional Neural Networks (CNNs) were applied. CNNs are a subset of deep learning that allows grid-like inputs to be processed and passed through a neural network to be trained for classification. This type of neural network allows for classification via imaging, which is less intrusive than previous methods using biosensors, such as EMG sensors. Classification CNNs operate purely on the pixel values in an image; therefore they can be used without additional exteroceptive sensors. A development bench was constructed using a desktop computer connected to a high-definition webcam mounted on a scissor arm. This allowed the camera to be pointed downwards at the desk to provide a constant solid background for the dataset and a clear detection area for the user. A MATLAB script was created to automate dataset image capture at the development bench and save the images to the desktop. This allowed the user to create their own dataset of 12,000 images within three hours. These images were evenly distributed among seven classes. The defined classes include forward, backward, left, right, idle, and land; the drone also has a popular flip function, which was included as an additional class. To simplify control, the corresponding hand signals chosen were the numerical hand signs for one through five for movements, a fist for land, and the universal "ok" sign for the flip command. Transfer learning with PyTorch (Python) was performed using a pre-trained 18-layer residual learning network (ResNet-18) to retrain the network for custom classification. An algorithm was created to interpret the classification and send encoded messages to a Ryze Tello drone over its 2.4 GHz Wi-Fi connection. The drone's movements were performed in half-meter distance increments at a constant speed. When combined with the drone control algorithm, the classification performed as desired, with negligible latency compared to the delay in the drone's movement commands.
Keywords: classification, computer vision, convolutional neural networks, drone control
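A minimal sketch of the transfer-learning step described above, using the torchvision ≥ 0.13 API: load a pre-trained ResNet-18 and retrain its final layer for the seven classes. The dataset path and hyperparameters are assumptions, not the authors' settings:

```python
# ResNet-18 transfer-learning sketch for the seven hand-signal classes.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

CLASSES = ["forward", "backward", "left", "right", "idle", "land", "flip"]

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# hypothetical folder layout: dataset/train/<class_name>/*.png
train_set = datasets.ImageFolder("dataset/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # new classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(5):                      # epoch count is an assumption
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

At inference time, the predicted class index would be mapped to one of the encoded Tello commands; that control loop is not shown here.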
Procedia PDF Downloads 210
294 Income and Factor Analysis of Small Scale Broiler Production in Imo State, Nigeria
Authors: Ubon Asuquo Essien, Okwudili Bismark Ibeagwa, Daberechi Peace Ubabuko
Abstract:
The broiler poultry subsector is dominated by small-scale production with low aggregate output. The high cost of inputs currently experienced in Nigeria tends to aggravate the situation; hence many broiler farmers struggle to break even. This study was designed to examine income and input factors in small-scale deep-litter broiler production in Imo State, Nigeria. Specifically, the study examined the socio-economic characteristics of small-scale poultry farmers producing broilers on deep litter; estimated the costs and returns of broiler production in the area; analyzed input factors in broiler production in the area; and examined the marketability age and profitability of the enterprise. A multi-stage sampling technique was adopted in selecting 60 small-scale broiler farmers who use the deep-litter system from 6 communities, through the use of a structured questionnaire. The socioeconomic characteristics of the broiler farmers and the profitability/marketability age of the birds were described using descriptive statistical tools such as frequencies, means and percentages. Gross margin analysis was used to analyze the costs of and returns to broiler production, while a Cobb-Douglas production function was employed to analyze input factors in broiler production. The results of the study revealed that the costs of feed (P<0.1), deep-litter material (P<0.05) and medication (P<0.05) had a significant positive relationship with the gross return of broiler farmers in the study area, while the costs of labour, fuel and day-old chicks were not significant. Furthermore, the gross profit margin of farmers who market their broilers in the 8th week of rearing was 80.7%, against 78.7% and 60.8% for farmers who market in the 10th and 12th weeks of rearing, respectively. The business is, therefore, profitable, but to varying degrees. Government and development partners should make deliberate efforts to curb the current rise in the prices of poultry feed, drugs and the timber materials used as bedding, so as to widen the profit margin and encourage more farmers to go into the business. The farmers equally need more technical assistance from extension agents with regard to timely and profitable marketing.
Keywords: broilers, factor analysis, income, small scale
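A Cobb-Douglas function of this kind is typically estimated by ordinary least squares on logs, ln(output) = b0 + Σ b_i ln(input_i), where each b_i is an output elasticity. A minimal sketch on synthetic stand-ins for the survey data:

```python
# Log-linear OLS estimation of a Cobb-Douglas production function.
import numpy as np

rng = np.random.default_rng(3)
n = 60
feed = rng.uniform(200, 800, n)         # cost of feed (synthetic)
litter = rng.uniform(20, 90, n)         # cost of deep-litter material (synthetic)
meds = rng.uniform(30, 120, n)          # cost of medication (synthetic)
# synthetic gross return built with known elasticities, for demonstration only
gross = np.exp(1.5 + 0.5 * np.log(feed) + 0.2 * np.log(litter)
               + 0.15 * np.log(meds) + rng.normal(0, 0.1, n))

A = np.column_stack([np.ones(n), np.log(feed), np.log(litter), np.log(meds)])
coefs, *_ = np.linalg.lstsq(A, np.log(gross), rcond=None)
for name, b in zip(["constant", "feed", "litter", "medication"], coefs):
    print(f"coefficient for {name:10s}: {b:.3f}")
```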
Procedia PDF Downloads 80
293 The Golden Bridge for Better Farmers Life
Authors: Giga Rahmah An-Nafisah, Lailatus Syifa Kamilah
Abstract:
Agriculture in Indonesia has improved in recent years. Since the election of the new president, whose work program prioritizes food self-sufficiency, many measures have been planned carefully, all to maximize agricultural production for the future. But if we look from another side, there is something missing: improvement of the livelihood of the farmers themselves. It is useless to fix the whole agricultural processing system to maximize output if the heroes of agriculture do not themselves move towards a better life. The broker, or middleman, system for agricultural produce is the real problem facing farmers' welfare. However large the harvest, if farmers sell it to middlemen at very low prices, there will be no progress in their welfare. The broker system should not persist in the current agricultural system because, while the farmers' condition remains a concern, the middlemen still reap as much profit as possible, no matter how hard the farmers work their land; meanwhile, competition from imports can no longer be avoided. This phenomenon is in plain sight of all who look, yet the farmers who fall victim to it can do nothing to change the system, as middlemen are often the only buyers of their produce: the only bridge in the economic life of the farmers. The problem is that we should strive for the welfare of the heroes of our food. A golden bridge that could save them is the government, which, with its powers, can stop the broker system more easily than any other party. The government should be the bridge connecting farmers with consumers, improving the broker system into one that buys agricultural produce at the highest prices from farmers and sells agricultural products at the lowest prices to consumers. What, then, of the fate of the middlemen? The broker system, in effect, resembles corruption: an activity that harms its victims without being noticed, continually enriching its operators while the victims' lives remain miserable. The government may transfer the middlemen's role into this new bridge, employing them as distributors of agricultural products, but under a new policy made by the government to keep improving the welfare of farmers. This idea alone will not transform farmers' welfare, but at the least it will rally many people behind helping farmers through the government, and awareness of it can spread quickly through daily conversation.
Keywords: broker system, farmers' lives, government, agricultural economics
Procedia PDF Downloads 294
292 Assessment of Climate Change Impacts on the Hydrology of Upper Guder Catchment, Upper Blue Nile
Authors: Fikru Fentaw Abera
Abstract:
Climate change alters regional hydrologic conditions and results in a variety of impacts on water resource systems. Such hydrologic changes will affect almost every aspect of human well-being. The goal of this paper is to assess the impact of climate change on the hydrology of the Upper Guder catchment, located in the northwest of Ethiopia. GCM-derived scenarios (HadCM3 A2a and B2a SRES emission scenarios) were used for the climate projection. The statistical downscaling model (SDSM) was used to generate possible future local meteorological variables in the study area. The downscaled data were then used as input to the Soil and Water Assessment Tool (SWAT) model to simulate the corresponding future streamflow regime in the Upper Guder catchment of the Abay River Basin. A semi-distributed hydrological model, SWAT, was developed, and Generalized Likelihood Uncertainty Estimation (GLUE) was utilized for uncertainty analysis. GLUE is linked with SWAT in the calibration and uncertainty program known as SWAT-CUP. The three benchmark periods simulated for this study were the 2020s, 2050s and 2080s. The time series generated by the HadCM3 GCM for A2a and B2a and by the Statistical Downscaling Model (SDSM) indicate a significant increasing trend in maximum and minimum temperature values and a slight increasing trend in precipitation for both the A2a and B2a emission scenarios at both the Gedo and Tikur Inch stations for all three benchmark periods. The hydrologic impact analysis performed with the downscaled temperature and precipitation time series as input to the hydrological model SWAT suggests increased flows under both the A2a and B2a emission scenarios. The model output shows that there may be an annual increase in flow volume of up to 35% for both emission scenarios in the three benchmark periods. All seasons show an increase in flow volume for both the A2a and B2a emission scenarios for all time horizons. Potential evapotranspiration in the catchment will also increase annually, on average by 3-15% for the 2020s and 7-25% for the 2050s and 2080s, for both the A2a and B2a emission scenarios.
Keywords: climate change, Guder sub-basin, GCM, SDSM, SWAT, SWAT-CUP, GLUE
Procedia PDF Downloads 364
291 PWM Harmonic Injection and Frequency-Modulated Triangular Carrier to Improve the Lives of the Transformers
Authors: Mario J. Meco-Gutierrez, Francisco Perez-Hidalgo, Juan R. Heredia-Larrubia, Antonio Ruiz-Gonzalez, Francisco Vargas-Merino
Abstract:
More and more applications involve power inverters connected to transformers, for example, facilities connecting renewable generation to the power grid. It is well known that the output signal of power inverters is not a pure sine wave. The harmonic content produces negative effects, one of which is the heating of electrical machines, which in turn affects the life of the machines. The decrease in transformer life can be calculated by the Arrhenius or Montsinger equation; analyzing this expression, any long-term decrease of 6-7 °C in a transformer's temperature doubles its life expectancy. Methodologies: This work presents a pulse width modulation (PWM) technique with harmonic injection and a triangular carrier that is frequency-modulated over the period of the modulating wave. This technique is used to improve the quality of the output voltage signal of PWM-controlled power inverters. The proposed technique increases the fundamental term and significantly reduces low-order harmonics, with the same number of commutations per period as conventional sine PWM control. To achieve this, the modulating wave is compared with a triangular carrier whose frequency varies over the period of the modulator. It is therefore advantageous for the modulating signal to have a large amount of sinusoidal "information" in the areas of greater sampling. A triangular signal with a frequency that varies over the modulator's period is used as the carrier, obtaining more samples in the areas with the greatest slope. A power inverter controlled by the proposed PWM technique is connected to a transformer. Results: In order to verify the derived thermal parameters under different operating conditions, another ambient and loading scenario, sampled from the same power transformer, is used for further verification. Temperatures of different parts of the transformer are reported for each PWM control technique analyzed. An assessment of the temperature is made for each PWM control technique, and hence the life of the transformer is calculated for each technique. Conclusion: This paper analyzes the transformer heating produced by the proposed technique and compares it with other forms of PWM control. It shows that reducing the harmonic content produces less transformer heating and, therefore, an increase in the life of the transformer.
Keywords: heating, power-inverter, PWM, transformer
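The cited life rule can be written as relative life = 2^(ΔT/6): every sustained 6-7 °C of cooling roughly doubles insulation life. A minimal sketch using the 6 °C doubling step taken from the abstract's own figure:

```python
# Montsinger-style life-doubling rule; the 6 degC doubling step follows the
# abstract's 6-7 degC figure and is an approximation, not a measured constant.
def relative_life(delta_t_celsius: float, doubling_step: float = 6.0) -> float:
    """Life multiplier for a sustained hot-spot temperature reduction."""
    return 2.0 ** (delta_t_celsius / doubling_step)

for dt in (3.0, 6.0, 12.0):
    print(f"{dt:4.1f} degC cooler -> life x {relative_life(dt):.2f}")
```

This is why even a modest reduction in harmonic heating, as achieved by the proposed PWM technique, translates into a substantial extension of transformer life.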
Procedia PDF Downloads 412
290 The Physical and Physiological Profile of Professional Muay Thai Boxers
Authors: Lucy Horrobin, Rebecca Fores
Abstract:
Background: Muay Thai is an increasingly popular combat sport worldwide. Further academic research in the sport will contribute to its professional development. This research sought to produce normative data on the physical and physiological characteristics of professional Muay Thai boxers, as currently no such data exist. The ultimate aim is to inform appropriate training programs and to facilitate coaching. Methods: N = 9 professional, adult, male Muay Thai boxers were assessed for the following anthropometric, physical and physiological characteristics, using validated methods of assessment: body fat, hamstring flexibility, maximal dynamic upper body strength, lower limb peak power, upper body muscular endurance and aerobic capacity. Raw data scores were analysed for mean, range and SD and, where applicable, were expressed relative to body mass (BM). Results: The results showed characteristics similar to those found in other combat sports. Low percentages of body fat (mean ± SD: 8.54 ± 1.16%) allow for optimal power-to-weight ratios. A highly developed aerobic capacity (mean ± SD: 61.56 ± 5.13 ml·kg⁻¹·min⁻¹) facilitates recovery and power maintenance throughout bouts. Lower limb peak power output values of (mean ± SD) 12.60 ± 2.09 W/kg indicate that Muay Thai boxers are among the most powerful combat sport athletes. However, maximal dynamic upper body strength scores of (mean ± SD) 1.14 ± 0.18 kg/kg were only in the 60th percentile of normative data for the general population, and muscular endurance scores (mean ± SD: 31.55 ± 11.95) and flexibility scores (mean ± SD: 19.55 ± 11.89 cm) showed wide standard deviations. These results might suggest that these characteristics are insignificant in Muay Thai or under-developed, perhaps due to deficient training programs. Implications: This research provides the first normative data on the physical and physiological characteristics of Muay Thai boxers. The findings of this study will aid trainers and coaches in designing effective evidence-based training programs. Furthermore, it provides a foundation for further research relating to physiology in Muay Thai. Areas for further study could be determining the physiological demands of a full-rules bout and the effects of evidence-based training programs on performance.
Keywords: fitness testing, Muay Thai, physiology, strength and conditioning
Procedia PDF Downloads 229
289 Environmental Performance Improvement of Additive Manufacturing Processes with Part Quality Point of View
Authors: Mazyar Yosofi, Olivier Kerbrat, Pascal Mognol
Abstract:
Life cycle assessment of additive manufacturing processes has evolved significantly over the past few years. Many existing studies focused mainly on energy consumption. Nowadays, new methodologies for life cycle inventory acquisition have come through the literature and help manufacturers to take into account all the input and output flows during the manufacturing step of the life cycle of products. Indeed, the environmental analysis of the phenomena that occur during the manufacturing step of additive manufacturing processes is becoming well understood, and it is now possible to count and measure accurately all the inventory data during the manufacturing step. Optimization of the environmental performance of processes can now be considered. Environmental performance improvement can be achieved by varying process parameters. However, many of these parameters (such as manufacturing speed, the power of the energy source, or the quantity of support material) directly affect the mechanical properties, surface finish and dimensional accuracy of a functional part. This study aims to improve the environmental performance of an additive manufacturing process without deterioration of part quality. For that purpose, the authors have developed a generic method that has been applied to multiple parts made by additive manufacturing processes. First, a complete analysis of the process parameters is made in order to identify which parameters affect only the environmental performance of the process. Then, multiple parts are manufactured by varying the identified parameters. The aim of this second step is to find the optimum values of the parameters that significantly decrease the environmental impact of the process while keeping the part quality as desired. Finally, a comparison between parts made with the initial parameters and with the changed parameters is made. The major finding claimed by the authors is the reduction of the environmental impact of an additive manufacturing process while respecting the three part-quality criteria: mechanical properties, dimensional accuracy and surface roughness. Now that additive manufacturing processes can be seen as mature from a technical point of view, environmental improvement of these processes can be considered while respecting the part properties. The first part of this study presents the methodology applied to multiple academic parts. Then, the validity of the methodology is demonstrated on functional parts.
Keywords: additive manufacturing, environmental impact, environmental improvement, mechanical properties
Procedia PDF Downloads 288
288 The Connection between the Schwartz Theory of Basic Values and Ethical Principles in Clinical Psychology
Authors: Matej Stritesky
Abstract:
The research deals with the connection between the Schwartz Theory of Basic Values and the ethical principles in psychology on which the meta-code of ethics of the European Federation of Psychological Associations is based. The research focuses on ethically problematic situations in clinical psychology in the Czech Republic. Based on the analysis of papers that identified ethically problematic situations faced by clinical psychologists, a questionnaire of ethically problematic situations in clinical psychology (EPSCP) was created for the purposes of the research. The questionnaire was created to represent situations that correspond to the 4 principles on which the meta-code of ethics of the European Federation of Psychological Associations is based. The EPSCP questionnaire consists of descriptions of 32 situations that respondents evaluate on a scale from 1 (the psychologist's behaviour is ethically perfectly fine) to 10 (the psychologist's behaviour is ethically completely unacceptable). The EPSCP questionnaire, together with Schwartz's PVQ questionnaire, will be presented to 60 psychology students. The relationship between the principles in clinical psychology and the values on Schwartz's value continuum will be described using multidimensional scaling. A positive correlation is assumed between the higher-order value of openness to change and problematic ethical situations related to the principle of integrity; a positive correlation between the higher-order value of self-transcendence and the principles of respect and responsibility; a positive correlation between the higher-order value of conservation and the principle of competence; and a negative correlation between the higher-order value of self-enhancement and sensitivity to ethically problematic situations. The research also includes an experimental part. The first half of the students are presented with the code of ethics of the Czech Association of Clinical Psychologists before completing the questionnaires, while the code of ethics is presented to the second half of the students after they complete the questionnaires. In addition to reading the code of ethics, students describe the three rules of the code of ethics that they consider most important and state why they chose these rules. The output of the experimental part will be to determine whether the presentation of the code of ethics leads to greater sensitivity to ethically problematic situations.
Keywords: clinical psychology, ethically problematic situations in clinical psychology, ethical principles in psychology, Schwartz theory of basic values
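The planned multidimensional scaling step can be sketched as follows: place the principles and the higher-order values in a common 2-D space from a dissimilarity matrix. The matrix below is a made-up illustration, not the study's data:

```python
# MDS sketch: embed principles and higher-order values from dissimilarities.
import numpy as np
from sklearn.manifold import MDS

labels = ["respect", "competence", "responsibility", "integrity",
          "openness", "self-transcendence", "conservation",
          "self-enhancement"]

rng = np.random.default_rng(5)
d = rng.uniform(0.2, 1.0, size=(8, 8))
dissim = (d + d.T) / 2.0                 # symmetric dissimilarity matrix
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for name, (x, y) in zip(labels, coords):
    print(f"{name:18s} ({x:+.2f}, {y:+.2f})")
```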
Procedia PDF Downloads 112287 High-Throughput Artificial Guide RNA Sequence Design for Type I, II and III CRISPR/Cas-Mediated Genome Editing
Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii
Abstract:
A huge revolution has emerged in genome engineering through the discovery of CRISPR (clustered regularly interspaced palindromic repeats) and CRISPR-associated system genes (Cas) in bacteria. The function of the type II Streptococcus pyogenes (Sp) CRISPR/Cas9 system has been confirmed in various species. Other S. thermophilus (St) CRISPR-Cas systems, CRISPR1-Cas and CRISPR3-Cas, have also been reported to prevent phage infection. The CRISPR1-Cas system interferes by cleaving foreign dsDNA entering the cell in a length-specific and orientation-dependent manner. The S. thermophilus CRISPR3-Cas system also acts by cleaving phage dsDNA genomes at the same specific position inside the targeted protospacer as observed in the CRISPR1-Cas system. It is worth mentioning that, for effective DNA cleavage activity, RNA-guided Cas9 orthologs require their own specific PAM (protospacer adjacent motif) sequences. Activity levels are based on the sequence of the protospacer and specific combinations of favorable PAM bases. Therefore, based on the specific length and sequence of the PAM together with a constant length of target site for the three orthologs of the Cas9 protein, a well-organized procedure is required for high-throughput and accurate mining of possible target sites in a large genomic dataset. Consequently, we created a reliable procedure to explore potential gRNA sequences for type I (Streptococcus thermophilus), II (Streptococcus pyogenes), and III (Streptococcus thermophilus) CRISPR/Cas systems. To mine CRISPR target sites, four different searching modes of sgRNA binding to the target DNA strand were applied: i) coding strand searching, ii) anti-coding strand searching, iii) both-strand searching, and iv) paired-gRNA searching (a minimal sketch of such a search is given below). The output of this procedure highlights the power of comparative genome mining for different CRISPR/Cas systems. It could yield a repertoire of Cas9 variants with expanded capabilities for gRNA design and will pave the way for further advances in genome and epigenome engineering.Keywords: CRISPR/Cas systems, gRNA mining, Streptococcus pyogenes, Streptococcus thermophilus
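As an illustration of the searching modes, here is a minimal Python sketch covering modes (i)-(iii), assuming the 3'-NGG PAM of S. pyogenes Cas9 and a 20-nt protospacer; the input sequence is a made-up example, and paired-gRNA searching (mode iv) would combine hits from opposite strands.

```python
import re

def find_targets(seq, pam=r"[ACGT]GG", protospacer_len=20):
    """Return (start, protospacer, pam, strand) hits on both strands.

    Assumes a 3'-NGG PAM as used by S. pyogenes Cas9; positions are
    reported in each strand's own 5'->3' coordinates.
    """
    comp = str.maketrans("ACGT", "TGCA")
    hits = []
    for strand, s in (("+", seq), ("-", seq.translate(comp)[::-1])):
        # Lookahead allows overlapping candidate sites to be reported.
        for m in re.finditer(rf"(?=([ACGT]{{{protospacer_len}}})({pam}))", s):
            hits.append((m.start(), m.group(1), m.group(2), strand))
    return hits

dna = "ATGCGTACGTTAGCCGATCGAGGTACCGGTTAACGGATCCGATCGTAGCTAGCTAAGGG"
for pos, proto, pam, strand in find_targets(dna):
    print(f"{strand} strand @ {pos}: {proto} | PAM {pam}")
```

Restricting the loop to the first tuple entry gives coding-strand searching alone (mode i), and to the second entry anti-coding-strand searching (mode ii); other Cas9 orthologs would swap in their own PAM pattern and site length.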
Procedia PDF Downloads 257286 Measuring the Resilience of e-Governments Using an Ontology
Authors: Onyekachi Onwudike, Russell Lock, Iain Phillips
Abstract:
The variability that exists across governments, their departments, and the provisioning of services has been an area of concern in the E-Government domain. There is a need for reuse and integration across government departments, which is accompanied by varying degrees of risks and threats. There is also a need for assessment, prevention, preparation, response, and recovery when dealing with these risks or threats. The ability of a government to cope with the emerging changes that occur within it is known as resilience. In order to forge ahead with concerted efforts to manage the risks and threats induced by reuse and integration, the ambiguities contained within resilience must be addressed. Enhancing resilience in the E-Government domain is synonymous with reducing the risks governments face in provisioning services as well as in reusing components across departments. Therefore, it can be said that resilience is responsible for the reduction in a government's vulnerability to changes. In this paper, we present the use of an ontology to measure the resilience of governments. This ontology is made up of a well-defined construct for the taxonomy of resilience. A specific class known as 'Resilience Requirements' is added to the ontology; this class embraces the concept of resilience within the E-Government domain ontology. Considering that the E-Government domain is a highly complex one, made up of different departments offering different services, the reliability and resilience of the domain have become more complex and critical to understand. We present questions that can help a government assess how prepared it is in the face of risks and what steps can be taken to recover from them. These questions can be asked with the use of queries (a minimal sketch is given below). The ontology includes a case study section that is used to explore ways in which government departments can become resilient to the different kinds of risks and threats they may face. A collection of resilience tools and resources has been developed in our ontology to encourage governments to take steps to prepare for the emergencies and risks that may accompany the integration of departments and the reuse of components across departments. To achieve this, the ontology has been extended with rules. We present two tools for understanding resilience in the E-Government domain as a risk analysis target, and the output of these tools when applied to resilience in the E-Government domain. We introduce the classification of resilience using the defined taxonomy and the modelling of existing relationships based on that taxonomy. The ontology is constructed on formal theory, and it provides a semantic reference framework for the concept of resilience. Key terms that fall under the purview of resilience with respect to E-Governments are defined; terms are made explicit, and the relationships that exist between risks and resilience are made explicit. The overall aim is for the ontology to be used within standards that would be followed by all governments for government-based resilience measures.Keywords: E-Government, Ontology, Relationships, Resilience, Risks, Threats
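A minimal sketch of such a preparedness query, under stated assumptions: the namespace, class, and property names (`Risk`, `threatens`, `hasResilienceRequirement`) are hypothetical illustrations rather than the paper's actual ontology vocabulary, and rdflib is used only as one convenient triple store.

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical vocabulary; the paper's actual class and property names
# (e.g. its 'Resilience Requirements' class) are assumed, not quoted.
EGOV = Namespace("http://example.org/egov-resilience#")

g = Graph()
g.add((EGOV.FloodOfRequests, RDF.type, EGOV.Risk))
g.add((EGOV.FloodOfRequests, EGOV.threatens, EGOV.TaxDepartment))
g.add((EGOV.TaxDepartment, EGOV.hasResilienceRequirement, EGOV.Failover))

# A preparedness question phrased as a SPARQL query: which departments
# face which risk, and what resilience requirement addresses it?
query = """
PREFIX egov: <http://example.org/egov-resilience#>
SELECT ?dept ?risk ?req WHERE {
    ?risk a egov:Risk ;
          egov:threatens ?dept .
    ?dept egov:hasResilienceRequirement ?req .
}
"""
for dept, risk, req in g.query(query):
    print(f"{str(dept).split('#')[-1]} faces {str(risk).split('#')[-1]}; "
          f"requirement: {str(req).split('#')[-1]}")
```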
Procedia PDF Downloads 337285 Conflation Methodology Applied to Flood Recovery
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The conflation is defined as a single distribution resulting from the normalized product of the individual probability density functions (a minimal numeric sketch is given below). The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.Keywords: community resilience, conflation, flood risk, nuisance flooding
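A minimal numeric sketch of the conflation itself, assuming two illustrative exponential recovery-time distributions; the scale parameters are invented for demonstration, not the paper's fitted values.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Two illustrative input distributions for recovery time in days.
severe = stats.expon(scale=180.0)    # recovery after severe flooding
nuisance = stats.expon(scale=20.0)   # recovery after nuisance flooding

# Conflation: the normalized product of the probability density functions.
norm, _ = quad(lambda t: severe.pdf(t) * nuisance.pdf(t), 0, np.inf)

def conflated_pdf(x):
    return severe.pdf(x) * nuisance.pdf(x) / norm

# For exponentials the conflation is again exponential, with rate equal
# to the sum of the input rates (1/180 + 1/20 = 1/18), so it resides
# between the parents and down-weights the higher-variance input.
mean, _ = quad(lambda t: t * conflated_pdf(t), 0, np.inf)
print(f"conflated mean recovery time ~ {mean:.1f} days")
```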
Procedia PDF Downloads 103284 Construction and Demolition Waste Management in Indian Cities
Authors: Vaibhav Rathi, Soumen Maity, Achu R. Sekhar, Abhijit Banerjee
Abstract:
The construction sector in India is extremely resource- and carbon-intensive and contributes significantly to national greenhouse gas emissions. At the resource end, the industry consumes significant portions of the output from mining. Resources such as sand and soil are the most exploited, and their rampant extraction is a constant source of impact on the environment and society. Cement is another resource used in abundance in building and construction, with a direct impact on limestone resources. Though India is rich in cement-grade limestone, efforts have to be made towards sustainable consumption of this resource to ensure future availability. The use of these resources in high volumes in India is a result of rapid urbanization. More cities have grown to a population of over one million in the last decade, and these million-plus cities are growing further. To cater to the needs of the growing urban population, construction activities are inevitable in the coming years, thereby increasing material consumption. Increased construction will also lead to a substantial increase in end-of-life waste generated from construction and demolition (C&D). Proper management of C&D waste therefore has the potential to reduce environmental pollution as well as contribute to resource efficiency in the construction sector. The present study deals with the estimation, characterisation, and documentation of current management practices for C&D waste in 10 Indian cities of different geographies and classes. Based on primary data, the study draws conclusions on the potential of C&D waste to be used as an alternative to primary raw materials. The estimation results show that India generates 716 million tons of C&D waste annually, placing the country as the second-largest C&D waste generator in the world after China. The study also aimed at the utilization of C&D waste in building materials: waste samples collected from various cities have been used to replace 100% of the stone aggregates in paver blocks without any decrease in strength. However, management practices for C&D waste in cities remain poor despite the notification of rules and regulations for C&D waste management, and only a few cities have managed to install processing plants and set up management systems. There is therefore immense opportunity for the management and reuse of C&D waste in Indian cities.Keywords: building materials, construction and demolition waste, cities, environmental pollution, resource efficiency
Procedia PDF Downloads 304283 Public Health Emergency Management (PHEM) to COVID-19 Pandemic in North-Eastern Part of Thailand
Authors: Orathai Srithongtham, Ploypailin Mekathepakorn, Tossaphong Buraman, Pontida Moonpradap, Rungrueng Kitpati, Chulapon Kratet, Worayuth Nak-ai, Suwaree Charoenmukkayanan, Peeranuch Keawkanya
Abstract:
The COVID-19 pandemic affected the health security of the Thai people, and the PHEM principle was essential to the surveillance, prevention, and control of COVID-19. This study aimed to present the process of prevention and control of COVID-19 from February 29, 2021 to April 30, 2022, and the factors and conditions that influenced the successful outcome. The study areas were three provinces. The target group was 37 people, composed of public health personnel. The data were collected through in-depth and group interviews following a non-structured interview guide and were analyzed by content analysis. The components of COVID-19 prevention and control found in the PHEM process were as follows: 1) an Emergency Operation Center (EOC) with an incident command system (ICS) from the district to the provincial level, to propose provincial measures; 2) a Provincial Communicable Disease Committee (PCDC) to decide on provincial measures; 3) measures for the surveillance, prevention, control, and treatment of COVID-19; and 4) outcomes and best practices for the surveillance and control of COVID-19. The success factors of 4S and EC were as follows: Space, preparing quarantine facilities (HQ, LQ), cohort wards (CW), field hospitals, and community and home isolation for patients and at-risk groups; Staff, a network from various organizations and groups covering community leaders and Health Volunteers (HV); Stuff, the management and sharing of medical and non-medical equipment; System, the COVID-19 response systems of the EOC, ICS, Joint Investigation Teams (JIT), and Communicable Disease Control Units (CDCU) for real-time monitoring of surveillance and control outputs; Environment, management in hospitals and the community following infection control (IC) principles; and Culture, in terms of social capital, where 'the relationship of Isan people' supported patients with good care and support. The structure of PHEM, Isan culture, and good preparation were significant factors in the three provinces.Keywords: public health, emergency management, covid-19, pandemic
Procedia PDF Downloads 81282 Electromagnetic Energy Harvesting by Using a Rectenna with a Metamaterial Lens
Authors: Ursula D. C. Resende, Fabiano S. Bicalho, Sandro T. M. Gonçalves
Abstract:
The growing demand for cheap and clean energy sources has motivated the study and development of distinct technologies and devices able to provide different amounts of energy. In order to supply energy to small loads, energy from the electromagnetic spectrum can be harvested. This possibility is particularly interesting because this kind of energy is constantly available in the environment, and the number of radiofrequency sources is permanently increasing due to advances in telecommunications services. A rectenna, which is a combination of an antenna and a rectifier circuit, is a device that can efficiently perform electromagnetic energy harvesting. However, since the amount of electromagnetic energy available in the environment is very small, only limited values of power can be harvested by the rectenna (a rough link-budget sketch is given below). Therefore, several technical strategies have been investigated in order to increase this amount of power. In this work, a metamaterial electromagnetic lens is used to improve the electromagnetic energy harvesting. The rectenna investigated was designed and optimized to charge a Li-ion battery using the electromagnetic energy from a commercial Wi-Fi internet router, model TL-WR841HP, operating at 2.45 GHz with a maximal output power of 18 dBm. The rectenna consists of a highly directive antenna, a voltage-doubler rectifier circuit, and a metamaterial lens. The printed antenna, constituted of two rectangular radiator elements, was designed and optimized using CST simulation software in order to obtain high directivity and S11 values below -10 dB at 2.45 GHz. The antenna was printed on a double-sided copper fiberglass substrate, FR4, with relative electric permittivity εr = 4.3 and loss tangent tan δ = 0.01. The rectifier circuit, which incorporates an impedance-matching network and uses the Schottky diode HSMS-2852, was designed and optimized using Advanced Design System (ADS) and built on the same FR4 substrate. The metamaterial cell is composed of two square split-ring resonators (S-SRR) and a thin wire, in order to operate with negative values of εr and relative magnetic permeability at 2.45 GHz. In order to evaluate the performance of the proposed rectenna, two experimental charging tests were performed, one without and the other with the metamaterial lens. The results obtained demonstrate that the electromagnetic lens was able to significantly increase the level of electric current delivered to the battery, by approximately 44%.Keywords: electromagnetic energy harvesting, electromagnetic lens, metamaterial, rectenna
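For a sense of why the harvestable power is small, here is a rough free-space link-budget sketch using the Friis equation. Only the 18 dBm router power and the 2.45 GHz frequency come from the text; both antenna gains are assumed illustrative values.

```python
import math

f = 2.45e9                    # Wi-Fi frequency (Hz), as stated
c = 3.0e8                     # speed of light (m/s)
p_tx_dbm = 18.0               # router output power (18 dBm, as stated)
g_tx_dbi = 3.0                # assumed router antenna gain
g_rx_dbi = 9.0                # assumed gain of the directive rectenna antenna

def received_power_dbm(d_m):
    # Friis equation in logarithmic form: Pr = Pt + Gt + Gr - FSPL
    fspl_db = 20 * math.log10(4 * math.pi * d_m * f / c)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

for d in (0.5, 1.0, 2.0):
    print(f"{d:4.1f} m: {received_power_dbm(d):6.1f} dBm")
```

At one metre this gives roughly -10 dBm (about 0.1 mW) before rectifier losses, which illustrates why a lens that concentrates the incident field can make a meaningful difference to the charging current.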
Procedia PDF Downloads 143281 Alteration Quartz-K-Feldspar-Apatite-Molybdenite at B Anomaly Prospection with Artificial Neural Network for Determining Molybdenite Economic Deposits in Malala District, Western Sulawesi
Authors: Ahmad Lutfi, Nikolas Dhega
Abstract:
The Malala deposit in northwest Sulawesi is the only known porphyry molybdenum occurrence, and the only source of rhenium, in Indonesia. The neural network method produces results that correspond very closely to those of the knowledge-based fuzzy logic method and the weights-of-evidence method. The method required data on solid geology, regional faults, airborne magnetics, gamma-ray surveys, and GIS layers. The interpretation of the network output fits the intuitive notion that a prospective area has characteristics closely resembling areas known to contain mineral deposits (a minimal sketch of such a network is given below). This contrasts with the weights-of-evidence and fuzzy logic methods, where, for a given grid location, each input-parameter value automatically results in an increase in the estimated prospectivity. In the Malala District, molybdenum anomalies in stream sediments were obtained over an area in excess of 15 km², with the Takudan Fault as the most prominent structure, striking 40° to 60° over a distance of about 30 km and in most places weakly anomalous. At anomaly B, mineralization is developed over an area of 4 km², with a 'shell' up to 50 m thick at the intrusive contact and minor mineralization occurring in the Tinombo Formation. A series of NW-trending, steeply dipping fracture zones, named the East Zone, has an estimated resource of 100 Mt at 0.14% MoS2 and a minimum target of 150 Mt at 0.25%. The Malala porphyries occur as stocks and dykes of predominantly granitic composition; the deposit belongs to the fluorine-poor class of molybdenum deposits and to the plutonic sub-type. Unidirectional solidification textures consist of subparallel, crenulated layers of quartz that are separated by layers of intrusive material. Notable features are the deuteric nature of the molybdenum mineralization and the dominance of carbonate alteration. Stage I is characterized by barren quartz-K-feldspar alteration, and Stage II by quartz-K-feldspar-apatite-molybdenite veins, combined with the presence of disseminated molybdenite with primary biotite in the host intrusive.Keywords: molybdenite, Malala, porphyries, anomaly B
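A minimal sketch of a prospectivity network of the kind described, under stated assumptions: the four evidential layers mirror the data types named above (geology, faults, airborne magnetics, gamma-ray), but all values, labels, and the architecture are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the evidential layers, gridded per cell.
rng = np.random.default_rng(42)
n_cells = 2000
X = np.column_stack([
    rng.integers(0, 5, n_cells),        # solid geology class code
    rng.exponential(2.0, n_cells),      # distance to regional fault (km)
    rng.normal(0.0, 1.0, n_cells),      # airborne magnetic anomaly (scaled)
    rng.gamma(2.0, 1.0, n_cells),       # gamma-ray response
])
# Toy labelling rule: "prospective" cells sit close to faults with a
# positive magnetic anomaly (purely illustrative training targets).
y = ((X[:, 1] < 1.0) & (X[:, 2] > 0.3)).astype(int)

Xs = StandardScaler().fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000,
                    random_state=0).fit(Xs, y)

# Output: a continuous prospectivity estimate per grid cell, analogous
# to the network output interpreted in the text -- high values mark cells
# resembling areas known to contain mineral deposits.
prospectivity = clf.predict_proba(Xs)[:, 1]
print("Top-5 most prospective cells:", np.argsort(prospectivity)[-5:])
```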
Procedia PDF Downloads 153