Search results for: possibility uncertainty
956 Developing Improvements to Multi-Hazard Risk Assessments
Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson
Abstract:
This paper outlines the approaches taken to assess multi-hazard risks. There is currently confusion in assessing multi-hazard impacts, so this study aims to determine which of the available options is the most useful. The paper uses an international literature search, an analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to adopt a straightforward approach. The paper is significant in that it interprets the various approaches and concludes with a preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes, it is often compounded by additional cascading hazards, so that people confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards (for example, Natural Hazard Triggering Technological disasters (Natech), such as fire, explosion, and toxic release). Multi-hazards have a more destructive impact on urban areas than any one hazard alone. In addition, climate change is creating links between different disasters, such as landslide dams and debris flows, leading to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper reviews the multi-hazard assessment methods published to date and categorizes the strengths and weaknesses of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of using multi-hazard risk assessments. 
To assess multi-hazard risk assessments, the current methods were first described. Next, the drawbacks of these methods were outlined. Finally, the improvements made to current multi-hazard risk assessments to date were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the need for more precise risk assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to over- or underestimation in resilience and recovery management actions. Keywords: cascading hazards, disaster assessment, multi-hazards, risk assessment
Procedia PDF Downloads 113
955 Assessment of Arterial Stiffness through Measurement of Magnetic Flux Disturbance and Electrocardiogram Signal
Authors: Jing Niu, Jun X. Wang
Abstract:
Arterial stiffness predicts mortality and morbidity independently of other cardiovascular risk factors and is a major risk factor for age-related morbidity and mortality. The non-invasive industry gold standard for measuring arterial stiffness uses the pulse wave velocity method. However, the desktop device is expensive and requires a trained professional to operate. The main objective of this research is a proof of concept of the proposed non-invasive method, which uses measurements of magnetic flux disturbance and the electrocardiogram (ECG) signal to assess arterial stiffness. The method could enable accurate and easy self-assessment of arterial stiffness at home and help doctors with research, diagnosis, and prescription in hospitals and clinics. A platform for assessing arterial stiffness through the acquisition and analysis of the radial artery pulse waveform and the ECG signal has been developed based on the proposed method. The radial artery pulse waveform is acquired using magnetic-based sensing technology, while the ECG signal is acquired using two dry-contact single-arm ECG electrodes. The measurement only requires the participant to wear a wrist strap and an arm band. Participants were recruited for data collection using both the developed platform and the industry gold standard system, and the results from both systems underwent correlation analysis. A strong positive correlation between the results of the two systems was observed. This study presents the possibility of developing an accurate, easy-to-use, and affordable measurement device for arterial stiffness assessment. Keywords: arterial stiffness, electrocardiogram, pulse wave velocity, magnetic flux disturbance
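The correlation assessment described in this abstract reduces to a Pearson coefficient between paired readings from the two systems. A minimal sketch follows; the paired pulse-wave-velocity values are invented for illustration and are not the study's data.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired pulse-wave-velocity readings (m/s):
# proposed magnetic-flux platform vs. the gold-standard device.
platform = [6.1, 7.4, 8.2, 9.0, 10.3, 11.1]
gold_std = [6.0, 7.2, 8.5, 9.1, 10.0, 11.4]
r = pearson_r(platform, gold_std)
print(f"r = {r:.3f}")  # a strong positive correlation gives r close to 1
```

A real validation would also report a significance level and, ideally, a Bland-Altman agreement analysis rather than correlation alone.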
Procedia PDF Downloads 188
954 Mobile Network Users Amidst Ultra-Dense Networks in 5G Using an Improved Coordinated Multipoint (CoMP) Technology
Authors: Johnson O. Adeogo, Ayodele S. Oluwole, O. Akinsanmi, Olawale J. Olaluyi
Abstract:
In 5G networks, very high traffic density, most especially in densely populated areas, is one of the key requirements. Radiation reduction becomes a major concern for securing the future health of mobile network users in ultra-dense network areas, and an improved coordinated multipoint technology can help. Coordinated Multi-Point (CoMP) is based on transmission and/or reception at multiple separated points, with improved coordination among them to actively manage interference for the users. Small cells serve two major objectives. First, they provide good coverage and/or performance: network users can maintain a good-quality signal by connecting directly to the cell. Second, with CoMP, multiple base stations (MBS) cooperate by transmitting and/or receiving at the same time in order to reduce the possibility of increased electromagnetic radiation. Therefore, the influence of a screen guard with a rubber condom on mobile transceivers, as one major piece of equipment emitting electromagnetic radiation, was investigated for mobile network users amidst ultra-dense networks in 5G. The results were compared with the same mobile transceivers without screen guards and rubber condoms under the same network conditions. A 5 cm distance from the mobile transceivers was measured with the help of a ruler, and the intensity of Radio Frequency (RF) radiation was measured using an RF meter. The results show that the intensity of radiation from the various mobile transceivers without screen guards and condoms was higher than from the mobile transceivers with screen guards and condoms while a call was in progress at both ends. Keywords: ultra-dense networks, mobile network users, 5G, coordinated multi-point
Procedia PDF Downloads 107
953 Virtue, Truth, Freedom, And The History Of Philosophy
Authors: Ashley DelCorno
Abstract:
G. E. M. Anscombe’s 1958 essay 'Modern Moral Philosophy' and the tradition of virtue ethics that followed have given rise to the restoration (or, more plainly, the resurrection) of Aristotle as something of an authority figure. Alasdair MacIntyre and Martha Nussbaum, for example, are proponents not just of Aristotle’s relevancy but also of his apparent implicit authority. That said, it is not clear that the schema imagined by virtue ethicists accurately describes moral life, or that it does not inadvertently work to impoverish genuine decision-making. If the label ‘virtue’ is categorically denied to some groups (while arbitrarily afforded to others), it can only turn on itself, thus rendering its own premise ridiculous. Likewise, as an inescapable feature of virtue ethics, Aristotelian binaries like ‘virtue/vice’ and ‘voluntary/involuntary’ offer up false dichotomies that may seriously compromise an agent’s ability to conceptualize choices that are truly free and rooted in meaningful criteria. Here, this topic is analyzed through a feminist lens predicated on the known paradoxes of patriarchy. The work of feminist theorists Jacqui Alexander, Katharine Angel, Simone de Beauvoir, bell hooks, Audre Lorde, Imani Perry, and Amia Srinivasan serves as important guideposts, and the argument here is built from a key tenet of black feminist thought regarding scarcity and possibility. Above all, it is clear that though the philosophical tradition of virtue ethics presents itself as recovering the place of agency in ethics, its premises carry serious limitations toward the achievement of this goal. These include, most notably, virtue ethics’ binding analysis of history, its axiomatic attachment to obligatory clauses, its problematic reading-in of Aristotle, and its arbitrary commitment to predetermined and competitively patriarchal ideas of what counts as a virtue. Keywords: feminist history, the limits of utopic imagination, curatorial creation, truth, virtue, freedom
Procedia PDF Downloads 83
952 Reduction of Plutonium Production in Heavy Water Research Reactor: A Feasibility Study through Neutronic Analysis Using MCNPX2.6 and CINDER90 Codes
Authors: H. Shamoradifar, B. Teimuri, P. Parvaresh, S. Mohammadi
Abstract:
One of the main characteristics of heavy water moderated reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a Heavy Water Research Reactor. Among the many ways of reducing plutonium production in a heavy water reactor, this research focuses on changing the fuel from natural uranium to mixed thorium-uranium fuel. The main fissile nucleus in thorium-uranium fuels is U-233, which is produced after neutron absorption by Th-232, so thorium-uranium fuels have some known advantages over uranium fuels. Accordingly, four thorium-uranium fuels with different composition ratios were chosen for our simulations: a) 10% UO2 - 90% ThO2 (enrichment = 20%); b) 15% UO2 - 85% ThO2 (enrichment = 10%); c) 30% UO2 - 70% ThO2 (enrichment = 5%); d) 35% UO2 - 65% ThO2 (enrichment = 3.7%). Natural uranium oxide (UO2) is considered the reference fuel; in other words, all of the calculated data are compared with the corresponding data for uranium fuel. Neutronic parameters were calculated and used as the comparison parameters. All calculations were performed by a Monte Carlo (MCNPX2.6) steady-state reaction rate calculation linked to a deterministic depletion calculation (CINDER90). The computational data obtained showed that thorium-uranium fuels with the four different fissile composition ratios can satisfy the safety and operating requirements for a Heavy Water Research Reactor. Furthermore, thorium-uranium fuels have very good proliferation resistance and consume less fissile material than uranium fuels over the same reactor operation time. Using mixed thorium-uranium fuels reduced the long-lived α-emitting, highly radiotoxic wastes and the radiotoxicity level of the spent fuel. Keywords: heavy water reactor, burnup, minor actinides, neutronic calculation
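As a back-of-the-envelope check on the four mixtures listed in the abstract, the effective U-235 share of each fuel can be estimated as the UO2 fraction times its enrichment. This is an editorial illustration only (it ignores oxide mass differences and is not part of the study's neutronic calculation):

```python
# (UO2 fraction, enrichment) for the four Th-U mixtures in the abstract
mixtures = {
    "a) 10% UO2 - 90% ThO2": (0.10, 0.20),
    "b) 15% UO2 - 85% ThO2": (0.15, 0.10),
    "c) 30% UO2 - 70% ThO2": (0.30, 0.05),
    "d) 35% UO2 - 65% ThO2": (0.35, 0.037),
}
for name, (uo2, enr) in mixtures.items():
    # effective fissile (U-235) share of the heavy-metal oxide mixture
    print(f"{name}: {uo2 * enr * 100:.2f}% U-235")
```

The estimate shows the four cases span roughly 1.3% to 2% effective fissile content, which is why they are comparable to natural-uranium loading in the reference case.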
Procedia PDF Downloads 246
951 A Patient Passport Application for Adults with Cystic Fibrosis
Authors: Tamara Vagg, Cathy Shortt, Claire Hickey, Joseph A. Eustace, Barry J. Plant, Sabin Tabirca
Abstract:
Introduction: Paper-based patient passports have been used advantageously for older patients, patients with diabetes, and patients with learning difficulties. However, these passports can experience issues with data security, patients forgetting to bring the passport, patients being over-encumbered, and uncertainty over who is responsible for entering and managing the data in the passport. These issues could be resolved by transferring the paper-based system to a convenient platform such as a smartphone application (app). Background: Life expectancy for some Cystic Fibrosis (CF) patients is rising, and as such, new complications and procedures are predicted. Subsequently, there is a need for education and management interventions that can benefit CF adults. This research proposes a CF patient passport that records basic medical information in a smartphone app, giving CF adults access to that information. Aim: To provide CF patients with their basic medical information via mobile multimedia so that they can receive care when travelling abroad or between CF centres. Moreover, by recording their basic medical information, CF patients may become more aware of their own condition and more active in their health care. Methods: The app was designed by a CF multidisciplinary team to be a lightweight reflection of a hospital patient file. The passport app was created using PhoneGap so that it can be deployed to both Android and iOS devices. Data entered into the app is encrypted and stored locally only. The app is password protected and includes the ability to set reminders and a graph to visualise weight and lung function over time. The app was introduced to seven participants as part of a stress test. The participants were asked to test the performance and usability of the app and report any issues identified. 
Results: Feedback and suggestions received via this testing included the ability to reorder the list of clinical appointments by date, an open format for recording dates (in the event that specifics are unknown), and a drop-down menu for data that is difficult to enter (such as bugs found in mucus). The app was found to be usable and accessible and is now being prepared for a pilot study with adult CF patients. Conclusions: It is anticipated that such an app will be beneficial to adult CF patients when travelling abroad and between CF centres. Keywords: cystic fibrosis, digital patient passport, mHealth, self-management
Procedia PDF Downloads 254
950 Informal Economy: Case Study of Street Vendors in Bangkok
Authors: Kangrij Roeksiripat
Abstract:
Street vending is one of the informal economy activities considered significant to Thai people in economic and day-to-day social life. Street vendors have long been believed to be a group of poor and uneducated people. With the increasing number of street vendors occupying space on public sidewalks, especially in central business districts, it becomes unclear whether street vending continues to serve as a solution to unemployment for excess labour. This research attempts to study and analyze the types of street vendors in Bangkok under the informal economy framework. The debate on the heterogeneous informal economy has been categorized into four schools: dualism, structuralism, legalism, and voluntarism. The examination also draws on market concepts, applying Porter’s Five Forces of Competitive Position Model, together with interviews with street vendors in three case study areas: the inner zone (Pathumwan district, the sidewalk opposite Siam Paragon mall), the middle zone (Ramkhamhaeng district, the sidewalk opposite Ramkhamhaeng University), and the outer zone (Minburi district, the sidewalk of Sriburanukit Road). The results indicate that most street vendors in Siam Square voluntarily choose to make a living by vending on the sidewalk and tend to take it up as a long-term occupation even though they could be in formal wage employment. Moreover, average income and a positive attitude towards self-employment are the important factors that drive them to operate street vending businesses. Meanwhile, street vending is often a family enterprise in the Ramkhamhaeng area, and most vendors do not wish to transform their businesses into the formal sector. The survey conducted on Sriburanukit Road reveals that almost all street vendors migrated from other provinces and were previously paid as unskilled workers in formal sectors. 
They moved to informal trades because of the uncertainty of employment in the mainstream sectors and inconsistent income, supported by the know-how of friends and relatives from the same hometown. In particular, the results reveal a common pattern: street vending is the very first occupation of some groups of vendors, and they will continue to engage in this activity. Thus, it is important for the government to design an optimal policy that not only integrates informal workers into the formal economy but also monitors the enforcement of regulations on the modern informal economy. Keywords: informal economy, sidewalks, street vendors, occupation
Procedia PDF Downloads 286
949 Application of Remote Sensing and In-Situ Measurements for Discharge Monitoring in Large Rivers: Case of Pool Malebo in the Congo River Basin
Authors: Kechnit Djamel, Ammarri Abdelhadi, Raphael Tshimang, Mark Trigg
Abstract:
One of the most important aspects of monitoring rivers is navigation. Variations in river discharge generally produce a change in the available draft for a vessel, particularly in the low-flow season; this can affect the navigable water path, especially when the water depth falls below the level that allows safe navigation for boats. The water depth is related to the bathymetry of the channel as well as to the discharge. For a seasonal update of the navigation maps, a daily discharge value is required. Many novel approaches based on earth observation and remote sensing have been investigated for large rivers. However, most of these approaches cannot currently estimate river discharge directly. This paper discusses the application of remote sensing tools, analysing the reflectance values of MODIS imagery combined with field measurements, for the estimation of discharge. The approach is applied in the lower reach of the Congo River (Pool Malebo) for the period between 2019 and 2021. The correlation between the discharge observed at the gauging station and the reflectance ratio time series is 0.81. In this context, a Discharge Reflectance Model (DRM) was developed to express discharge as a function of reflectance. The model introduces a non-contact method that allows discharge monitoring using earth observation. The DRM was validated by field measurements using an ADCP in different sections of the Pool Malebo, over two different periods (dry and wet seasons), as well as by the observed discharge at the gauging station. The error between the estimated and measured discharge values ranges from 1% to 8% for the ADCP and from 1% to 11% for the gauging station. A study of the uncertainties will make it possible to judge the robustness of the DRM. Keywords: discharge monitoring, navigation, MODIS, empiric, ADCP, Congo River
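At its core, the Discharge Reflectance Model described above is a regression of gauged discharge on a MODIS reflectance ratio. A minimal ordinary-least-squares sketch follows; the abstract does not state the DRM's functional form, so a linear fit and all numbers below are illustrative assumptions, not the Congo data.

```python
def fit_linear(x, y):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical MODIS reflectance ratios and gauged discharges (m^3/s)
ratio = [0.80, 0.95, 1.10, 1.25, 1.40]
q_obs = [30500, 35200, 40100, 44800, 50000]
a, b = fit_linear(ratio, q_obs)
q_est = [a * r + b for r in ratio]
rel_err = [abs(e - o) / o for e, o in zip(q_est, q_obs)]
print(f"Q = {a:.0f} * ratio + {b:.0f}; max relative error = {max(rel_err):.1%}")
```

In practice the fitted model would be validated against held-out ADCP transects, as the study does, rather than against its own training points.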
Procedia PDF Downloads 92
948 Attempts for the Synthesis of Indol-Ring Fluorinated Tryptophan Derivatives to Enhance the Activity of Antimicrobial Peptides
Authors: Anita K. Kovacs, Peter Hegyes, Zsolt Bozso, Gabor Toth
Abstract:
Fluorination has been used extensively by the pharmaceutical industry as a strategy to improve the pharmacokinetics of drugs, owing to its effectiveness in increasing the potency of antimicrobial peptides (AMPs). Multiply fluorinated indole-ring-containing tryptophan derivatives have the potential for better antimicrobial activity than the widely used mono-fluorinated indole-ring-containing tryptophan derivatives, but they are not commercially available. Therefore, our goal is to synthesize multiply fluorinated indole-ring-containing tryptophan derivatives and incorporate them into AMPs to enhance their antimicrobial activity. During our work, we have tried several methods (classical organic synthesis, enzymatic synthesis, and solid-phase peptide synthesis) for the synthesis of the said compounds, with mixed results. With classical organic synthesis (four different routes), we did not get the desired results. The reaction of serine with a substituted indole in the presence of acetic anhydride led to racemic tryptophan; the reaction of protected serine with indole in the presence of a nickel complex was unsuccessful; the reaction of a serine-containing protected dipeptide with disuccinimidyl carbonate yielded a tryptophan-containing dipeptide, whose chiral purity is being examined; and the reaction of an alcohol with a substituted indole in the presence of a copper complex was successful, but only as a test reaction, and we could not reproduce the same result with serine. The ongoing tryptophan-synthase method has shown some potential, but our work on it has not yet been finished. The successful synthesis of the desired multiply fluorinated indole-ring-containing tryptophan will be followed by solid-phase peptide synthesis in order to incorporate it into AMPs and enhance their antimicrobial activity. 
The successful completion of these phases will open the possibility of manufacturing new, effective AMPs. Keywords: halogenation, fluorination, tryptophan, enhancement of antimicrobial activity
Procedia PDF Downloads 98
947 Silent Culminations in Operas Aida and Mazeppa
Authors: Stacy Jarvis
Abstract:
A silent culmination is a musical technique that creates or increases tension in a piece of music. It is a type of cadence in which the music gradually builds to a climax but suddenly stops without any resolution. This technique can create suspense and anticipation in the listener as they wait to find out what will happen next. It can also draw attention to a particular element of the music, such as a particular instrument or vocal line. By not resolving the tension created, silent culminations can evoke a sense of mystery or ambiguity. The technique has been used by composers of all musical genres, from classical to jazz, as well as in film scores, and it can make a piece of music more dynamic and exciting. Verdi’s Aida is a classic example of the use of silent culminations to create tension and suspense. Throughout the opera, Verdi gradually builds to a climax, only to stop abruptly without any resolution. This technique brings out the drama and intensity of the story and creates anticipation for the climactic moments. For example, at the end of the second act, Verdi reaches a crescendo of tension as Aida and Radamès swear their undying love for one another, only to stop with a moment of silence. The technique also helps to draw attention to the important moments in the story, such as the duets between Aida and Radamès. By stopping the music just before it resolves, Verdi creates an atmosphere of anticipation and suspense that carries through to the end of the opera. Silent culminations are used extensively in Aida and are integral to Verdi’s dramatic style. In his opera Mazeppa, Tchaikovsky uses silent culminations to emphasize the drama and powerful emotions of the piece. The work begins with a gentle introduction but quickly builds to a powerful climax. Throughout, Tchaikovsky uses silent culminations to create tension and suspense, drawing the listener in and heightening the intensity of the music. 
The most dramatic moment of the piece comes when the music builds to a frantic climax and then suddenly cuts out, leaving the listener hanging in anticipation of what will happen next. In addition, the use of silent culminations helps to emphasize the strong emotions of the piece, such as fear, horror, and despair. By not resolving the tension, the listener is left with a feeling of uneasiness and uncertainty that helps to convey the story of Mazeppa’s tragic fate. Keywords: Verdi, Tchaikovsky, opera, culmination
Procedia PDF Downloads 96
946 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model
Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an Interventional Radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. To prevent these deterministic effects, an accurate calculation of the patient's skin-dose mapping is essential. For most machines, the Dose Area Product (DAP) and fluoroscopy time are the only information available to the operator, and these two parameters are a very poor indicator of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. If a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons between the skin-dose mapping of our mathematical model and clinical measurements were performed. 1. First, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were conducted on real patients during real Chronic Total Occlusion (CTO) procedures, for a total of 80 cases. Gafchromic films were placed at the backs of the patients. We compared the dose values, as well as the distribution and shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison of the dose values shows a difference of less than 15%. 
Moreover, our model shows very good geometric accuracy: all fields have the same shape, size, and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached. Thus, deterministic effects can be avoided. Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping
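The warning-alert logic mentioned in the abstract can be sketched as a threshold check on the accumulated peak skin dose. The abstract does not give the trigger level; the 2 Gy value used below is a commonly cited threshold for transient skin erythema in interventional procedures, and the per-field doses and grid cells are invented for illustration.

```python
ALERT_THRESHOLD_GY = 2.0  # assumed trigger level (transient erythema region)

def peak_skin_dose(fields):
    """fields: list of (dose_gy, cell) irradiation events on a coarse skin grid.
    Accumulate dose per skin cell and return the peak cell dose."""
    grid = {}
    for dose, cell in fields:
        grid[cell] = grid.get(cell, 0.0) + dose
    return max(grid.values())

# Hypothetical sequence of irradiation fields during a long CTO procedure;
# cells are (row, col) indices on the back-of-patient skin map.
fields = [(0.6, (3, 4)), (0.9, (3, 4)), (0.4, (2, 4)), (0.7, (3, 4))]
peak = peak_skin_dose(fields)
if peak >= ALERT_THRESHOLD_GY:
    print(f"ALERT: peak skin dose {peak:.1f} Gy exceeds {ALERT_THRESHOLD_GY} Gy")
```

The study's actual model additionally reconstructs the shape and location of each field from machine parameters; this sketch only shows the accumulate-and-compare step behind the alerts.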
Procedia PDF Downloads 141
945 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie Curie ITN project and focuses on the identification of the unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is a primary task in the absence of any knowledge of the model parameters that should be used. In this framework, we propose identifying the model parameters by minimizing a cost function measuring the difference between experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to find the unknowns of the AWJM model and the optimal values that could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to artificial data, we show that the parameter identification problem is in fact highly unstable and strongly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect the proper identification of the unknowns. 
This approach also gives us the ability to extend the research to more complex cases, considering different types of model and measurement errors as well as a 3D time-dependent model with variations of the jet feed speed. Keywords: abrasive waterjet milling, inverse problem, model parameters identification, regularization
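The identification-by-minimization idea described above can be illustrated on a toy problem. Instead of the study's adjoint gradient (computed with TAPENADE), this sketch uses a central finite-difference gradient and a single unknown parameter with Tikhonov regularization; the "trench profile" model below is invented for illustration, not the AWJM PDE.

```python
def identify(model, x, d_obs, p0, lam=1e-3, lr=0.05, iters=2000):
    """Gradient descent on J(p) = sum (model(x,p) - d_obs)^2 + lam*(p - p0)^2,
    with a central finite-difference gradient standing in for the adjoint."""
    p, h = p0, 1e-6

    def J(p):
        misfit = sum((model(xi, p) - di) ** 2 for xi, di in zip(x, d_obs))
        return misfit + lam * (p - p0) ** 2  # Tikhonov term stabilizes the problem

    for _ in range(iters):
        g = (J(p + h) - J(p - h)) / (2 * h)
        p -= lr * g
    return p

# Toy "trench profile": depth decays away from the jet centerline,
# with one unknown width parameter k (synthetic truth: k = 2.0)
model = lambda x, k: 1.0 / (1.0 + k * x * x)
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
d_obs = [model(xi, 2.0) for xi in xs]
k_hat = identify(model, xs, d_obs, p0=1.0)
print(f"identified k = {k_hat:.3f}")
```

With noise-free data the regularization slightly biases the estimate toward the prior p0, which mirrors the trade-off the abstract describes between stability and identification correctness.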
Procedia PDF Downloads 317
944 Aerosol Radiative Forcing Over Indian Subcontinent for 2000-2021 Using Satellite Observations
Authors: Shreya Srivastava, Sushovan Ghosh, Sagnik Dey
Abstract:
Aerosols directly affect Earth’s radiation budget by scattering and absorbing incoming solar radiation and outgoing terrestrial radiation. While the uncertainty in aerosol radiative forcing (ARF) has decreased over the years, it is still higher than that of greenhouse gas forcing, particularly in the South Asian region, due to the high heterogeneity of aerosol chemical properties. Understanding the spatio-temporal heterogeneity of aerosol composition is critical to improving climate prediction. Studies using satellite data, in-situ and aircraft measurements, and models have investigated the spatio-temporal variability of aerosol characteristics. In this study, we have taken aerosol data from the Multi-angle Imaging SpectroRadiometer (MISR) level-2 version 23 aerosol product, retrieved at 4.4 km, and radiation data from the Clouds and the Earth’s Radiant Energy System (CERES, spatial resolution = 1° x 1°) for 21 years (2000-2021) over the Indian subcontinent. The MISR aerosol product includes size- and shape-segregated aerosol optical depth (AOD), the Angstrom exponent (AE), and single scattering albedo (SSA). Additionally, 74 aerosol mixtures are included in the version 23 data, which is used for aerosol speciation. We have seasonally mapped aerosol optical and microphysical properties from MISR for India at quarter-degree resolution. The results show strong spatio-temporal variability, with consistently higher AOD over the Indo-Gangetic Plain (IGP). The contribution of small-size particles is higher throughout the year, especially during winter months. SSA is found to be overestimated where absorbing particles are present. The climatological map of shortwave (SW) ARF at the top of the atmosphere (TOA) shows strong cooling except in a few places (values ranging from +2.5 to -22.5 W/m²). Cooling due to aerosols is higher in the absence of clouds. Higher negative ARF values are found over the IGP region, given the high aerosol concentration there. 
Surface ARF values are negative everywhere in our study domain, with larger magnitudes under clear-sky conditions. The results show a strong correlation between AOD from MISR and ARF from CERES. Keywords: aerosol radiative forcing (ARF), aerosol composition, single scattering albedo (SSA), CERES
Procedia PDF Downloads 54
943 Evidence of a Negativity Bias in the Keywords of Scientific Papers
Authors: Kseniia Zviagintseva, Brett Buttliere
Abstract:
Science is fundamentally a problem-solving enterprise, and scientists pay more attention to negative things, which cause dissonance and a negative affective state of uncertainty or contradiction. While this is agreed upon by philosophers of science, there are few empirical demonstrations. Here we examine the keywords from the papers published by PLOS in 2014 and show, with several sentiment analyzers, that negative keywords are studied more than positive keywords. Our dataset is the 927,406 keywords of 32,870 scientific articles in all fields published in 2014 by the journal PLOS ONE (collected from Altmetric.com). By counting how often the 47,415 unique keywords are used, we can examine whether negative topics are studied more than positive ones. To find the sentiment of the keywords, we utilized two sentiment analysis tools, Hu and Liu (2004) and SentiStrength (2014). The results below are for Hu and Liu, as these are the less convincing results. The average keyword was utilized 19.56 times, with half of the keywords being utilized only once and the maximum number of uses being 18,589. The keywords identified as negative were utilized 37.39 times on average, the positive keywords 14.72 times, and the neutral keywords 19.29 times. This difference is only marginally significant, with an F value of 2.82 and a p of .05, but one must keep in mind that more than half of the keywords are utilized only once, artificially increasing the variance and driving the effect size down. To examine more closely, we looked at the top 25 most utilized keywords that have a sentiment. Among the top 25, there are only two positive words, ‘care’ and ‘dynamics’, at positions 5 and 13 respectively, with all the rest identified as negative. ‘Diseases’ is the most studied keyword, with 8,790 uses, and ‘cancer’ and ‘infectious’ are the second and fourth most utilized sentiment-laden keywords. 
The sentiment analysis is not perfect, though, as the words ‘diseases’ and ‘disease’ are split, taking the 1st and 3rd positions; combining them, they remain the most common sentiment-laden keyword, utilized 13,236 times. Beyond splitting words, the sentiment analyzer logs ‘regression’ and ‘rat’ as negative, and these should probably be considered false positives. Despite these potential problems, the effect is apparent, as even positive keywords like ‘care’ could or should be considered negative, since this word is most commonly utilized as part of ‘health care’, ‘critical care’, or ‘quality of care’ and is generally associated with how to improve it. All in all, the results suggest that negative concepts are studied more, providing support for the notion that science is most generally a problem-solving enterprise. The results also provide evidence that negativity and contradiction are related to greater productivity and positive outcomes. Keywords: bibliometrics, keywords analysis, negativity bias, positive and negative words, scientific papers, scientometrics
Procedia PDF Downloads 188942 Threshold Sand Detection Limits for Acoustic Monitors in Multiphase Flow
Authors: Vinod Ponnagandla, Brenton McLaury, Siamack Shirazi
Abstract:
Sand production can lead to deposition of particles or erosion. Low production rates resulting in deposition can partially clog systems and cause under deposit corrosion. Commercially available nonintrusive acoustic sand detectors are attractive as they claim to detect sand production. Acoustic sand detectors are used during oil and gas production; however, operators often do not know the threshold detection limits of these devices. It is imperative to know the detection limits to appropriately plan for cleaning of separation equipment or examine risk of erosion. These monitors are based on detecting the acoustic signature of sand as the particles impact the pipe walls. The objective of this work is to determine threshold detection limits for acoustic sand monitors that are commercially available. The minimum threshold sand concentration that can be detected in a pipe are determined as a function of flowing gas and liquid velocities. A large scale flow loop with a 4-inch test section is utilized. Commercially available sand monitors (ClampOn and Roxar) are evaluated for different flow regimes, sand sizes and pipe orientation (vertical and horizontal). The manufacturers’ recommend that the monitors be placed on a bend to maximize the number of particle impacts, so results are shown for monitors placed at 45 and 90 degree positions in a bend. Acoustic sand monitors that clamp to the outside of pipe are passive and listen for solid particle impact noise. The threshold sand rate is calculated by eliminating the background noise created by the flow of gas and liquid in the pipe for various flow regimes that are generated in horizontal and vertical test sections. The average sand sizes examined are 150 and 300 microns. For stratified and bubbly flows the threshold sand rates are much higher than other flow regimes such as slug and annular flow regimes that are investigated. 
However, the background noise generated by slug flow regime is very high and cause a high uncertainty in detection limits. The threshold sand rates for annular flow and dry gas conditions are the lowest because of high gas velocities. The effects of monitor placement around elbows that are in vertical and horizontal pipes are also examined for 150 micron. The results show that the threshold sand rates that are detected in vertical orientation are generally lower for all various flow regimes that are investigated.Keywords: acoustic monitor, sand, multiphase flow, threshold
Procedia PDF Downloads 409941 Assessment of Soil Contamination on the Content of Macro and Microelements and the Quality of Grass Pea Seeds (Lathyrus sativus L.)
Authors: Violina R. Angelova
Abstract:
Comparative research has been conducted to allow us to determine the content of macro and microelements in the vegetative and reproductive organs of grass pea and the quality of grass pea seeds, as well as to identify the possibility of grass pea growth on soils contaminated by heavy metals. The experiment was conducted on an agricultural field subjected to contamination from the Non-Ferrous-Metal Works (MFMW) near Plovdiv, Bulgaria. The experimental plots were situated at different distances of 0.5 km and 8 km, respectively, from the source of pollution. On reaching commercial ripeness the grass pea plants were gathered. The composition of the macro and microelements in plant materials (roots, stems, leaves, seeds), and the dry matter content, sugars, proteins, fats and ash contained in the grass pea seeds were determined. Translocation factors (TF) and bioaccumulation factor (BCF) were also determined. The quantitative measurements were carried out through inductively-coupled plasma (ICP). The grass pea plant can successfully be grown on soils contaminated by heavy metals. Soil pollution with heavy metals does not affect the quality of the grass pea seeds. The seeds of the grass pea contain significant amounts of nutrients (K, P, Cu, Fe Mn, Zn) and protein (23.18-29.54%). The distribution of heavy metals in the organs of the grass pea has a selective character, which reduces in the following order: leaves > roots > stems > seeds. BCF and TF values were greater than one suggesting efficient accumulation in the above ground parts of grass pea plant. Grass pea is a plant that is tolerant to heavy metals and can be referred to the accumulator plants. The results provide valuable information about the chemical and nutritional composition of the seeds of the grass pea grown on contaminated soils in Bulgaria. 
The high content of macro and microelements and the low concentrations of toxic elements in the grass pea grown in contaminated soil make it possible to use the seeds of the grass pea as animal feed.Keywords: Lathyrus sativus L, macroelements, microelements, quality
Procedia PDF Downloads 146940 Cytotoxic Activity against MCF-7 Breast Cancer Cells and Antioxidant Property of Aqueous Tempe Extracts from Extended Fermentation
Authors: Zatil Athaillah, Anastasia Devi, Dian Muzdalifah, Wirasuwasti Nugrahani, Linar Udin
Abstract:
During tempe fermentation, some chemical changes occurred and they contributed to sensory, appearance, and health benefits of soybeans. Many studies on health properties of tempe have specialized on their isoflavones. In this study, other components of tempe, particularly water soluble chemicals, was investigated for their biofunctionality. The study was focused on the ability to suppress MCF-7 breast cancer cell growth and antioxidant activity, as expressed by DPPH radical scavenging activity, total phenols and total flavonoids, of the water extracts. Fermentation time of tempe was extended up to 120 hr to increase the possibility to find the functional components. Extraction yield and soluble nitrogen content were also quantified as accompanying data. Our findings suggested that yield of water extraction of tempe increased as fermentation was extended up to 120 hr, except for a slight decrease at 72 hr. Water extracts of tempe showed inhibition of MCF-7 breast cancer cell growth, as shown by lower IC50 values when compared to control (unfermented soybeans). Among the varied fermentation timescales, 60-hr period showed the highest activity (IC50 of 8.7 ± 4.95 µg/ml). The anticancer activity of extracts obtained from different fermentation time was positively correlated with total soluble nitrogens, but less relevant with antioxidant data. During 48-72 hr fermentation, at which cancer suppression activity was significant, the antioxidant properties from the three assays were not higher than control. These findings indicated that water extracts of tempe from extended fermentation could inhibit breast cancer cell growth but further study to determine the mechanism and compounds that play important role in the activity should be conducted.Keywords: tempe, anticancer, antioxidant, phenolic compounds
Procedia PDF Downloads 245939 Topology Optimization of Heat and Mass Transfer for Two Fluids under Steady State Laminar Regime: Application on Heat Exchangers
Authors: Rony Tawk, Boutros Ghannam, Maroun Nemer
Abstract:
Topology optimization technique presents a potential tool for the design and optimization of structures involved in mass and heat transfer. The method starts with an initial intermediate domain and should be able to progressively distribute the solid and the two fluids exchanging heat. The multi-objective function of the problem takes into account minimization of total pressure loss and maximization of heat transfer between solid and fluid subdomains. Existing methods account for the presence of only one fluid, while the actual work extends optimization distribution of solid and two different fluids. This requires to separate the channels of both fluids and to ensure a minimum solid thickness between them. This is done by adding a third objective function to the multi-objective optimization problem. This article uses density approach where each cell holds two local design parameters ranging from 0 to 1, where the combination of their extremums defines the presence of solid, cold fluid or hot fluid in this cell. Finite volume method is used for direct solver coupled with a discrete adjoint approach for sensitivity analysis and method of moving asymptotes for numerical optimization. Several examples are presented to show the ability of the method to find a trade-off between minimization of power dissipation and maximization of heat transfer while ensuring the separation and continuity of the channel of each fluid without crossing or mixing the fluids. The main conclusion is the possibility to find an optimal bi-fluid domain using topology optimization, defining a fluid to fluid heat exchanger device.Keywords: topology optimization, density approach, bi-fluid domain, laminar steady state regime, fluid-to-fluid heat exchanger
Procedia PDF Downloads 400938 The Potential in the Use of Building Information Modelling and Life-Cycle Assessment for Retrofitting Buildings: A Study Based on Interviews with Experts in Both Fields
Authors: Alex Gonzalez Caceres, Jan Karlshøj, Tor Arvid Vik
Abstract:
Life cycle of residential buildings are expected to be several decades, 40% of European residential buildings have inefficient energy conservation measure. The existing building represents 20-40% of the energy use and the CO₂ emission. Since net zero energy buildings are a short-term goal, (should be achieved by EU countries after 2020), is necessary to plan the next logical step, which is to prepare the existing outdated stack of building to retrofit them into an energy efficiency buildings. In order to accomplish this, two specialize and widespread tool can be used Building Information Modelling (BIM) and life-cycle assessment (LCA). BIM and LCA are tools used by a variety of disciplines; both are able to represent and analyze the constructions in different stages. The combination of these technologies could improve greatly the retrofitting techniques. The incorporation of the carbon footprint, introducing a single database source for different material analysis. To this is added the possibility of considering different analysis approaches such as costs and energy saving. Is expected with these measures, enrich the decision-making. The methodology is based on two main activities; the first task involved the collection of data this is accomplished by literature review and interview with experts in the retrofitting field and BIM technologies. The results of this task are presented as an evaluation checklist of BIM ability to manage data and improve decision-making in retrofitting projects. The last activity involves an evaluation using the results of the previous tasks, to check how far the IFC format can support the requirements by each specialist, and its uses by third party software. 
The result indicates that BIM/LCA have a great potential to improve the retrofitting process in existing buildings, but some modification must be done in order to meet the requirements of the specialists for both, retrofitting and LCA evaluators.Keywords: retrofitting, BIM, LCA, energy efficiency
Procedia PDF Downloads 222937 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia
Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah
Abstract:
Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of natural origin while secondary porosity is subject to chemical reactivity through diagenetic processes. To understand the integrated part of hydrocarbon exploration, it is necessary to understand the carbonate pore system. However, the current porosity classification scheme is limited to adequately predict the petrophysical properties of different reservoirs having various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity and permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin) mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved using a petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin section, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that the mouldic porosity (macroporosity) is the dominant porosity type present, whereas the microporosity seems to correspond to a sum of 40 to 50% of the total porosity. It has been proven that these Miocene carbonates contain a significant amount of microporosity, which significantly complicates the estimation and production of hydrocarbons. Neglecting its impact can increase uncertainty about estimating hydrocarbon reserves. 
Due to the diversity of geological parameters, the application of existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, the classification can be improved by including the pore types and pore structures where they can be divided into macro- and microporosity. Such studies of microporosity identification/classification represent now a major concern in limestone reservoirs around the world.Keywords: overview of porosity classification, reservoir characterization, microporosity, carbonate reservoir
Procedia PDF Downloads 154936 Fused Deposition Modelling as the Manufacturing Method of Fully Bio-Based Water Purification Filters
Authors: Natalia Fijol, Aji P. Mathew
Abstract:
We present the processing and characterisation of three-dimensional (3D) monolith filters based on polylactic acid (PLA) reinforced with various nature-derived nanospecies such as hydroxyapatite, modified cellulose fibers and chitin fibers. The nanospecies of choice were dispersed in PLA through Thermally Induced Phase Separation (TIPS) method. The biocomposites were developed via solvent-assisted blending and the obtained pellets were further single-screw extruded into 3D-printing filaments and processed into various geometries using Fused Deposition Modelling (FDM) technique. The printed prototypes included cubic, cylindrical and hour-glass shapes with diverse patterns of printing infill as well as varying pore structure including uniform and multiple level gradual pore structure. The pores and channel structure as well as overall shape of the prototypes were designed in attempt to optimize the flux and maximize the adsorption-active time. FDM is a cost and energy-efficient method, which does not require expensive tools and elaborated post-processing maintenance. Therefore, FDM offers the possibility to produce customized, highly functional water purification filters with tuned porous structures suitable for removal of wide range of common water pollutants. Moreover, as 3D printing becomes more and more available worldwide, it allows producing portable filters at the place and time where they are most needed. The study demonstrates preparation route for the PLA-based, fully biobased composite and their processing via FDM technique into water purification filters, addressing water treatment challenges on an industrial scale.Keywords: fused deposition modelling, water treatment, biomaterials, 3D printing, nanocellulose, nanochitin, polylactic acid
Procedia PDF Downloads 115935 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing
Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti
Abstract:
Next-generation sequencing, especially whole genome shotgun sequencing, is becoming a common approach to gain insight into the microbiomes in a culture-independent way, even in clinical practice. It does not only give us information about the species composition of an environmental sample but opens the possibility to detect antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting the microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomics samples with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast, short read aligner programs (i.e., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle; each read is assigned to a taxon, covering the most significantly hit taxa. This approach helps in balancing between sensitivity and running time. The program was tested both on experimental and synthetic data. The results implicate that our method performs as good as the state-of-the-art BLAST-based ones, furthermore, in some cases, it even proves to be better, while running two orders magnitude faster. It is sensitive and capable of identifying taxa being present only in small abundance. Moreover, it needs two orders of magnitude less reads to complete the identification than MetaPhLan2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104). The majority of the reads (96.50%) was classified as Bacillus anthracis, a small portion, 1.2%, was classified as other species from the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis
Procedia PDF Downloads 137934 The Impact of Supply Chain Strategy and Integration on Supply Chain Performance: Supply Chain Vulnerability as a Moderator
Authors: Yi-Chun Kuo, Jo-Chieh Lin
Abstract:
The objective of a supply chain strategy is to reduce waste and increase efficiency to attain cost benefits, and to guarantee supply chain flexibility when facing the ever-changing market environment in order to meet customer requirements. Strategy implementation aims to fulfill common goals and attain benefits by integrating upstream and downstream enterprises, sharing information, conducting common planning, and taking part in decision making, so as to enhance the overall performance of the supply chain. With the rise of outsourcing and globalization, the increasing dependence on suppliers and customers and the rapid development of information technology, the complexity and uncertainty of the supply chain have intensified, and supply chain vulnerability has surged, resulting in adverse effects on supply chain performance. Thus, this study aims to use supply chain vulnerability as a moderating variable and apply structural equation modeling (SEM) to determine the relationships among supply chain strategy, supply chain integration, and supply chain performance, as well as the moderating effect of supply chain vulnerability on supply chain performance. The data investigation of this study was questionnaires which were collected from the management level of enterprises in Taiwan and China, 149 questionnaires were received. The result of confirmatory factor analysis shows that the path coefficients of supply chain strategy on supply chain integration and supply chain performance are positive (0.497, t= 4.914; 0.748, t= 5.919), having a significantly positive effect. Supply chain integration is also significantly positively correlated to supply chain performance (0.192, t = 2.273). The moderating effects of supply chain vulnerability on supply chain strategy and supply chain integration to supply chain performance are significant (7.407; 4.687). 
In Taiwan, 97.73% of enterprises are small- and medium-sized enterprises (SMEs) focusing on receiving original equipment manufacturer (OEM) and original design manufacturer (ODM) orders. In order to meet the needs of customers and to respond to market changes, these enterprises especially focus on supply chain flexibility and their integration with the upstream and downstream enterprises. According to the observation of this research, the effect of supply chain vulnerability on supply chain performance is significant, and so enterprises need to attach great importance to the management of supply chain risk and conduct risk analysis on their suppliers in order to formulate response strategies when facing emergency situations. At the same time, risk management is incorporated into the supply chain so as to reduce the effect of supply chain vulnerability on the overall supply chain performance.Keywords: supply chain integration, supply chain performance, supply chain vulnerability, structural equation modeling
Procedia PDF Downloads 318933 Student Diversity in Higher Education: The Impact of Digital Elements on Student Learning Behavior and Subject-Specific Preferences
Authors: Pia Kastl
Abstract:
By combining face-to-face sessions with digital selflearning units, the learning process can be enhanced and learning success improved. Potentials of blended learning are the flexibility and possibility to get in touch with lecturers and fellow students face-toface. It also offers the opportunity to individualize and self-regulate the learning process. Aim of this article is to analyse how different learning environments affect students’ learning behavior and how digital tools can be used effectively. The analysis also considers the extent to which the field of study affects the students’ preferences. Semi-structured interviews were conducted with students from different disciplines at two German universities (N= 60). The questions addressed satisfaction and perception of online, faceto-face and blended learning courses. In addition, suggestions for improving learning experience and the use of digital tools in the different learning environments were surveyed. The results show that being present on campus has a positive impact on learning success and online teaching facilitates flexible learning. Blended learning can combine the respective benefits, although one challenge is to keep the time investment within reasonable limits. The use of digital tools differs depending on the subject. Medical students are willing to use digital tools to improve their learning success and voluntarily invest more time. Students of the humanities and social sciences, on the other hand, are reluctant to invest additional time. They do not see extra study material as an additional benefit their learning success. This study illustrates how these heterogenous demands on learning environments can be met. In addition, potential for improvement will be identified in order to foster both learning process and learning success. 
Learning environments can be meaningfully enriched with digital elements to address student diversity in higher education.Keywords: blended learning, higher education, diversity, learning styles
Procedia PDF Downloads 70932 Space Weather and Earthquakes: A Case Study of Solar Flare X9.3 Class on September 6, 2017
Authors: Viktor Novikov, Yuri Ruzhin
Abstract:
The studies completed to-date on a relation of the Earth's seismicity and solar processes provide the fuzzy and contradictory results. For verification of an idea that solar flares can trigger earthquakes, we have analyzed a case of a powerful surge of solar flash activity early in September 2017 during approaching the minimum of 24th solar cycle was accompanied by significant disturbances of space weather. On September 6, 2017, a group of sunspots AR2673 generated a large solar flare of X9.3 class, the strongest flare over the past twelve years. Its explosion produced a coronal mass ejection partially directed towards the Earth. We carried out a statistical analysis of the catalogs of earthquakes USGS and EMSC for determination of the effect of solar flares on global seismic activity. New evidence of earthquake triggering due to the Sun-Earth interaction has been demonstrated by simple comparison of behavior of Earth's seismicity before and after the strong solar flare. The global number of earthquakes with magnitude of 2.5 to 5.5 within 11 days after the solar flare has increased by 30 to 100%. A possibility of electric/electromagnetic triggering of earthquake due to space weather disturbances is supported by results of field and laboratory studies, where the earthquakes (both natural and laboratory) were initiated by injection of electrical current into the Earth crust. For the specific case of artificial electric earthquake triggering the current density at a depth of earthquake, sources are comparable with estimations of a density of telluric currents induced by variation of space weather conditions due to solar flares. Acknowledgment: The work was supported by RFBR grant No. 18-05-00255.Keywords: solar flare, earthquake activity, earthquake triggering, solar-terrestrial relations
Procedia PDF Downloads 144931 Study of a Cross-Flow Membrane to a Kidney Encapsulation Engineering Structures for Immunosuppression Filter
Authors: Sihyun Chae, Ryoto Arai, Waldo Concepcion, Paula Popescu
Abstract:
The kidneys perform an important role in the human hormones that regulate the blood pressure, produce an active form of vitamin D and control the production of red blood cells. Kidney disease can cause health problems, such as heart disease. Also, increase the chance of having a stroke or heart attack. There are mainly to types of treatments for kidney disease, dialysis, and kidney transplant. For a better quality of life, the kidney transplant is desirable. However, kidney transplant can cause antibody reaction and patients’ body would be attacked by immune system of their own. For solving that issue, patients with transplanted kidney always take immunosuppressive drugs which can hurt kidney as side effects. Patients willing to do a kidney transplant have a waiting time of 3.6 years in average searching to find an appropriate kidney, considering there are almost 96,380 patients waiting for kidney transplant. There is a promising method to solve these issues: bioartificial kidney. Our membrane is specially designed with unique perforations capable to filter the blood cells separating the white blood cells from red blood cells. White blood cells will not pass through the encapsulated kidney preventing the immune system to attack the new organ and eliminating the need of a matching donor. It is possible to construct life-time long encapsulation without needing pumps or a power supply on the cell’s separation method preventing futures surgeries due the Cross-Channel Flow inside the device. This technology allows the possibility to use an animal kidney, prevent cancer cells to spread through the body, arm and leg transplants in the future. This project aims to improve the quality of life of patients with kidney disease.Keywords: kidney encapsulation, immunosuppression filter, leukocyte filter, leukocyte
Procedia PDF Downloads 201930 An Educational Application of Online Games for Learning Difficulties
Authors: Maria Margoudi, Zacharoula Smyraniou
Abstract:
The current paper presents the results of a conducted case study, which was part of the author’s master thesis. During the past few years the number of children diagnosed with Learning Difficulties has drastically augmented and especially the cases of ADHD (Attention Deficit Hyperactivity Disorder). One of the core characteristics of ADHD is a deficit in working memory functions. The review of the literature indicates a plethora of educational software that aim at training and enhancing the working memory. Nevertheless, in the current paper, the possibility of using for the same purpose free, online games will be explored. Another issue of interest is the potential effect of the working memory training to the core symptoms of ADHD. In order to explore the abovementioned research questions, three digital tests are employed, all of which are developed on the E-slate platform by the author, in order to check the level of ADHD’s symptoms and to be used as diagnostic tools, both in the beginning and in the end of the case study. The tools used during the main intervention of the research are free online games for the training of working memory. The research and the data analysis focus on the following axes: a) the presence and the possible change in two of the core symptoms of ADHD, attention and impulsivity and b) a possible change in the general cognitive abilities of the individual. The case study was conducted with the participation of a thirteen year-old, female student, diagnosed with ADHD, during after-school hours. The results of the study indicate positive changes both in the levels of attention and impulsivity. Therefore we conclude that the training of working memory through the use of free, online games has a positive impact on the characteristics of ADHD. Finally, concerning the second research question, the change in general cognitive abilities, no significant changes were noted.Keywords: ADHD, attention, impulsivity, online games
Procedia PDF Downloads 358929 Characterization of Tailings From Traditional Panning of Alluvial Gold Ore (A Case Study of Ilesa - Southwestern Nigeria Goldfield Tailings Dumps)
Authors: Olaniyi Awe, Adelana R. Adetunji, Abraham Adeleke
Abstract:
Field observation revealed a lot of artisanal gold mining activities in Ilesa gold belt of southwestern Nigeria. The possibility of alluvial and lode gold deposits in commercial quantities around this location is very high, as there are many resident artisanal gold miners who have been mining and trading alluvial gold ore for decades and to date in the area. Their major process of solid gold recovery from its ore is by gravity concentration using the convectional panning method. This method is simple to learn and fast to recover gold from its alluvial ore, but its effectiveness is based on rules of thumb and the artisanal miners' experience in handling gold ore panning tool while processing the ore. Research samples from five alluvial gold ore tailings dumps were collected and studied. Samples were subjected to particle size analysis and mineralogical and elemental characterization using X-Ray Diffraction (XRD) and Particle-Induced X-ray Emission (PIXE) methods, respectively. The results showed that the tailings were of major quartz in association with albite, plagioclase, mica, gold, calcite and sulphide minerals. The elemental composition analysis revealed a 15ppm of gold concentration in particle size fraction of -90 microns in one of the tailings dumps investigated. These results are significant. It is recommended that heaps of panning tailings should be further reprocessed using other gold recovery methods such as shaking tables, flotation and controlled cyanidation that can efficiently recover fine gold particles that were previously lost into the gold panning tailings. The tailings site should also be well controlled and monitored so that these heavy minerals do not find their way into surrounding water streams and rivers, thereby causing health hazards.Keywords: gold ore, panning, PIXE, tailings, XRD
Procedia PDF Downloads 90
928 Energy Storage Modelling for Power System Reliability and Environmental Compliance
Authors: Rajesh Karki, Safal Bhattarai, Saket Adhikari
Abstract:
Reliable and economic operation of power systems is becoming extremely challenging with large-scale integration of renewable energy sources, owing to the intermittency and uncertainty associated with renewable power generation. It is therefore important to make a quantitative risk assessment and to explore potential resources for mitigating such risks. This paper presents probabilistic models for different energy storage systems (ESS), such as the flywheel energy storage system (FESS) and compressed air energy storage (CAES), incorporating the specific charge/discharge performance and failure characteristics needed for probabilistic risk assessment in power system operation and planning. The methodology used in the FESS modelling offers the flexibility to accommodate different configurations of plant topology. Since CAES is perceived to have high potential for grid-scale application, a hybrid approach is proposed that embeds a Monte-Carlo simulation (MCS) method within an analytical technique to develop a suitable reliability model of the CAES. The proposed ESS models are applied to a test system to investigate the economic and reliability benefits of the energy storage technologies in system operation and planning, and to assess their contributions to facilitating wind integration under different operating scenarios and system configurations. A comparative study considering various storage system topologies is also presented. Selected studies on the test system illustrate the impacts of the failure rates of critical ESS components on the expected state of charge (SOC) and on the performance of the different types of ESS during operation.
The conclusions drawn from the study results provide valuable information to help policymakers, system planners, and operators arrive at effective and efficient policies, investment decisions, and operating strategies for the planning and operation of power systems with large penetrations of renewable energy sources.
Keywords: flywheel energy storage, compressed air energy storage, power system reliability, renewable energy, system planning, system operation
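The hybrid idea of cross-checking a Monte-Carlo simulation against an analytical result can be illustrated with the simplest reliability building block: a two-state (up/down) component with exponential up and down times, whose steady-state availability is analytically mu/(lambda+mu). This is only a generic sketch, not the authors' CAES model; the failure and repair rates below are assumed values:

```python
import random

def mc_availability(fail_rate, repair_rate, horizon_h, n_runs, seed=1):
    """Monte-Carlo estimate of the steady-state availability of a
    two-state (up/down) component with exponential dwell times."""
    rng = random.Random(seed)
    up_time = 0.0
    for _ in range(n_runs):
        t, up = 0.0, True
        while t < horizon_h:
            rate = fail_rate if up else repair_rate
            dwell = min(rng.expovariate(rate), horizon_h - t)
            if up:
                up_time += dwell
            t += dwell
            up = not up
    return up_time / (n_runs * horizon_h)

lam, mu = 0.001, 0.05          # assumed failure / repair rates (per hour)
analytical = mu / (lam + mu)   # steady-state availability, ~0.9804 here
estimate = mc_availability(lam, mu, horizon_h=10_000, n_runs=200)
print(round(analytical, 4), round(estimate, 4))
```

In a grid-scale ESS model this two-state block would be one of many components (compressor, turbine, cavern, converter), and the MCS layer captures the charge/discharge dynamics the closed-form result cannot.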
Procedia PDF Downloads 133
927 Design and Optimization of an Electromagnetic Vibration Energy Converter
Authors: Slim Naifar, Sonia Bradai, Christian Viehweger, Olfa Kanoun
Abstract:
Vibration provides an interesting source of energy since it is available in many indoor and outdoor applications. Nevertheless, for an efficient design of the harvesting system, vibration converters have to satisfy several criteria in terms of robustness, compactness and energy outcome. In this work, an electromagnetic converter based on a mechanical spring principle is proposed. The designed harvester consists of a coil oscillating around ten ring magnets on a mechanical spring. The proposed design overcomes one of the main limitations of moving-coil converters by avoiding contact between the coil wires and the mechanical spring, which leads to better robustness of the converter. In addition, the whole system can be implemented in the cavity of a screw. Different parameters of the harvester were investigated by the finite element method, including the magnet size, the coil winding number and diameter, and the excitation frequency and amplitude. A prototype was realized and tested. Experiments were performed at accelerations from 0.5 g to 1 g. The experimental setup consists of an electrodynamic shaker as an external artificial vibration source, controlled by a laser sensor that measures the applied displacement and excitation frequency. Together with the laser sensor, a controller unit and an amplifier, the shaker is operated in a closed loop, which allows the vibration amplitude to be controlled. The resonance frequency of the proposed design is around 24 Hz. Results indicate that, at resonance, the harvester can generate maximum open-circuit peak-to-peak voltages of 612 mV and 1150 mV for 0.5 g and 1 g acceleration, corresponding to 1.34 mW and 4.75 mW output power, respectively. The frequency can also be tuned to other values by adding mass to the moving part of the converter or by changing the mechanical spring stiffness.
Keywords: energy harvesting, electromagnetic principle, vibration converter, moving coil
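The tuning mechanism mentioned at the end of the abstract follows directly from the spring-mass relation f = (1/2pi)*sqrt(k/m): adding mass lowers the resonance, stiffening the spring raises it. A minimal sketch, with stiffness and moving-coil mass assumed (they are not given in the abstract) and chosen only so the baseline lands near the reported 24 Hz:

```python
import math

def resonance_hz(stiffness_n_per_m, moving_mass_kg):
    """Natural frequency of a spring-mass oscillator: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness_n_per_m / moving_mass_kg) / (2 * math.pi)

k = 114.0           # assumed spring stiffness (N/m)
m_coil = 0.005      # assumed moving-coil mass (kg)
base = resonance_hz(k, m_coil)            # ~24 Hz for these assumed values
tuned = resonance_hz(k, m_coil + 0.002)   # adding 2 g lowers the resonance
print(round(base, 1), round(tuned, 1))
```

The same relation shows why the two tuning knobs are interchangeable: halving the stiffness has the same effect on f as doubling the moving mass.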
Procedia PDF Downloads 298