Search results for: extraordinary event
101 The Duty of Sea Carrier to Transship the Cargo in Case of Vessel Breakdown
Authors: Mojtaba Eshraghi Arani
Abstract:
Upon concluding a contract for the carriage of cargo with the shipper (through a bill of lading or charterparty), the carrier must transport the cargo from the loading port to the port of discharge and deliver it to the consignee. Unless otherwise agreed in the contract, the carrier must avoid any deviation, transfer of cargo to another vessel, or unreasonable stoppage of carriage in transit. However, the vessel might break down in transit for any reason and become unable to continue its voyage to the port of discharge. This is a frequent incident in the carriage of goods by sea and leads to important disputes between the carrier/owner and the shipper/charterer (hereinafter the “cargo interests”). It is a generally accepted rule that in such an event the carrier/owner must repair the vessel, after which it will continue its voyage to the destination port. A dispute arises when the repair of the vessel cannot be completed within a short or reasonable term. The contract parties then have two options. First, the carrier/owner is entitled to repair the vessel, with the cargo kept on board or discharged in the port of refuge, and the cargo interests must wait until the breakdown is rectified, however long that takes. Second, the carrier/owner is responsible for chartering another vessel and transferring the entirety of the cargo to the substitute vessel. The main question thus revolves around the duty of the carrier/owner to transfer the cargo to another vessel. Such an operation, called “trans-shipment” or “transhipment” (in the oil industry it is usually called “ship-to-ship” or “STS”), needs to be done carefully and with due diligence. The transshipment operation differs between cargoes, as each cargo requires its own suitable equipment for transfer to another vessel, so the operation is often costly. Moreover, there is a considerable risk of collision between the two vessels, in particular with bulk carriers.
Bulk cargo is also exposed to shortage and partial loss in the process of transshipment, especially during bad weather. For tankers carrying oil and petrochemical products, transshipment is quite likely to be followed by sea pollution. On the grounds of these consequences, owners are afraid of being held responsible for such operations and are reluctant to perform them in the relevant disputes. Their main argument is that no regulation has placed such a duty on their shoulders, so any such operation must be done under the auspices of the cargo interests and all costs must be reimbursed by them. Unfortunately, not only the international conventions, including the Hague Rules, Hague-Visby Rules, Hamburg Rules and Rotterdam Rules, but also most domestic laws are silent in this regard. The doctrine has yet to analyse the issue, and no legal research was found on it. A qualitative method with interpretive analysis of the collected data has been used in this paper; the data source is the analysis of regulations and cases. It is argued in this article that the paramount rule in maritime law is “the accomplishment of the voyage” by the carrier/owner, in view of which, if the voyage can only be finished by transshipment, the carrier/owner is responsible for carrying out this operation. The duty of the carrier/owner to apply “due diligence” strengthens this reasoning. Any and all costs and expenses will also be on the account of the owner/carrier, unless the incident is attributable to a cause arising from the cargo interests’ negligence.
Keywords: cargo, STS, transshipment, vessel, voyage
Procedia PDF Downloads 119
100 Emerging Issues for Global Impact of Foreign Institutional Investors (FII) on Indian Economy
Authors: Kamlesh Shashikant Dave
Abstract:
The global financial crisis is rooted in the sub-prime crisis in the U.S.A. During the boom years, mortgage brokers, attracted by big commissions, encouraged buyers with poor credit to accept housing mortgages with little or no down payment and without credit checks. A combination of low interest rates and a large inflow of foreign funds during the boom years helped the banks create easy credit conditions for many years. Banks lent money on the assumption that housing prices would continue to rise. The real estate bubble also encouraged the demand for houses as financial assets. Banks and financial institutions later repackaged these debts with other high-risk debts and sold them to worldwide investors, creating financial instruments called collateralized debt obligations (CDOs). With the rise in interest rates, mortgage payments rose and defaults among the subprime category of borrowers increased accordingly. Through the securitization of mortgage payments, a recession developed in the housing sector and was consequently transmitted to the entire US economy and the rest of the world. The financial credit crisis has moved the US and the global economy into recession. The Indian economy has also been affected by the spillover effects of the global financial crisis. A great saving habit among people, strong fundamentals, and a strong, conservative regulatory regime have saved the Indian economy from going out of gear, though significant parts of the economy have slowed down. Industrial activity, particularly in the manufacturing and infrastructure sectors, decelerated. The service sector, too, slowed in the construction, transport, trade, communication, and hotels and restaurants sub-sectors. The financial crisis has had some adverse impact on the IT sector. Exports declined in absolute terms in October. Higher input costs and dampened demand have dented corporate margins, while the uncertainty surrounding the crisis has affected business confidence.
To summarize, reckless subprime lending, the loose monetary policy of the US, the expansion of financial derivatives beyond acceptable norms, and the greed of Wall Street have led to this exceptional global financial and economic crisis. Thus, the global credit crisis of 2008 highlights the need to redesign both the global and domestic financial regulatory systems, not only to properly address systemic risk but also to support their proper functioning (i.e., financial stability). Such a design requires: 1) well-managed financial institutions with effective corporate governance and risk management systems; 2) disclosure requirements sufficient to support market discipline; 3) proper mechanisms for resolving problem institutions; and 4) mechanisms to protect financial services consumers in the event of financial institution failure.
Keywords: FIIs, BSE, sensex, global impact
Procedia PDF Downloads 441
99 Telogen Effluvium: A Modern Hair Loss Concern and the Interventional Strategies
Authors: Chettyparambil Lalchand Thejalakshmi, Sonal Sabu Edattukaran
Abstract:
Hair loss is one of the main issues that contemporary society is dealing with. It can be attributed to a wide range of factors, ranging from one's genetic composition to the anxiety we experience on a daily basis. Telogen effluvium (TE) is a condition that causes temporary hair loss after a stressor shocks the body and pushes the hair follicles into a temporary resting phase, leading to shedding. Most frequently, women are the ones who raise these concerns. Extreme illness or trauma, an emotional or important life event, rapid weight loss and crash dieting, a severe scalp skin problem, a new medication, or ceasing hormone therapy are examples of potential causes. Men frequently do not notice hair thinning over time, but shedding in women with long hair is easily noticed, which can occasionally result in bias, because women tend to be more concerned with aesthetics and the beauty standards of society and frequently present with these concerns. The woman, who formerly possessed a full head of hair, is worried about the hair loss from her scalp. Several cases of hair loss are reported every day, and telogen effluvium is said to be the most prevalent of them all, without any hereditary risk factors. While the patient loses hair volume, baldness is not the result of this condition. The exponentially growing dermatology and aesthetic medicine field has found this problem to be the most common and also the easiest to treat, since these patients can regrow their hair, unlike those with scarring alopecia, in which the follicle itself is damaged and non-viable. Telogen effluvium comes in two different forms: acute and chronic. Acute TE occurs in all age groups, with hair loss lasting less than three months, while chronic TE is more common in those between the ages of 30 and 60, with hair loss lasting more than six months. Both kinds are prevalent across all age groups, regardless of the predominance.
It takes between three and six months for the lost hair to come back, although this condition is readily reversed by eliminating the stressors. After shedding their hair, patients frequently describe noticeable fringes on their forehead. The current medical treatments for this condition include topical corticosteroids, systemic corticosteroids, minoxidil, finasteride, and CNDPA (caffeine, niacinamide, panthenol, dimethicone, and an acrylate polymer). Individual terminal hair growth was increased by 10% as a result of the novel CNDPA intervention. Botulinum toxin A, scalp micro-needling, platelet-rich plasma (PRP) therapy, and sessions of multivitamin mesotherapy injections are some recently refined techniques with partially or completely reversible hair loss. Also, supplements such as Nutrafol and biotin have been shown to produce effective outcomes. There is virtually no evidence to support the claim that applying sulfur-rich ingredients to the scalp, such as onion juice, can help TE patients' hair regenerate.
Keywords: dermatology, telogen effluvium, hair loss, modern hair loss treatments
Procedia PDF Downloads 90
98 Monitoring Future Climate Changes Pattern over Major Cities in Ghana Using Coupled Model Intercomparison Project Phase 5, Support Vector Machine, and Random Forest Modeling
Authors: Stephen Dankwa, Zheng Wenfeng, Xiaolu Li
Abstract:
Climate change has recently been gaining the attention of many countries across the world. Climate change, also known as global warming and referring to the increase in average surface temperature, has been a concern of the Environmental Protection Agency of Ghana. Recently, Ghana has become vulnerable to the effects of climate change as a result of the dependence of the majority of the population on agriculture. The clearing of trees to grow crops and the burning of charcoal in the country have contributed to the present rise in temperature by releasing carbon dioxide and other greenhouse gases into the air. Recently, petroleum stations across the cities have caught fire, and such climate-related events have left Ghana poorly positioned to withstand them. The significance of this research paper is therefore to project what the rise in average surface temperature will look like at the end of the mid-21st century if agriculture and deforestation are allowed to continue for some time in the country. This study uses the Coupled Model Intercomparison Project Phase 5 (CMIP5) RCP 8.5 model output data to monitor future climate change from 2041-2050, at the end of the mid-21st century, over the ten (10) major cities (Accra, Bolgatanga, Cape Coast, Koforidua, Kumasi, Sekondi-Takoradi, Sunyani, Ho, Tamale, Wa) in Ghana. In the models, Support Vector Machine and Random Forest, with the cities as a function of heat wave metrics (minimum temperature, maximum temperature, mean temperature, heat wave duration and number of heat waves), provided more than 50% accuracy in predicting and monitoring the pattern of surface air temperature. The findings were that the near-surface air temperature will rise by 1°C-2°C (degrees Celsius) over the coastal cities (Accra, Cape Coast, Sekondi-Takoradi).
The temperature over Kumasi, Ho and Sunyani will rise by 1°C by the end of 2050. In Koforidua, it will rise by between 1°C and 2°C. The temperature will rise in Bolgatanga, Tamale and Wa by 0.5°C by 2050. This indicates how the coastal and southern parts of the country are becoming hotter relative to the north, even though the northern part remains the hottest. During heat waves from 2041-2050, Bolgatanga, Tamale, and Wa will experience the highest mean daily air temperatures, between 34°C and 36°C. Kumasi, Koforidua, and Sunyani will experience about 34°C. The coastal cities (Accra, Cape Coast, Sekondi-Takoradi) will experience below 32°C. Even though the coastal cities will experience the lowest mean temperature, they will have the highest number of heat waves, about 62. The majority of the heat waves will last between 2 and 10 days, with a maximum of 30 days. The surface temperature will continue to rise through the end of the mid-21st century (2041-2050) over the major cities in Ghana, and this needs to be brought to the attention of the Environmental Protection Agency in Ghana in order to mitigate the problem.
Keywords: climate changes, CMIP5, Ghana, heat waves, random forest, SVM
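The Random Forest step described above can be illustrated with a minimal sketch: an ensemble of one-split decision stumps, each fit to a bootstrap sample of heat-wave metrics, voting on whether projected warming exceeds 1°C. The data, labels, and stump-based trees are illustrative assumptions, not the paper's CMIP5-derived inputs or its actual model.

```python
import random

# Illustrative stand-in for a Random Forest: an ensemble of one-split
# decision stumps fit to bootstrap samples of heat-wave metrics.
# Feature order per city: [min_temp, max_temp, mean_temp, heatwave_days, n_heatwaves]
# Label: 1 if projected warming exceeds 1 degree C, else 0. All numbers are
# synthetic and only loosely shaped like the coastal vs. northern contrast above.

def fit_stump(rows, labels):
    """Exhaustively pick the (feature, threshold, sign) split with the fewest errors."""
    best = None
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            for sign in (1, -1):
                preds = [1 if sign * (r[f] - t) > 0 else 0 for r in rows]
                err = sum(p != y for p, y in zip(preds, labels))
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda r: 1 if sign * (r[f] - t) > 0 else 0

def fit_forest(rows, labels, n_trees=25, seed=7):
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(rows)) for _ in rows]   # bootstrap resample
        stumps.append(fit_stump([rows[i] for i in idx], [labels[i] for i in idx]))
    return lambda r: int(sum(s(r) for s in stumps) > n_trees / 2)  # majority vote

X = [[24, 32, 28, 8, 60], [25, 31, 28, 9, 62], [24, 30, 27, 7, 58],    # coastal-like
     [28, 36, 32, 25, 20], [27, 35, 31, 28, 18], [28, 34, 31, 30, 22]]  # northern-like
y = [1, 1, 1, 0, 0, 0]

predict = fit_forest(X, y)
print(predict([24, 31, 27, 8, 61]))   # coastal-like profile
print(predict([28, 36, 32, 30, 19]))  # northern-like profile
```

A production analysis would use a library implementation such as scikit-learn's RandomForestClassifier on real CMIP5-derived features; the stump ensemble above only shows the bootstrap-and-vote mechanics.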
Procedia PDF Downloads 200
97 The Structural Alteration of DNA Native Structure of Staphylococcus aureus Bacteria by Designed Quinoxaline Small Molecules Results in Their Antibacterial Properties
Authors: Jeet Chakraborty, Sanjay Dutta
Abstract:
Antibiotic resistance in bacteria has proved to be a severe threat to mankind in recent times, and this underscores the urgency to design and develop potent antibacterial small molecules/compounds with nonconventional mechanisms rather than conventional ones. DNA carries the genetic signature of any organism, and bacteria maintain their genomic DNA inside the cell in a well-regulated compact form with the help of various nucleoid-associated proteins like HU, H-NS, etc. These proteins control various fundamental processes like gene expression, replication, etc., inside the cell. Alteration of the native DNA structure of bacteria can lead to severe consequences for cellular processes inside the bacterial cell that ultimately result in the death of the organism. The cellular responses initiated by small-molecule-induced changes in global DNA structure have not been well investigated. Echinomycin and triostin A are biologically active quinoxaline small molecules that typically consist of a quinoxaline chromophore attached to an octadepsipeptide ring. They bind to double-stranded DNA in a sequence-specific way and have high activity against a wide variety of bacteria, mainly Gram-positive ones. To date, few synthetic quinoxaline scaffolds have been synthesized displaying antibacterial potential against a broad range of pathogenic bacteria. QNOs (quinoxaline N-oxides) are known to target DNA and instigate reactive oxygen species (ROS) production in bacteria, thereby exhibiting antibacterial properties. The divergent role of quinoxaline small molecules in medicinal research qualifies them as potential candidates for the evaluation of antimicrobial properties.
A previous study from our lab gave new insights into a 6-nitroquinoxaline derivative, 1d, as a DNA intercalator that induces conformational changes in DNA upon binding. The binding event observed was dependent on the presence of a crucial benzyl substituent on the quinoxaline moiety and was associated with a large induced CD (ICD) signal appearing in a sigmoidal pattern upon the interaction of 1d with dsDNA. Induction of DNA superstructures by 1d at high drug:DNA ratios was observed, ultimately leading to DNA condensation. Eviction of in vitro-assembled nucleosomes upon treatment with a high dose of 1d was also observed. In this work, monoquinoxaline derivatives of 1d were synthesized by various modifications of the 1d scaffold. The set of synthesized 6-nitroquinoxaline derivatives, along with 1d, were all subjected to antibacterial evaluation across five different bacterial species. Among the compound set, 3a displayed potent antibacterial activity against Staphylococcus aureus. 3a was further subjected to various biophysical studies to check whether its DNA structural alteration potential was still intact. The biological response of S. aureus cells upon treatment with 3a was studied using various cell biology assays, which led to the conclusion that 3a can initiate DNA damage in S. aureus cells. Finally, the potential of 3a to disrupt preformed S. aureus and S. epidermidis biofilms was also studied.
Keywords: DNA structural change, antibacterial, intercalator, DNA superstructures, biofilms
Procedia PDF Downloads 169
96 A Smart Sensor Network Approach Using Affordable River Water Level Sensors
Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan
Abstract:
Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the ‘Internet of Things (IoT)’ has brought sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor network consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; the data transmission layer, where data and instructions are exchanged; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data are far too large for traditional applications to send, store or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine learning based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network.
For example, in the water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate, or to switch to sleep mode when conditions are quiet. In this paper, we describe the deployment of 11 affordable water level sensors in the Dodder catchment in Dublin, Ireland. The objective is to use this deployed river level sensor network as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Each key component of the smart sensor network is discussed, which we hope will inspire researchers working in the sensor research domain.
Keywords: smart sensing, internet of things, water level sensor, flooding
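As a rough illustration of the first two on-board methods named above (simple thresholding and a statistical model), the sketch below flags anomalous river level readings. The 1.5 m threshold, the 3-sigma rule, and the sample readings are assumptions for illustration, not the deployed system's parameters.

```python
from statistics import mean, stdev

# Two toy smart-sensing checks for a stream of river level readings (metres):
# a fixed flood threshold, and a statistical alarm on deviations from the
# recent mean. All values here are illustrative, not field parameters.

FLOOD_THRESHOLD = 1.5  # metres; simple thresholding

def simple_threshold(reading):
    return reading > FLOOD_THRESHOLD

def statistical_alarm(history, reading, k=3.0):
    """Flag readings more than k standard deviations above the recent mean."""
    if len(history) < 2:
        return False
    return reading > mean(history) + k * stdev(history)

stream = [0.42, 0.44, 0.41, 0.43, 0.45, 0.44, 1.10]  # sudden rise at the end
history = []
for level in stream:
    if simple_threshold(level) or statistical_alarm(history, level):
        print(f"alarm at {level} m")  # e.g. raise the sampling rate
    history.append(level)
```

On a real node the alarm would trigger the behaviours described above, such as increasing the sampling rate or waking the node from sleep mode; the MoPBAS method mentioned in the text would replace `statistical_alarm` with a learned background model.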
Procedia PDF Downloads 381
95 Safety Profile of Human Papillomavirus Vaccines: A Post-Licensure Analysis of the Vaccine Adverse Events Reporting System, 2007-2017
Authors: Giulia Bonaldo, Alberto Vaccheri, Ottavio D'Annibali, Domenico Motola
Abstract:
The human papillomavirus (HPV) has been shown to be the cause of different types of carcinomas, most notably cervical intraepithelial neoplasia. From the early 1980s to today, thanks first to preventive screening campaigns (Pap tests) and then to the introduction of HPV vaccines on the market, the number of new cases of cervical cancer has decreased significantly. Three HPV vaccines are currently approved: Cervarix® (HPV2 - virus types 16 and 18), Gardasil® (HPV4 - 6, 11, 16, 18) and Gardasil 9® (HPV9 - 6, 11, 16, 18, 31, 33, 45, 52, 58), all of which protect against the two high-risk types (16, 18) that are mainly involved in cervical cancers. Although the remarkable effectiveness of these vaccines has been demonstrated, in recent years there have been many complaints about their risk-benefit profile due to adverse events following immunization (AEFI). The purpose of this study is to support the ongoing discussion on the safety profile of HPV vaccines with real-life data deriving from spontaneous reports of suspected AEFIs collected in the Vaccine Adverse Event Reporting System (VAERS). VAERS is a freely available national vaccine safety surveillance database of AEFI, co-administered by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). We collected all reports between January 2007 and December 2017 related to HPV vaccines, with a brand name (HPV2, HPV4, HPV9) or without (HPVX). A disproportionality analysis using the Reporting Odds Ratio (ROR) with 95% confidence interval and p ≤ 0.05 was performed. Over the 10-year period, 54889 reports of AEFI related to HPV vaccines, corresponding to 224863 vaccine-event pairs, were retrieved from VAERS. The highest number of reports was related to Gardasil (n = 42244), followed by Gardasil 9 (7212) and Cervarix (3904). The brand name of the HPV vaccine was not reported in 1529 cases.
The two events most frequently reported and statistically significant for each vaccine were: for Gardasil, dizziness (n = 5053), ROR = 1.28 (95% CI 1.24-1.31), and syncope (4808), ROR = 1.21 (1.17-1.25); for Gardasil 9, injection site pain (305), ROR = 1.40 (1.25-1.57), and injection site erythema (297), ROR = 1.88 (1.67-2.10); and for Cervarix, headache (672), ROR = 1.14 (1.06-1.23), and loss of consciousness (528), ROR = 1.71 (1.57-1.87). In total, we collected 406 reports of death and 2461 cases of permanent disability over the ten-year period. Events consisting of incorrect vaccine storage or incorrect administration were not considered. The AEFI analysis showed that the most frequently reported events are non-serious and listed in the corresponding SmPCs. Beyond these, potential safety signals arose for less frequent and more severe AEFIs that deserve further investigation. This already happened with the referral of the European Medicines Agency (EMA) for the adverse events POTS (postural orthostatic tachycardia syndrome) and CRPS (complex regional pain syndrome) associated with anti-papillomavirus vaccines.
Keywords: adverse drug reactions, pharmacovigilance, safety, vaccines
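The disproportionality measure behind these figures can be sketched directly: the ROR for a vaccine-event pair is the cross-product ratio of the standard 2x2 report table, with a log-normal 95% confidence interval. Apart from the 5053 dizziness reports quoted above, the counts below are hypothetical stand-ins, not the study's actual table.

```python
from math import exp, log, sqrt

# Reporting Odds Ratio with its log-normal 95% CI, the disproportionality
# statistic used above. Table layout for one vaccine-event pair:
#   a: target event, target vaccine     b: other events, target vaccine
#   c: target event, other vaccines     d: other events, other vaccines
# The b, c, d counts below are illustrative, not the study's real counts.

def reporting_odds_ratio(a, b, c, d, z=1.96):
    ror = (a * d) / (b * c)                       # cross-product ratio
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # standard error of ln(ROR)
    return ror, exp(log(ror) - z * se), exp(log(ror) + z * se)

ror, lo, hi = reporting_odds_ratio(a=5053, b=49836, c=40000, d=500000)
print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # a signal if lo > 1
```

A pair counts as a disproportionality signal when the lower confidence bound exceeds 1, which is how thresholds like the ROR = 1.28 (1.24-1.31) for dizziness are read.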
Procedia PDF Downloads 163
94 Theoretical Study on the Visible-Light-Induced Radical Coupling Reactions Mediated by Charge Transfer Complex
Authors: Lishuang Ma
Abstract:
The charge transfer (CT) complex, also known as the electron donor-acceptor (EDA) complex, has received increasing attention in the synthetic chemistry community because a CT complex can absorb visible light through its intermolecular charge transfer excited states, enabling various catalyst-free photochemical transformations under mild visible-light conditions. However, a number of fundamental questions remain open, such as the origin of the visible light absorption, the photochemical and photophysical properties of the CT complex, and the detailed mechanism of the radical coupling pathways mediated by the CT complex. These are critical factors for the target-specific design and synthesis of new types of CT complexes. To this end, theoretical investigations were performed in our group to answer these questions based on multiconfigurational perturbation theory. The photo-induced fluoroalkylation reactions mediated by CT complexes, which are formed by the association of an acceptor, a perfluoroalkyl halide RF−X (X = Br, I), and a suitable donor molecule such as the β-naphtholate anion, were chosen as a paradigm example in this work. First, spectrum simulations were carried out with both CASPT2//CASSCF/PCM and TD-DFT/PCM methods. The computational results showed that the broad spectra in the visible range (360-550 nm) of the CT complexes originate from the 1(σπ*) excitation, accompanied by intermolecular electron transfer, which was also found to be closely related to the aggregate states of the donor and acceptor. Moreover, charge translocation analysis showed that a CT complex with larger charge transfer in the ground state exhibits smaller charge transfer in the 1(σπ*) excited state, causing a relative blue shift. Then, the excited-state potential energy surface (PES) was calculated at the CASPT2//CASSCF(12,10)/PCM level of theory to explore the photophysical properties of the CT complexes.
The photo-induced C-X (X = I, Br) bond cleavage was found to occur in the triplet state, which is accessible through a fast intersystem crossing (ISC) process controlled by the strong spin-orbit coupling arising from the heavy iodine and bromine atoms. Importantly, this rapid fragmentation process can compete with and suppress the backward electron transfer (BET) event, facilitating the subsequent photochemical transformations. Finally, the radical coupling pathways were also inspected, showing that the radical chain propagation pathway proceeds readily, with a small energy barrier of no more than 3.0 kcal/mol, which is the key factor promoting the efficiency of the photochemical reactions induced by CT complexes. In conclusion, theoretical investigations were performed to explore the photophysical and photochemical properties of the CT complexes, as well as the mechanism of the radical coupling reactions they mediate. The computational results and findings in this work can provide critical insights into the mechanism-based design of new types of EDA complexes.
Keywords: charge transfer complex, electron transfer, multiconfigurational perturbation theory, radical coupling
Procedia PDF Downloads 143
93 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder
Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada
Abstract:
From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or awareness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream of stimuli (e.g., syllables) in which, unbeknownst to the participants, the stimuli are grouped into triplets that always appear together in the stream (e.g., ‘tokibu’, ‘tipolu’), with no pauses between them (e.g., ‘tokibutipolugopilatokibu’) and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented (‘tokibu’) from new sequences never presented together during exposure (‘kipopi’), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite its widespread use to test SL, the 2-AFC task has come under increasing criticism: it is an offline, post-learning task that only assesses the outcome of the learning that occurred during the preceding exposure phase and that might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP).
One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age typical language development (TLD) controls, who were exposed to an auditory stream containing eight three-syllable nonsense words, four with high TPs and four with low TPs, to analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether prior knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP ‘words’. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation
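The transitional probability statistic mentioned above is simple to compute: TP(xy) = count(xy) / count(x) over the syllable stream. A toy sketch, using the two example triplets from the text plus a third word from the example stream ('gopila'), with a randomly ordered stream rather than the actual child stimuli:

```python
import random
from collections import Counter

# Toy exposure stream: three nonsense words concatenated in random order,
# mimicking the structure (not the content) of the experiment above.
words = [["to", "ki", "bu"], ["ti", "po", "lu"], ["go", "pi", "la"]]
rng = random.Random(1)
stream = [s for _ in range(200) for s in rng.choice(words)]

def transitional_probabilities(syllables):
    """TP(xy) = count of pair xy / count of x as a pair-initial syllable."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

tp = transitional_probabilities(stream)
print(tp[("to", "ki")])   # within-word transition: always 1.0
print(tp[("bu", "ti")])   # across-word transition: well below 1.0
```

Within-word transitions are deterministic (TP = 1.0) while transitions across word boundaries are diluted by the random word order, and this TP dip is exactly the cue that is assumed to support word segmentation in the task above.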
Procedia PDF Downloads 188
92 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in enhancing the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous system of a multicopter-based UAV swarm in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs due to its small size and ease of interfacing with the UAV's onboard electronics, enabling high-resolution gamma spectroscopy and the characterization of radiation hazards. The multicopter platform, with fully autonomous flight, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector for the swarm, achieving source-seeking behavior. Also, a spinning formation is studied in both cases to improve gradient estimation near a radiation source.
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated a collective and coordinated movement for estimating the gradient vector of the radiation field and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation
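The gradient-based heading calculation described in this abstract can be sketched numerically: with three or more sensor readings at known positions, a local plane fit yields the planar gradient of the radiation field, whose direction serves as the swarm heading. This is an illustrative reconstruction, not the authors' implementation; the positions and count rates below are invented.

```python
import numpy as np

def heading_from_readings(positions, counts):
    """Estimate the planar radiation gradient from UAV positions and
    count rates, then return a unit heading vector toward the source.

    positions: (n, 2) array of UAV x, y coordinates (n >= 3)
    counts:    (n,) array of gamma count rates at those positions
    """
    positions = np.asarray(positions, dtype=float)
    counts = np.asarray(counts, dtype=float)
    # Fit a local plane I(x, y) = a + gx*x + gy*y by least squares;
    # (gx, gy) approximates the spatial gradient of the radiation field.
    A = np.column_stack([np.ones(len(counts)), positions])
    coeffs, *_ = np.linalg.lstsq(A, counts, rcond=None)
    grad = coeffs[1:]
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros(2)  # flat field: no preferred heading
    return grad / norm  # climb the gradient toward the source

# Three UAVs in a circular formation around (0, 0); a source to the
# east produces higher counts at larger x (invented numbers).
pos = [(1.0, 0.0), (-0.5, 0.87), (-0.5, -0.87)]
cts = [120.0, 80.0, 80.0]
print(heading_from_readings(pos, cts))  # points along +x
```

With exactly three UAVs the plane fit is exact; with four (the tetrahedron case) the same least-squares fit extends naturally to a 3D gradient.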
91 The Effect of Environmental Assessment Learning in Evacuation Centers on the COVID-19 Situation
Authors: Hiromi Kawasaki, Satoko Yamasaki, Mika Iwasa, Tomoko Iki, Akiko Takaki
Abstract:
In basic nursing, the conditions necessary for maintaining human health (temperature, humidity, illumination, distance from others, noise, moisture, meals, and excretion) were explained. Nursing students often think of these conditions in the context of a hospital room. In order to make students think of these conditions in terms of an environment necessary for maintaining health and preventing illness among residents, in the third year of community health nursing, students learned how to assess and improve the environment, particularly via the case of shelters in the event of a disaster. The importance of environmental management increased in 2020 as a preventive measure against COVID-19 infection. We verified the effect of the lessons, which were conducted through distance learning. Sixty third-year nursing college students consented to participate in this study. Knowledge of the environmental standards needed for conducting an environmental assessment was examined before and after class, and the percentages of correct answers were compared. The χ² test was used, with a 5% significance level. The measures were evaluated via a report submitted by the students after class. Student descriptions were analyzed qualitatively and descriptively with respect to expected health problems and suggestions for improvement. Students had already learned about the environment in basic nursing in their second year. The percentages of correct answers for external environmental values concerning interpersonal distance, illumination, noise, and room temperature (p < 0.001) increased significantly after taking the class. The correct-answer rate for humidity was 83.3% before class and 93.3% after class (p = 0.077). Regarding the body, the percentage of students who answered correctly was 70% or more both before and after the class. The students’ reports listed overcrowding, high humidity/high temperature, and the number of toilets as health hazards.
Health disorders to be prevented were heat stroke, infectious diseases, and economy class syndrome; the improvement methods recommended were ventilation, stretching, hydration, and waiting at home. After the public health nursing class, the students were able not only to propose environmental management of a hospital room but also to understand the environment in terms of the lives of individuals, environmental assessment, and solutions to health problems. The rate of correct answers for basic items learned in the second year was already high before and after class, and interpersonal distance and ventilation were described by the students. Students were able to use what they had learned in basic nursing about the standards of the human mind and body. For the external environment, the memory of specific numerical values was ambiguous. The environment of the hospital room is controlled, so interest in numerical values may decrease. Nursing staff need to maintain and improve human health as well as hospital rooms. With COVID-19, it was thought that students would continue to consider this point not only in reference to hospital rooms but also in regard to places where people gather. Even in distance learning, students were able to learn the important issues and lessons.
Keywords: environmental assessment, evacuation center, nursing education, nursing students
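The before/after comparison of correct-answer percentages can be reproduced with a standard χ² test on a 2×2 contingency table. The counts below are reconstructed from the reported humidity figures (83.3% and 93.3% of 60 students) purely for illustration; the abstract does not state which χ² variant was used, so the p-value here need not match the reported 0.077.

```python
from scipy.stats import chi2_contingency

# Correct vs. incorrect answers on the humidity item, before and after
# the class (counts reconstructed from 83.3% and 93.3% of 60 students;
# illustrative only).
table = [[50, 10],   # before: correct, incorrect
         [56, 4]]    # after:  correct, incorrect
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # compare p against alpha = 0.05
```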
90 Installation of an Inflatable Bladder and Sill Walls for Riverbank Erosion Protection and Improved Water Intake Zone, Smoky Hill River – Salina, Kansas
Authors: Jeffrey A. Humenik
Abstract:
Environmental, Limited Liability Corporation (EMR) provided civil construction services to the U.S. Army Corps of Engineers, Kansas City District, for the placement of a protective riprap blanket on the west bank of the Smoky Hill River, the construction of two shore abutments, and the construction of a 140-foot-long sill wall spanning the Smoky Hill River in Salina, Kansas. The purpose of the project was to protect the riverbank from erosion and hold back water to a specified elevation, creating a pool to ensure adequate water intake for the municipal water supply. Geotextile matting and riprap were installed for streambank erosion protection. An inflatable bladder (AquaDam®) was designed to the specific river dimensions and installed to divert the river and allow for dewatering during the construction of the sill walls and cofferdam. The AquaDam® consists of water-filled polyethylene tubes that create barriers to divert water flow or prevent flooding. A challenge of the project was the fact that 100% of the sill wall was constructed within an active river channel. The threat of flooding of the work area, damage to the AquaDam® by debris, and the potential difficulty of water removal presented a unique set of challenges to the construction team. Upon completion of the West Sill Wall, floating debris punctured the AquaDam®. The manufacture and delivery of a new AquaDam® would have delayed project completion by at least six weeks. To keep the project ahead of schedule, the decision was made to construct an earthen cofferdam reinforced with riprap for the construction of the East Abutment and East Sill Wall section. During construction of the west sill wall section, a deep scour hole was encountered in the wall alignment that prevented EMR from using the natural rock formation as a concrete form for the lower section of the sill wall.
A formwork system was constructed that allowed the west sill wall section to be placed in two horizontal lifts of concrete poured on separate occasions. The first lift was poured to fill in the scour hole and act as a footing for the second lift. Concrete wall forms were set on the first lift and anchored to the surrounding riverbed so that the second lift could be poured in much the same fashion as a basement wall. EMR’s timely decisions to keep the project moving toward completion in the face of changing conditions enabled project completion two months ahead of schedule. The use of inflatable bladders is an effective and cost-efficient technology for diverting river flow during construction. However, a secondary plan should be part of the project design in the event that debris transported by the river punctures or damages the bladders.
Keywords: abutment, AquaDam®, riverbed, scour
89 Analysis of the Potential of Biomass Residues for Energy Production and Applications in New Materials
Authors: Sibele A. F. Leite, Bernno S. Leite, José Vicente H. D´Angelo, Ana Teresa P. Dell’Isola, Julio César Souza
Abstract:
The generation of bioenergy is one of the oldest and simplest applications of biomass and is one of the safest options for minimizing emissions of greenhouse gases and replacing the use of fossil fuels. In addition, the increasing development of technologies for biomass energy conversion, in parallel with the advancement of research in biotechnology and engineering, has enabled new opportunities for the exploitation of biomass. Agricultural residues offer great potential for energy use, and Brazil is in a prominent position in the production and export of agricultural products such as banana and rice. Despite the economic importance and growth prospects of these activities, and the increasing amount of agricultural waste, these residues are rarely explored for energy and the production of new materials. Brazil produces almost 10.5 million tons/year of rice husk and 26.8 million tons/year of banana stem. Therefore, the aim of this study was to analyze the potential of these biomass residues for energy production and applications in new materials. Rice husk (specify the type) and banana stem (specify the type) were characterized by physicochemical analyses using the following parameters: organic carbon, nitrogen (NTK), proximate analyses, FT-IR spectroscopy, thermogravimetric analyses (TG), calorific values and silica content. Rice husk and banana stem presented attractive higher heating values (from 11.5 to 13.7 MJ/kg), which may be compared to charcoal (21.25 MJ/kg). These results are due to the high organic matter content. According to the proximate analysis, both biomasses have high carbon content (fixed and volatile) and low moisture and ash content. In addition, data obtained by the Walkley-Black method point out that most of the carbon present in the rice husk (50.5 wt%) and in the banana stem (35.5 wt%) should be understood as organic carbon (readily oxidizable).
Organic matter was also detected by the Kjeldahl method, which gives the nitrogen values (especially in organic form) for both residues: 3.8 and 4.7 g/kg for rice husk and banana stem, respectively. TG and DSC analyses support the previous results, as they provide information about the thermal stability of the samples, allowing a correlation between thermal behavior and chemical composition. According to the thermogravimetric curves, there were two main stages of mass loss. The first and smaller one occurred below 100 °C and is attributable to water loss, and the second occurred between 200 and 500 °C, indicating decomposition of the organic matter. Within this broad event, the main loss was between 250-350 °C and is due to sugar decomposition (readily oxidizable components). Above 350 °C, the mass loss of the biomass may be associated with lignin decomposition. Spectroscopic characterization provided only qualitative information about the organic matter, but the spectra showed absorption bands around 1030 cm-1 which may be assigned to species containing silicon. This result is expected for the rice husk and deserves further investigation for the banana stem, as it can bring a different perspective to this biomass residue.
Keywords: rice husk, banana stem, bioenergy, renewable feedstock
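The staged reading of a TG curve described above (drying below 100 °C, organic-matter decomposition between 200 and 500 °C) can be sketched as a simple calculation on (temperature, mass) data. The curve below is synthetic; only the temperature windows follow the abstract.

```python
import numpy as np

def stage_loss(temps, masses, t_lo, t_hi):
    """Percent of initial mass lost between two temperatures,
    interpolating the TG curve at the window edges."""
    m_lo = np.interp(t_lo, temps, masses)
    m_hi = np.interp(t_hi, temps, masses)
    return 100.0 * (m_lo - m_hi) / masses[0]

# Synthetic TG curve: temperature (deg C) vs. sample mass (mg).
temps = np.array([25, 100, 200, 250, 350, 500, 600], dtype=float)
masses = np.array([10.0, 9.4, 9.3, 8.8, 5.5, 4.0, 3.9])

water = stage_loss(temps, masses, 25, 100)      # drying stage
organics = stage_loss(temps, masses, 200, 500)  # decomposition stage
print(f"moisture loss: {water:.1f}%  organic-matter loss: {organics:.1f}%")
```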
88 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA
Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray
Abstract:
Background: There remains a lack of international standard control reference materials for next generation sequencing-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, alleviating sample comparisons to pooled historical population-based data assemblies or statistical modelling approaches. We have validated such an approach for BRCA copy number variation analytics using iQRS™-CNVSUITE versus multiplex ligation-dependent probe amplification (MLPA). Methods: Standard BRCA copy number variation analysis was compared between multiplex ligation-dependent probe amplification and next generation sequencing using a cohort of 198 breast/ovarian cancer patients. Next generation sequencing-based copy number variation analysis of samples spiked with iQRS™ dsDNA biomimetics was performed using the proprietary CNVSUITE software. Multiplex ligation-dependent probe amplification analyses were performed on an ABI-3130 sequencer and analysed with the Coffalyser software. Results: Concordance of BRCA copy number variation events between multiplex ligation-dependent probe amplification and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between multiplex ligation-dependent probe amplification and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and range from 100 copies. Traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss based upon a relative ratio threshold, following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks.
BRCA copy number variation analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189 accredited, and it now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (iQRS™ dsDNA mimetic) in next generation sequencing offers a more robust, sample-independent approach for the assessment of copy number variation events compared to multiplex ligation-dependent probe amplification. The approach simplifies data analyses, improves independent sample data analyses, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single-sample copy number variation analytics and further decentralisation of diagnostics to single patient sample assessment.
Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration
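The reported sensitivity, specificity, PPV and NPV follow from the standard confusion-matrix formulas used when one method (here MLPA) is treated as the reference. A minimal sketch, with event counts invented for illustration (the abstract does not report the underlying table):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from event-level counts,
    treating the comparator method's calls as ground truth."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical per-amplicon counts, chosen only to illustrate the
# calculation (roughly matching a 99.88% sensitivity figure).
m = diagnostic_metrics(tp=850, fp=1, tn=4000, fn=1)
print({k: round(v, 4) for k, v in m.items()})
```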
87 A Sociological Study of the Potential Role of Retired Soldiers in the Post-War Development and Reconstruction in Sri Lanka
Authors: Amunupura Kiriwandeiye Gedara, Asintha Saminda Gnanaratne
Abstract:
The security forces can be described as a workforce that goes beyond the role of ensuring national security and contributes to the development process of the country. Soldiers follow combat training courses during their tenure, and they are also equipped with a variety of vocational training courses to satisfy the needs of the army, to give them the vocational capabilities needed to achieve the development and reconstruction goals of the country, and to serve society in the event of emergencies. With retirement, however, their relationship with the military is severed, and they become responsible for the future of their lives. The main purpose of this study was to examine how such professional capabilities can contribute to the development of the country, the current socio-economic status of retired soldiers, and the current application of the vocational training skills they mastered in the army to developing and rebuilding the country in an effective manner. After analyzing the available research literature related to this field, a conceptual framework was developed; following a qualitative research methodology, data obtained from case studies and interviews were analyzed using thematic analysis. Factors influencing early retirement include a lack of understanding of benefits, delays in promotions, not being properly evaluated for work, marrying on hasty decisions, and not having enough time to spend on family and household chores. Most of the soldiers are not aware of the various programs and benefits available to retirees. They do not have a satisfactory attitude towards the retirement guidance they receive from the army at the time of retirement. Also, due to a lack of understanding of how to use their vocational capabilities to successfully pursue their retirement life, the majority are employed in temporary jobs, while some are successful in post-retirement life due to their successful use of the training received.
Some live on pensions without engaging in any income-generating activities, and those who retire after 12 years of service face severe economic hardship as they do not receive pensions. Although they have received training in various fields, they do not use it for their benefit due to a lack of proper guidance. Although the government implements programs, retirees are not clearly aware of them. Barriers to the utilization of training include the absence of a system to identify the professional skills of retired soldiers, interest in civil society affairs, exploration of opportunities in the civil and private sectors, and the politicization of services. If they are given the opportunity, they will be able to contribute to the development and reconstruction process. The findings of the study further show that this has many social, economic, political, and psychological benefits not only for individuals but also for the country. Entrepreneurship training for all retired soldiers, improving officers' understanding, streamlining existing mechanisms, creating new mechanisms, setting up a separate unit for retirees and adapting them to civil society, private and non-governmental contributions, and training courses can be identified as potential means to improve the current situation.
Keywords: development, reconstruction, retired soldiers, vocational capabilities
86 Development of a Risk Governance Index and Examination of Its Determinants: An Empirical Study in the Indian Context
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Risk management has been gaining extensive focus from international organizations like the Committee of Sponsoring Organizations and the Financial Stability Board, and the foundation of an effective and efficient risk management system lies in a strong risk governance structure. In view of this, an attempt (perhaps a first of its kind) has been made to develop a risk governance index, which could be used as a proxy for the quality of risk governance structures. The index (normative framework) is based on eleven variables, namely, size of board, board diversity in terms of gender, proportion of executive directors, executive/non-executive status of the chairperson, proportion of independent directors, CEO duality, chief risk officer (CRO), risk management committee, mandatory committees, voluntary committees and existence/non-existence of a whistle-blower policy. These variables are scored on a scale of 1 to 5, with the exception of the status of the chairperson and CEO duality, which are scored on a dichotomous scale with a score of 3 or 5. Where there is a legal/statutory requirement in respect of the above-mentioned variables and there is non-compliance with that requirement, a score of one is assigned. Although there was no legal requirement, for the larger part of the study period, in the context of the CRO, the risk management committee and the whistle-blower policy, a score of 1 was still assigned in the event of their non-existence. Recognizing the importance of these variables in the context of the risk governance structure, and the fact that the study focuses on risk governance, the absence of these variables has been equated to non-compliance with a legal/statutory requirement. On this basis, the minimum possible score is 15 and the maximum is 55. In addition, an attempt has been made to explore the determinants of this index. For this purpose, the sample consists of the non-financial companies (429) that constitute the S&P CNX 500 index.
The study covers a 10-year period from April 1, 2005 to March 31, 2015. Given the panel nature of the data, the Hausman test was applied, and it suggested that fixed effects regression would be appropriate. The results indicate that the age and size of firms have a significant positive impact on their risk governance structures. Further, the post-recession period (2009-2015) witnessed a significant improvement in the quality of governance structures. In contrast, profitability (positive relationship), leverage (negative relationship) and growth (negative relationship) do not have a significant impact on the quality of risk governance structures. The value of rho indicates that about 77.74% of the variation in risk governance structures is due to firm-specific factors. Given that each firm is unique in terms of its risk exposure, risk culture, risk appetite, and risk tolerance levels, it appears reasonable to assume that the specific conditions and circumstances a company is beset with could be the biggest determinants of its risk governance structures. Given the recommendations put forth in the paper (particularly for regulators and companies), the study is expected to be of immense utility in an important yet neglected aspect of risk management.
Keywords: corporate governance, ERM, risk governance, risk management
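The scoring scheme above can be sketched directly: nine variables scored 1-5, plus two dichotomous variables (chairperson status and CEO duality) scored 3 or 5, summed to an index between 15 and 55. The variable names and example scores below are illustrative, not the authors' data.

```python
def risk_governance_index(scores):
    """Sum the eleven governance-variable scores described in the
    abstract. Nine variables take values 1-5; 'chair_status' and
    'ceo_duality' are dichotomous (3 or 5), so the index ranges
    from 15 to 55."""
    dichotomous = {"chair_status", "ceo_duality"}
    total = 0
    for name, value in scores.items():
        if name in dichotomous:
            assert value in (3, 5), f"{name} must be 3 or 5"
        else:
            assert 1 <= value <= 5, f"{name} must be 1-5"
        total += value
    return total

# Illustrative scoring for one firm-year (variable names paraphrased).
example = {
    "board_size": 4, "gender_diversity": 3, "exec_directors": 4,
    "chair_status": 5, "independent_directors": 5, "ceo_duality": 3,
    "cro": 1, "risk_committee": 5, "mandatory_committees": 4,
    "voluntary_committees": 2, "whistle_blower": 5,
}
print(risk_governance_index(example))  # between 15 and 55
```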
85 Examining the Effects of Ticket Bundling Strategies and Team Identification on Purchase of Hedonic and Utilitarian Options
Authors: Young Ik Suh, Tywan G. Martin
Abstract:
Bundling strategy is a common marketing practice today. Over the past decades, both academics and practitioners have increasingly emphasized the strategic importance of bundling in today’s markets. The reason for the increased interest in bundling strategy is the belief that it can significantly increase profits on an organization’s sales over time and that it is convenient for the customer. However, little effort has been made on ticket bundling and purchase considerations for hedonic and utilitarian options in the sport consumer behavior context. Consumers often face choices between utilitarian and hedonic alternatives in decision making. When consumers purchase certain products, they are only interested in the functional dimensions, which are called utilitarian dimensions. Others, on the other hand, focus more on hedonic features such as fun, excitement, and pleasure. Thus, the current research examines how utilitarian and hedonic consumption can vary in the typical ticket purchasing process. The purpose of this research is to understand the following two research themes: (1) the differential effect of discount framing on ticket bundling with utilitarian and hedonic options, and (2) the moderating effect of team identification on ticket bundling. In order to test the research hypotheses, an experimental study with a 3 (team identification: low, medium, and high) x 2 (discount frame: ticket bundled with a utilitarian product vs. with a hedonic product) mixed factorial design will be conducted and analyzed with a two-way ANOVA to determine whether there is a statistically significant difference between the purchase intentions for the two discount frames of ticket bundle sales across the team identification levels.
To compare mean differences between the two settings, we will create two conditions of ticket bundles: (1) offering a discount on a ticket ($5 off) if it is purchased along with a utilitarian product (e.g., iPhone 8 case, t-shirt, cap), and (2) offering a discount on a ticket ($5 off) if it is purchased along with a hedonic product (e.g., pizza, drink, fans featured on the big screen). The findings of the current ticket bundling study are expected to make theoretical and practical contributions by extending the research and literature pertaining to the relationship between team identification and sport consumer behavior. Specifically, this study can provide a reliable and valid framework for understanding the role of team identification as a moderator of behavioral intentions such as purchase intentions. From an academic perspective, the study will be the first known attempt to understand consumer reactions toward different discount frames related to ticket bundling. Even though the game ticket itself is the major commodity of sport event attendance and is significantly related to teams’ revenue streams, most recent ticket pricing research has been conducted in terms of economic or cost-oriented pricing and not from a consumer psychological perspective. For sport practitioners, this study will also provide significant implications. The results will imply that sport marketers may need to develop different ticketing promotions for loyal and non-loyal fans. Since loyal fans are more concerned with the ticket price than with tie-in products when they see ticket bundle sales, advertising campaigns should focus more on discounting the ticket price.
Keywords: ticket bundling, hedonic, utilitarian, team identification
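The 3 x 2 analysis described above can be sketched with a balanced two-way ANOVA on simulated ratings. This simplified sketch treats both factors as between-subjects (the study proposes a mixed design, with discount frame likely within-subjects), and all numbers are invented.

```python
import numpy as np

def two_way_anova(data):
    """Balanced two-way ANOVA. `data` has shape (a, b, n): factor A
    levels x factor B levels x replicates per cell. Returns the F
    statistics for both main effects and the interaction."""
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))   # per level of A
    mean_b = data.mean(axis=(0, 2))   # per level of B
    mean_ab = data.mean(axis=2)       # per cell
    ss_a = b * n * np.sum((mean_a - grand) ** 2)
    ss_b = a * n * np.sum((mean_b - grand) ** 2)
    ss_ab = n * np.sum(
        (mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2)
    ss_err = np.sum((data - mean_ab[:, :, None]) ** 2)
    df_a, df_b = a - 1, b - 1
    df_ab, df_err = df_a * df_b, a * b * (n - 1)
    ms_err = ss_err / df_err
    return {"F_identification": (ss_a / df_a) / ms_err,
            "F_frame": (ss_b / df_b) / ms_err,
            "F_interaction": (ss_ab / df_ab) / ms_err}

# Simulated purchase-intention ratings: 3 identification levels x
# 2 discount frames x 10 participants per cell (invented numbers).
rng = np.random.default_rng(0)
cell_means = np.array([[4.0, 4.2],   # low identification
                       [4.5, 5.0],   # medium
                       [5.0, 6.0]])  # high: hedonic frame pulls ahead
data = cell_means[:, :, None] + rng.normal(0, 0.5, size=(3, 2, 10))
print(two_way_anova(data))
```

The F statistics would then be compared against the F distribution with the corresponding degrees of freedom to decide significance.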
84 Learning Trajectories of Mexican Language Teachers: A Cross-Cultural Comparative Study
Authors: Alberto Mora-Vazquez, Nelly Paulina Trejo Guzmán
Abstract:
This study examines the learning trajectories of twelve language teachers who were former students of a BA in applied linguistics at a Mexican state university. In particular, the study compares the social, academic and professional trajectories of two groups of teachers: six locally raised and educated ones and six repatriated from the U.S. Our interest in undertaking this research lies in the wide variety of student backgrounds we, as professors in the BA program, have witnessed throughout the years it has been offered. Ever since the academic program started in 2006, the student population has been made up of students whose backgrounds are highly diverse in terms of English language proficiency, professional orientation and degree of cross-cultural awareness. Such diversity is further evidenced by the ongoing incorporation of transnational students who lived and studied in the United States for a significant period of time before their enrolment in the BA program. This, however, is not an isolated event, as other researchers have reported this phenomenon in other TESOL-related programs at Mexican universities. This suggests that their social and educational experiences are quite different from those of their Mexican-born and educated counterparts. In addition, an informal comparison of the two groups’ participation in formal teaching activities at the beginning of their careers also suggested significant differences in their teacher training and development needs. This issue raised questions about the need to examine the life and learning trajectories of these two groups of student teachers so as to develop an intervention plan aimed at supporting and encouraging their academic and professional advancement based on their particular needs. To achieve this goal, the study makes use of a combination of retrospective life-history research and the analysis of academic documents.
The first approach uses interviews for data collection. Through the use of a narrative life-history interview protocol, teachers were asked about their childhood home context, their language learning and teaching experiences, their stories of studying applied linguistics, and their self-descriptions. For the analysis of participants’ educational outcomes, a wide range of academic records, including reports of language proficiency exam results and language teacher training certificates, were used. The analysis revealed marked differences between the two groups of teachers in terms of academic and professional orientation. The locally educated teachers tended to graduate first, to look for further educational opportunities after graduation, to enter the language teaching profession earlier, and to expand their professional development options more than their peers. It is argued that these differences can be explained by their identities, which are made up of the interplay of influences such as their home context, their previous educational experiences and their cultural background. Implications for language teacher trainers and applied linguistics academic program administrators are provided.
Keywords: beginning language teachers, life-history research, Mexican context, transnational students
83 Quality Characteristics of Road Runoff in Coastal Zones: A Case Study on the A25 Highway, Portugal
Authors: Pedro B. Antunes, Paulo J. Ramísio
Abstract:
Road runoff is a linear source of diffuse pollution that can cause significant environmental impacts. During rainfall events, pollutants from both stationary and mobile sources, which have accumulated on the road surface, are washed off by the surface runoff. Road runoff in coastal zones may present high levels of salinity and chlorides due to the proximity of the sea and transported marine aerosols. Organic matter concentration, which appears to be correlated with this process, may also be significant. This study assesses this phenomenon with the purpose of identifying the relationships between monitored water quality parameters and intrinsic site variables. To achieve this objective, an extensive monitoring program was conducted on a Portuguese coastal highway. The study included thirty rainfall events, under different weather, traffic and salt deposition conditions, over a three-year period. Evaluations of various water quality parameters were carried out on over 200 samples. In addition, meteorological, hydrological and traffic parameters were continuously measured. The salt deposition rates (SDR) were determined by means of a wet candle device, which is an innovative feature of the monitoring program. The SDR, variable throughout the year, appears to show a high correlation with wind speed and direction, but mostly with wave propagation, so that it is lower in the summer in spite of the favorable wind direction in the case study. The distance to the sea, topography, ground obstacles and the platform altitude also seem to be relevant. The high salinity of the runoff was confirmed, increasing the concentrations of the water quality parameters analyzed, with significant seawater signatures. In order to estimate the correlations and patterns of different water quality parameters and variables related to weather, road section and salt deposition, the study included exploratory data analysis using different techniques (e.g.
Pearson correlation coefficients, cluster analysis and principal component analysis), confirming some specific features of the investigated road runoff. Significant correlations among pollutants were observed. Organic matter was highlighted as strongly dependent on salinity. Indeed, the data analysis showed that some important water quality parameters could be divided into two major clusters based on their correlations with salinity (including organic matter-associated parameters) and total suspended solids (including some heavy metals). Furthermore, the concentrations of the most relevant pollutants seemed to depend strongly on some meteorological variables, particularly the duration of the antecedent dry period prior to each rainfall event and the average wind speed. Based on the results of a monitoring case study in a coastal zone, it was shown that the SDR, associated with the hydrological characteristics of road runoff, can contribute to a better knowledge of the runoff characteristics and help to estimate the specific nature of the runoff and related water quality parameters.
Keywords: coastal zones, monitoring, road runoff pollution, salt deposition
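The two-cluster structure reported above (a salinity-linked group including organic matter, and a solids-linked group including some heavy metals) can be illustrated with a small PCA on standardized synthetic data: with two latent drivers, two components should capture most of the variance. All values below are invented; the parameter names only echo the abstract.

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD on standardized data: returns the component
    loadings and the fraction of variance each component explains."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)
    return Vt[:n_components], var[:n_components]

# Synthetic runoff samples with two latent drivers, mimicking the two
# reported clusters: salinity-linked (conductivity, chlorides, organic
# matter) and solids-linked (TSS, Zn). Values are invented.
rng = np.random.default_rng(1)
n = 60
sea = rng.lognormal(0, 0.4, n)      # marine-aerosol driver
solids = rng.lognormal(0, 0.4, n)   # particulate driver
X = np.column_stack([
    3.0 * sea + rng.normal(0, 0.1, n),     # conductivity
    2.5 * sea + rng.normal(0, 0.1, n),     # chlorides
    1.5 * sea + rng.normal(0, 0.2, n),     # organic matter (COD)
    2.0 * solids + rng.normal(0, 0.1, n),  # TSS
    1.2 * solids + rng.normal(0, 0.2, n),  # Zn
])
loadings, explained = pca(X)
print(np.round(explained, 2))  # two components dominate
```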
82 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz
Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto
Abstract:
Efavirenz (EFV) is a drug used as first-line treatment of AIDS. However, it has poor aqueous solubility and wettability, which causes problems with gastrointestinal absorption and bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SD). Therefore, this study aimed to characterize SDs of EFV with the polymers PVP-K30, PVPVA 64 and Soluplus® in order to find an optimal formulation for a future pharmaceutical product for AIDS therapy. Initially, physical mixtures (PM) and SDs with the polymers were obtained containing 10, 20, 50 and 80% of drug (w/w) by the solvent method. The best SD formulation was selected by an in vitro dissolution test. Finally, the chosen drug-carrier systems, in all ratios obtained, were analyzed by the following techniques: Differential Scanning Calorimetry (DSC), polarized light microscopy, Scanning Electron Microscopy (SEM) and absorption spectrophotometry in the infrared region (IR). From the dissolution profiles of EFV, PM and SD, the values of the Area Under the Curve (AUC) were calculated. The data showed that the AUC of all PMs is greater than that of EFV alone; this result derives from the hydrophilic properties of the polymers, which decrease the surface tension between the drug and the dissolution medium and thereby increase the wettability of the drug. In parallel, it was found that the SDs with the highest AUC values were those with the greatest amount of polymer (only 10% drug). As the amount of drug increases, these results either decrease or are statistically similar. The AUC values of the SDs with the three different polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SDs did not show the endothermic event characteristic of drug melting, suggesting that the EFV was converted to its amorphous state.
Polarized light microscopy showed significant birefringence in the PMs, but this was not observed in films of the SDs, again suggesting conversion of the drug from the crystalline to the amorphous state. In electron micrographs of all PMs, independently of the percentage of drug, the crystal structure of EFV was clearly detectable. In electron micrographs of the SDs, in all ratios investigated, particles with irregular size and morphology were observed, along with an extensive change in the appearance of the polymer, such that the two components could no longer be differentiated. The IR spectra of the PMs correspond to the overlapping of the polymer and EFV bands, indicating that there is no interaction between them, unlike the spectra of all SDs, which showed complete disappearance of the band related to the axial deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV, since SD PVPVA 64-EFV 10% was chosen as the best system at delaying crystallization of the drug, reaching the highest levels of supersaturation.
Keywords: characterization, dissolution, Efavirenz, solid dispersions
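The AUC comparison described in the abstract amounts to numerically integrating each dissolution profile over the sampling times, e.g. with the trapezoidal rule. The profiles below are hypothetical illustrations, not the study's measurements:

```python
import numpy as np

def auc_trapezoid(y, x):
    """Area under the curve by the trapezoidal rule."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# Hypothetical dissolution profiles: % drug dissolved at each time (min)
t = [0, 5, 10, 15, 30, 45, 60]
efv_alone = [0,  4,  7,  9, 14, 17, 19]   # poorly soluble crystalline drug
sd_10pct  = [0, 35, 60, 75, 88, 92, 94]   # SD with 10% drug load

auc_drug = auc_trapezoid(efv_alone, t)
auc_sd = auc_trapezoid(sd_10pct, t)
print(auc_drug, auc_sd)  # units: %·min; larger AUC = faster, fuller dissolution
```

Ranking formulations by AUC values computed this way is the comparison the abstract uses to select the best dispersion.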
Procedia PDF Downloads 631
81 Gold Nanoprobes Assay for the Identification of Foodborne Pathogens Such as Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis
Authors: D. P. Houhoula, J. Papaparaskevas, S. Konteles, A. Dargenta, A. Farka, C. Spyrou, M. Ziaka, S. Koussisis, E. Charvalos
Abstract:
Objectives: Nanotechnology is providing revolutionary opportunities for the rapid and simple diagnosis of many infectious diseases. Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis are important human pathogens. Diagnostic assays based on bacterial culture and identification are time-consuming and laborious, so there is an urgent need to develop rapid, sensitive, and inexpensive diagnostic tests. In this study, a gold nanoprobe strategy was developed that relies on the colorimetric differentiation of specific DNA sequences, based on differential aggregation profiles in the presence or absence of specific target hybridization. Method: Gold nanoparticles (AuNPs) were purchased from Nanopartz. They were conjugated with thiolated oligonucleotides specific for the femA gene for the identification of Staphylococcus aureus, the mecA gene for the differentiation of S. aureus and methicillin-resistant S. aureus (MRSA), the hly gene encoding the pore-forming cytolysin listeriolysin for the identification of Listeria monocytogenes, and the invA sequence for the identification of Salmonella enteritis. DNA isolation from Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis cultures was performed using the commercial kit NucleoSpin Tissue (Macherey-Nagel). Specifically, 20 μl of DNA was diluted in 10 mM PBS (pH 5). After denaturation for 10 min, 20 μl of AuNPs was added, followed by an annealing step at 58 °C. The presence of a complementary target prevents aggregation upon addition of acid and the solution remains pink, whereas in the opposite event it turns purple. The color could be detected visually and was confirmed with an absorption spectrum. Results: Specifically, 0.123 μg/μl DNA of S. aureus, L. monocytogenes and Salmonella enteritis was serially diluted from 1:10 to 1:100. Blanks containing PBS buffer instead of DNA were used.
The application of the proposed method to isolated bacteria produced positive results with all the species of S. aureus, L. monocytogenes and Salmonella enteritis using the femA, mecA, hly and invA genes, respectively. The minimum detection limit of the assay was 0.2 ng/μL of DNA: below 0.2 ng/μL of bacterial DNA, the solution turned purple after addition of HCl. None of the blank samples was positive. The specificity was 100%. The proposed method produced exactly the same results every time the evaluation was repeated (n = 4; 100% repeatability) using the femA, hly and invA genes. Using the mecA gene for the differentiation of S. aureus and MRSA, the method had a repeatability of 50%. Conclusion: The proposed method could be used as a highly specific and sensitive screening tool for the detection and differentiation of Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis. The use of AuNPs for the colorimetric detection of DNA targets represents an inexpensive and easy-to-perform alternative to common molecular assays. The technology described here may develop into a platform that could accommodate detection of many bacterial species.
Keywords: gold nanoparticles, pathogens, nanotechnology, bacteria
Procedia PDF Downloads 341
80 Right Atrial Tissue Morphology in Acquired Heart Diseases
Authors: Edite Kulmane, Mara Pilmane, Romans Lacis
Abstract:
Introduction: Acquired heart diseases remain one of the leading health care problems in the world. Changes in the myocardium of diseased hearts are complex, and their pathogenesis is still not fully clear. The aim of this study was to identify the appearance and distribution of apoptosis, homeostasis-regulating factors, and innervation and ischemia markers in right atrial tissue in different acquired heart diseases. Methods: Right atrial tissue fragments were taken from 12 patients during elective open heart surgery. All patients were operated on because of acquired heart diseases: aortic valve stenosis (5 patients), coronary heart disease (5 patients), coronary heart disease with secondary mitral insufficiency (1 patient) and mitral disease (1 patient). The mean age was (mean±SD) 70.2±7.0 years (range 58-83 years). The tissues were stained with haematoxylin and eosin for routine light-microscopic examination and processed for immunohistochemical detection of protein gene product 9.5 (PGP 9.5), human atrial natriuretic peptide (hANUP), vascular endothelial growth factor (VEGF), chromogranin A and endothelin. Apoptosis was detected by the TUNEL method. Results: All specimens showed degeneration of cardiomyocytes with lysis of myofibrils, diffuse vacuolization, especially in the perinuclear region, and variation in the size of cells and their nuclei. Severe invasion of connective tissue was observed in the main part of all fragments. The apoptotic index ranged from 24 to 91%. One specimen showed a region of newly formed microvessels with cube-shaped endotheliocytes that were positive for PGP 9.5, endothelin, chromogranin A and VEGF. In all fragments taken from patients with coronary heart disease, numerous PGP 9.5-containing nerve fibres were observed, except in the patient with secondary mitral insufficiency, who showed just a few PGP 9.5-positive nerves. In the majority of specimens, regions with cube-shaped VEGF-immunoreactive endocardial and epicardial cells were observed.
VEGF-positive endothelial cells were observed in just a few specimens. There was no significant difference in hANUP-secreting cells among the specimens. In all patients operated on for coronary heart disease, moderate to numerous chromogranin A-positive cells were seen, while the tissue of patients with aortic valve stenosis demonstrated just a few positive cells. Conclusions: The combined detection of different factors may indicate selectively disordered morphopathogenetic events of heart disease: the decrease of PGP 9.5 nerves suggests decreased innervation of the organ; increased apoptosis indicates cell death without ingrowth of connective tissue; the persistent presence of hANUP proves the unchanged homeostasis of cardiomyocytes, probably supported by the expression of chromogranins. Finally, the decrease of VEGF marks the regions of affected blood vessels in hearts affected by acquired heart disease.
Keywords: heart, apoptosis, protein gene product 9.5, atrial natriuretic peptide, vascular endothelial growth factor, chromogranin A, endothelin
Procedia PDF Downloads 295
79 Monsoon Controlled Mercury Transportation in Ganga Alluvial Plain, Northern India and Its Implication on Global Mercury Cycle
Authors: Anjali Singh, Ashwani Raju, Vandana Devi, Mohmad Mohsin Atique, Satyendra Singh, Munendra Singh
Abstract:
India is the biggest consumer of mercury and, consequently, a major emitter too. The increasing mercury contamination of India’s water resources has gained widespread attention, and atmospheric deposition is therefore of critical concern. However, little emphasis has been placed on the role of precipitation in the aquatic mercury cycle of the Ganga Alluvial Plain, which provides drinking water to nearly 7% of the world’s human population. A majority of the precipitation here falls during the monsoon season, within roughly 10% of the year. To evaluate the sources and transportation of mercury, water samples were analyzed from two selected sites near Lucknow, which have a strong hydraulic gradient towards the river. 31 groundwater samples from Jehta village (26°55’15’’N; 80°50’21’’E; 119 m above mean sea level) and 31 river water samples from the Behta Nadi (a tributary of the Gomati River draining into the Ganga River) were collected during the monsoon season on every alternate day between 1 July and 30 August 2019. Total mercury analysis was performed using Flow Injection Atomic Absorption Spectroscopy (AAS) with a mercury hydride system, and daily rainfall data were collected from the India Meteorological Department, Amausi, Lucknow. The ambient groundwater and river-water concentrations were both 2-4 ng/L, as there is no known geogenic source of mercury in the area. Before the onset of the monsoon season, the groundwater and the river water recorded mercury concentrations two orders of magnitude higher than the ambient concentrations, indicating the regional transportation of mercury from non-point sources into the aquatic environment. Maximum mercury concentrations in groundwater and river water were three orders of magnitude higher than the ambient concentrations after the onset of the monsoon season, characterizing the considerable mobilization and redistribution of mercury by monsoonal precipitation.
About 50% of both types of water samples showed mercury below the detection limit, which can mostly be linked to the low intensity of precipitation in August and to dilution by precipitation. The highest concentration (>1200 ng/L) of mercury in groundwater was recorded with a 6-day lag after the first precipitation peak. Two high-concentration peaks (>1000 ng/L) in river water were separately correlated with the surface flow and groundwater outflow of mercury. We attribute the elevated mercury concentrations in both types of water samples before the precipitation event to mercury originating from the extensive use of agrochemicals in mango farming in the plain. The elevated mercury concentrations after the onset of the monsoon, however, appear to reflect an increase in the area wetted with atmospherically deposited mercury, which migrated down from surface water to groundwater, as downslope migration is a fundamental mechanism in rivers of the alluvial plain. The present study underscores the significance of monsoonal precipitation in the transportation of mercury to the drinking water resources of the Ganga Alluvial Plain. It also suggests that future research should pursue a better understanding of the human health impact of mercury contamination and a quantification of the role of the Ganga Alluvial Plain in the global mercury cycle.
Keywords: drinking water resources, Ganga Alluvial Plain, India, mercury
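The 6-day lag between the first precipitation peak and the groundwater mercury peak is the kind of relationship that can be recovered with a simple lagged-correlation scan of the two daily series. A minimal sketch on synthetic data (the rainfall and mercury values are invented for illustration, with a known 6-day delay built in):

```python
import numpy as np

rng = np.random.default_rng(1)
days = 60
rain = np.zeros(days)
rain[[5, 20, 35]] = [40.0, 25.0, 30.0]   # mm/day, illustrative storm days

true_lag = 6                              # built-in delay for the demo
hg = np.zeros(days)
hg[true_lag:] = 20.0 * rain[:-true_lag]   # ng/L response to rainfall
hg = np.clip(hg + rng.normal(0, 5, days), 0, None)

# Correlate rainfall against mercury shifted by each candidate lag (days)
lags = range(15)
scores = [np.corrcoef(rain[:days - k], hg[k:])[0, 1] for k in lags]
best_lag = int(np.argmax(scores))
print(best_lag)  # recovers the built-in 6-day delay
```

Applied to the real monitored series, the same scan estimates the delay with which runoff-mobilized mercury reaches the aquifer.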
Procedia PDF Downloads 145
78 Culvert Blockage Evaluation Using Australian Rainfall And Runoff 2019
Authors: Rob Leslie, Taher Karimian
Abstract:
The blockage of cross drainage structures is a risk that needs to be understood and managed or lessened through design. Blockage is a random event, influenced by site-specific factors, which needs to be quantified for design. Under- and overestimation of blockage can have major impacts on flood risk and on the cost associated with drainage structures. The importance of this matter is heightened for projects located within sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located within floodplains, where blockage factors can influence flooding upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modeling has been the subject of extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying the method to a brownfield transport corridor upgrade case study in New South Wales. The results of applying the method are also validated against asset data and maintenance records. ARR 2019, Book 6, Chapter 6 includes advice and an approach for estimating the blockage of bridges and culverts. This paper concentrates specifically on the blockage of cross drainage structures. The method has been developed to estimate the blockage level for culverts affected by sediment or debris due to flooding. The objective of the approach is to evaluate a numerical blockage factor that can be utilized in a hydraulic assessment of cross drainage structures. The project included an assessment of over 200 cross drainage structures. In order to estimate a blockage factor for use in the hydraulic model, a process was developed that considers the qualitative factors (e.g., debris type, debris availability) and site-specific hydraulic factors that influence blockage.
A site rating associated with the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic inputs (i.e., flow velocity, flow depth) and qualitative factors at each crossing were combined in a spreadsheet in which the design blockage level for each cross drainage structure was determined based on the condition relating the inlet clear width, L10 (the average length of the longest 10% of the debris reaching the site) and the adjusted debris potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The results of this assessment demonstrate that the blockage factors estimated at each crossing location using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated here against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
Keywords: ARR 2019, blockage, culverts, methodology
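The decision logic described above, relating inlet clear width, L10 and the adjusted debris potential to a design blockage level, is straightforward to encode. The sketch below shows only the shape of such a lookup; the thresholds and blockage fractions are hypothetical placeholders, and actual design values must be taken from ARR 2019 Book 6, Chapter 6 itself:

```python
def design_blockage(inlet_clear_width_m: float, l10_m: float,
                    debris_potential: str) -> float:
    """Illustrative blockage-factor lookup in the spirit of ARR 2019.

    All numeric thresholds and fractions below are HYPOTHETICAL
    placeholders, not the published design values.
    """
    ratio = inlet_clear_width_m / l10_m   # opening width vs. debris length
    if ratio < 1.0:       # opening narrower than typical long debris
        base = 1.0
    elif ratio < 3.0:
        base = 0.5
    else:
        base = 0.25
    scale = {"high": 1.0, "medium": 0.5, "low": 0.25}[debris_potential]
    return min(base * scale, 1.0)

# A narrow culvert at a high-debris-potential site is assumed fully blocked
print(design_blockage(inlet_clear_width_m=2.0, l10_m=4.0,
                      debris_potential="high"))
```

Encoding the assessment this way, rather than evaluating each structure by hand, is what allowed the study to process over 200 crossings consistently in a spreadsheet-style tool.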
Procedia PDF Downloads 361
77 Archaeoseismological Evidence for a Possible Destructive Earthquake in the 7th Century AD at the Ancient Sites of Bulla Regia and Chemtou (NW Tunisia): Seismotectonic and Structural Implications
Authors: Abdelkader Soumaya, Noureddine Ben Ayed, Ali Kadri, Said Maouche, Hayet Khayati Ammar, Ahmed Braham
Abstract:
The historic sites of Bulla Regia and Chemtou are among the most important archaeological monuments in northwestern Tunisia; they flourished as large, wealthy settlements during the Roman and Byzantine periods (2nd to 7th centuries AD). An archaeoseismological study provides the first indications of the impact of a possible ancient strong earthquake in the destruction of these cities. Based on previous archaeological excavation results, including numismatic evidence, pottery, economic decline and urban transformation, the abrupt ruin and destruction of the cities of Bulla Regia and Chemtou can be bracketed between 613 and 647 AD. In this study, we carried out the first analysis of the earthquake archaeological effects (EAEs) observed during our field investigations in these two historic cities. The damage includes different types of EAEs: folds in regular pavements, displaced and deformed vaults, folded walls, tilted walls, collapsed keystones in arches, dipping broken corners, displaced or fallen columns, block extrusions in walls, penetrative fractures in brick-made walls and open fractures in regular pavements. These deformations are spread over 10 different sectors or buildings and comprise 56 measured EAEs. The structural analysis of the identified EAEs points to an ancient destructive earthquake that probably destroyed the Bulla Regia and Chemtou archaeological sites. We then analyzed these measurements using structural geological analysis to obtain the maximum horizontal strain of the ground (Sₕₘₐₓ) for each building-oriented damage feature. After the collection and analysis of these strain datasets, we plotted the orientation of the Sₕₘₐₓ trajectories on the map of the archaeological site (Bulla Regia).
We concluded that the Sₕₘₐₓ trajectories obtained within this site can be related to the mean direction of ground motion (oscillatory movement of the ground) triggered by a seismic event, as documented for some historical earthquakes across the world. These Sₕₘₐₓ orientations closely match the current active stress field, as highlighted by some instrumental events in northern Tunisia. In terms of the seismic source, we strongly suggest that the reactivation of a neotectonic strike-slip fault trending N50E must be responsible for this probable historic earthquake and for the recent instrumental seismicity in this area. This fault segment, affecting the folded Quaternary deposits south of Jebel Rebia, passes through the monument of Bulla Regia. Stress inversion of the data observed and measured along this fault shows an N150-160 trend of Sₕₘₐₓ under a transpressional tectonic regime, which is quite consistent with the GPS data and the current stress field in this region.
Keywords: NW Tunisia, archaeoseismology, earthquake archaeological effects, Bulla Regia-Chemtou, seismotectonics, neotectonic fault
Procedia PDF Downloads 49
76 Legislating for Public Participation and Environmental Justice: Whether It Solves or Prevent Disputes
Authors: Deborah A. Hollingworth
Abstract:
The key tenets associated with ‘environmental justice’ were first articulated in a global context in Principle 10 of the United Nations Declaration on Environment and Development at Rio de Janeiro in 1992 (the Rio Declaration). Its elements can be conflated to require: public participation in decision-making; the provision of relevant information about environmental hazards to those affected; access to judicial and administrative proceedings; and the opportunity for redress where a remedy is required. This paper examines the legislative and regulatory arrangements in place for the implementation of these elements in a number of industrialised democracies, including Australia. Most have, over time, made regulatory provision for these elements, even if they are not directly attributed to Principle 10 or the notion of environmental justice. The paper proposes that, of these elements, the most critical to the achievement of good environmental governance is a legislated recognition of the role of public participation. However, the paper considers that notwithstanding sound legislative and regulatory practices, environmental regulators frequently struggle to achieve effective engagement with the public where there is a complex decision-making scenario or long-standing enmity between a community and industry. This study considers the dilemma confronting environmental regulators in giving meaningful effect to the principles enshrined in Principle 10: even adherence to the legislative expression of Principle 10 does not prevent adverse outcomes. In particular, it considers as a case study a prominent environmental incident in Australia in 2014, in which an open-cut coalmine located near the regional township of Morwell caught fire during bushfire season. The fire, which took 45 days to extinguish, had a significant and adverse impact on the community in question and compounded a complex, and sometimes antagonistic, history between the mine and the township.
The case study exemplifies the complex factors that will often be present between industry, the public and regulatory bodies, and which confound the concept of environmental justice and the elements enshrined in Principle 10 of the Rio Declaration. The study proposes that such tensions and complexities will commonly be the reality for communities and regulators. However, to give practical effect to the outcomes contemplated by Principle 10, the paper considers that regulators may conceive of public intervention more broadly as including early interventions and formal opportunities for 'conferencing' between industry, community and regulators. These initiatives help to develop a shared understanding and identification of issues. It is proposed that, although important, options for 'alternative dispute resolution' are not sufficiently preventative, as they come into play only once a dispute has arisen. Similarly, 'restorative justice' programs, while important once an incident or adverse environmental outcome has occurred, are post-event and therefore necessarily limited. The paper argues that public participation at the outset, at the time of a proposal and before issues arise or eventuate, is demonstrably the most effective way of building commonality and an agreed methodology for working to resolve issues once they occur.
Keywords: environmental justice, alternative dispute resolution, domestic environmental law, international environmental law
Procedia PDF Downloads 309
75 The Bidirectional Effect between Parental Burnout and the Child’s Internalized and/or Externalized Behaviors
Authors: Aline Woine, Moïra Mikolajczak, Virginie Dardier, Isabelle Roskam
Abstract:
Background information: Becoming a parent is said to be the happiest event one can ever experience in one’s life. This popular (and almost absolute) truth, which no reasonable and decent human being would ever dare question on pain of being singled out as a bad parent, contrasts with the nuances that reality offers. Indeed, while many parents thrive in their parenting role, some others falter and become progressively overwhelmed by it, ineluctably caught in a spiral of exhaustion. Parental burnout (henceforth PB) sets in when parental demands (stressors) exceed parental resources. While it is now generally acknowledged that PB affects the parent’s behavior in terms of neglect of and violence toward their offspring, little is known about the impact that the syndrome might have on the children’s internalized (anxious and depressive symptoms, somatic complaints, etc.) and/or externalized (irritability, violence, aggressiveness, conduct disorder, oppositional disorder, etc.) behaviors. Furthermore, at the time of writing and to the best of our knowledge, no research has yet tested the reverse effect, namely that of the child's internalized and/or externalized behaviors on the onset and/or maintenance of parental burnout symptoms. Goals and hypotheses: The present pioneering research proposes to fill an important gap in the existing literature on PB by investigating the bidirectional effect between PB and the child’s internalized and/or externalized behaviors. Relying on a cross-lagged longitudinal study with three waves of data collection (4 months apart), our study tests a transactional model with bidirectional and recursive relations between observed variables at the three waves, as well as autoregressive paths and cross-sectional correlations.
Methods: At the time of writing, wave-two data are being collected via Qualtrics, and we expect a final sample of about 600 participants composed of French-speaking (snowball sample) and English-speaking (Prolific sample) parents. Structural equation modeling is employed using Stata version 17. In order to retain as much statistical power as possible, we use all available data and therefore apply maximum likelihood with missing values (mlmv) as the estimation method to compute the parameter estimates. To limit (insofar as possible) shared-method-variance bias in the evaluation of the child’s behavior, the study relies on a multi-informant evaluation approach. Expected results: We expect our three-wave longitudinal study to show that PB symptoms (measured at T1) raise the occurrence/intensity of the child’s externalized and/or internalized behaviors (measured at T2 and T3). We further expect the occurrence/intensity of the child’s externalized and/or internalized behaviors (measured at T1) to augment the risk for PB (measured at T2 and T3). Conclusion: Should our hypotheses be confirmed, our results will make an important contribution to the understanding of both PB and children’s behavioral issues, thereby opening interesting theoretical and clinical avenues.
Keywords: exhaustion, structural equation modeling, cross-lagged longitudinal study, violence and neglect, child-parent relationship
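While the study fits a full structural equation model in Stata, the core cross-lagged logic can be illustrated with two lagged regressions: each wave-2 outcome is regressed on both wave-1 variables, so the off-diagonal coefficients estimate the two directional effects. The sketch below simulates two waves with assumed (purely illustrative) path coefficients and recovers them by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600  # the target sample size mentioned in the study

# Simulated standardized scores with assumed cross-lagged effects
pb1 = rng.normal(0, 1, n)                        # parental burnout, wave 1
child1 = 0.3 * pb1 + rng.normal(0, 1, n)         # child behaviors, wave 1
pb2 = 0.6 * pb1 + 0.20 * child1 + rng.normal(0, 1, n)
child2 = 0.5 * child1 + 0.25 * pb1 + rng.normal(0, 1, n)

def ols(y, *xs):
    """OLS coefficients [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_pb = ols(pb2, pb1, child1)        # autoregressive PB path + child -> PB
b_child = ols(child2, child1, pb1)  # autoregressive child path + PB -> child
print(np.round(b_pb, 2), np.round(b_child, 2))
```

Both cross-paths come out positive, which is the pattern the study's bidirectional hypothesis predicts; the real analysis additionally handles missing data (mlmv) and three waves rather than two.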
Procedia PDF Downloads 73
74 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related desynchronization/synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques such as Common Spatial Patterns (CSP) are then used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is used as the training vector of a global SVM classifier.
The public EEG dataset IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (it has a 68% smaller dimension than the original signal), the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other datasets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
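A minimal SBCSP pipeline with FFT-based band filtering can be sketched in a few dozen lines. The sketch below is illustrative only: it uses synthetic two-class signals rather than the Competition IV data, three sub-bands instead of 33, and sums the LDA scores in place of the Bayesian meta-classifier and SVM stage described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n_tr, n_ch, n_s, fs = 40, 8, 256, 128   # trials/class, channels, samples, Hz

# Synthetic two-class "EEG": each class has extra 12 Hz power on one channel
t = np.arange(n_s) / fs
X = rng.normal(0, 1, (2 * n_tr, n_ch, n_s))
y = np.repeat([0, 1], n_tr)
X[y == 0, 4] += 2 * np.sin(2 * np.pi * 12 * t)
X[y == 1, 0] += 2 * np.sin(2 * np.pi * 12 * t)

def fft_bandpass(x, fs, lo, hi):
    """Band-pass by zeroing FFT coefficients outside [lo, hi] Hz."""
    F = np.fft.rfft(x, axis=-1)
    f = np.fft.rfftfreq(x.shape[-1], 1 / fs)
    F[..., (f < lo) | (f > hi)] = 0
    return np.fft.irfft(F, n=x.shape[-1], axis=-1)

def csp(Xa, Xb, n_comp=2):
    """CSP spatial filters via whitening + eigendecomposition."""
    Ca = sum(x @ x.T / np.trace(x @ x.T) for x in Xa) / len(Xa)
    Cb = sum(x @ x.T / np.trace(x @ x.T) for x in Xb) / len(Xb)
    d, V = np.linalg.eigh(Ca + Cb)
    P = V @ np.diag(d ** -0.5) @ V.T             # whitening matrix
    _, U = np.linalg.eigh(P @ Ca @ P)
    W = U.T @ P                                  # rows = spatial filters
    return np.vstack([W[:n_comp], W[-n_comp:]])  # extremes of the spectrum

def features(W, X):
    Z = np.einsum('cd,nds->ncs', W, X)           # spatially filtered trials
    v = Z.var(axis=-1)
    return np.log(v / v.sum(axis=1, keepdims=True))

def lda(F, y):
    m0, m1 = F[y == 0].mean(0), F[y == 1].mean(0)
    Sw = np.cov(F[y == 0], rowvar=False) + np.cov(F[y == 1], rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    return w, -w @ (m0 + m1) / 2

idx = rng.permutation(2 * n_tr)
tr, te = idx[:60], idx[60:]

# SBCSP: per-sub-band CSP + LDA; summed scores stand in for the meta-classifier
score = np.zeros(len(te))
for lo, hi in [(8, 12), (12, 16), (16, 24)]:
    Xf = fft_bandpass(X, fs, lo, hi)
    W = csp(Xf[tr][y[tr] == 0], Xf[tr][y[tr] == 1])
    w, b = lda(features(W, Xf[tr]), y[tr])
    score += features(W, Xf[te]) @ w + b

acc = np.mean((score > 0).astype(int) == y[te])
print("test accuracy:", acc)
```

The FFT filtering step works on the whole trial matrix at once, which is the source of the computational savings the paper reports over per-trial IIR filtering.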
Procedia PDF Downloads 128
73 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality
Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo
Abstract:
Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only to executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision-making, this work reports a model eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable to students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data and consider the use of the computer. The activity was designed according to the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareability and reusability, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The way in which the students sought to solve the activity was analyzed using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from a linear to a probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population from sample data; third, viewing the sample as an isolated event rather than as part of a random process that must be considered in the context of a probability distribution; and fourth, the difficulty of making decisions with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after a single intervention, allows both the interventions and the MEA to be modified so that students can identify erroneous solutions themselves while carrying out the MEA. The MEA also proved to be democratic, since several students who had participated little and earned low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the Civil Engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
Keywords: linear model, models and modeling, probability, randomness, sample
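The abstract notes that students used RStudio to plot binomial distributions and explore different sample sizes. A minimal sketch of that kind of exploration, written here in Python rather than R, with illustrative parameters (n = 20 trials, success probability p = 0.3) that are assumptions, not values taken from the study:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of observing exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Illustrative parameters, not taken from the study
n, p = 20, 0.3

# Full probability mass function over the possible counts 0..n
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]

total = sum(pmf)                               # all probabilities sum to 1
mean = n * p                                   # expected number of successes
mode = max(range(n + 1), key=pmf.__getitem__)  # most likely count

print(total, mean, mode)
```

Re-running the computation for several values of n shows how the distribution concentrates around n·p, mirroring the "exploring different sample sizes" task described above.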
Procedia PDF Downloads 118
72 Convergence of Strategic Tasks of Business Tourism and Hotel Industry Development: The Case of Georgia
Authors: Nana Katsitadze, Tamar Atanelishvili, Mariam Kutateladze, Alexandre Tushishvili
Abstract:
In the modern world, tourism has emerged as one of the most powerful economic sectors, and owing to its high economic performance, it is attractive to countries at various levels of economic development. The purpose of the present paper, dedicated to discussing the current problems of tourism development, is to find ways of bringing more benefits to the country from the sector. Georgia has been successfully developing leisure tourism for the last ten years, and at the next stage of development, business tourism gains particular importance for Georgia as a means of mitigating the negative socio-economic effects caused by the seasonality of tourism and as a high-spending tourism market. Therefore, the object of the paper is to study the factors that contribute to the development of business tourism. The paper uses research methods such as system analysis, synthesis, and analogy, as well as historical, comparative, economic, and statistical methods of analysis. The information base for the research is made up of statistics on the functioning of the tourism markets of Georgia and foreign countries, as well as official data provided by international organizations in the field of tourism. Based on the experience of business tourism around the world, and on identification of the successful start of business tourism development in Georgia and the factors behind it, a business tourism development model for Georgia has been developed. The model may be useful as methodological material for developing a business tourism development concept in countries that, like Georgia, have limited financial resources but are rich in tourism resources. At the initial stage of development (in the absence of convention centers), the suggested concept of business tourism development involves organizing small and medium-sized meetings both in large cities and in the regions by using high-class hotel infrastructure and event management services.
Relocating small meetings to the regions encourages inclusive development of the sector by raising awareness of these regions as tourist sites and by increasing employment and sales of other tourism and consumer products. Business tourism increases the number of hotel visitors in the off-season and improves hotel performance indicators, which enhances the attractiveness of investing in the hotel business. According to the present concept of business tourism development, at the initial stage business tourism draws on existing markets, including the internal market, neighboring markets, and the markets of geographically nearby countries; at the next stage, the concept involves generating tourists from relatively more distant target markets. As a result, by gaining experience in business tourism, enhancing professionalism, increasing awareness, and stimulating infrastructure development, the country will prepare the basis for moving to a higher stage of tourism development. In addition, experience has shown that attracting large customers in this field requires activation of state policy and active use of the state's marketing mechanisms and tools.
Keywords: hotel industry development, MICE model, MICE strategy, MICE tourism in Georgia
Procedia PDF Downloads 155