Search results for: location detection
444 The Antioxidant Activity of Grape Chkhaveri and Its Wine Cultivated in West Georgia (Adjaria)
Authors: Maia Kharadze, Indira Djaparidze, Maia Vanidze, Aleko Kalandia
Abstract:
The chemical composition and antioxidant activity of different vine varieties are studied worldwide in relation to breed purity and growing location; to our knowledge, no such research has yet been conducted in Georgia. The object of our research was the Chkhaveri vine, one of the oldest varieties of the Black Sea basin. We studied Chkhaveri grapes grown at different altitudes, together with their juice and wine (semi-dry rosé produced with European technology), and determined their technical markers, the qualitative and quantitative composition of their biologically active compounds, and their antioxidant activity. Total phenols were determined using the Folin-Ciocalteu reagent; flavonoids, catechins, and anthocyanins by spectral methods; and antioxidant activity by the DPPH method. Individual compounds were identified by HPLC-UV-Vis and UPLC-MS. Six Chkhaveri samples, grown at altitudes of 5, 300, 360, 380, 400, and 780 meters, were analyzed. The sample from 360 m is distinguished by its cluster mass (383.6 g) and high sugar content (20.1%); the sample from 5 m by its high acidity (0.95%). Unlike other grape varieties, this concentration of sugar combined with relatively low levels of citric acid gives Chkhaveri wine its individuality. Biologically active compounds of Chkhaveri were studied in 2014, 2015, and 2016. In the 2016 fruit samples, total phenols vary from 976.7 to 1767.0 mg/kg, anthocyanins from 721.2 to 1630.2 mg/kg, and flavonoids from 300.6 to 825.5 mg/kg. The highest anthocyanin content, 1630.2 mg/kg, was found in the Chkhaveri grown at 780 m; accordingly, its phenol and flavonoid contents are also high, 1767.0 mg/kg and 825.5 mg/kg. These values are lowest in the samples gathered at 5 m above sea level: anthocyanins 721.2 mg/kg, total phenols 976.7 mg/kg, and flavonoids 300.6 mg/kg.
The highest amounts of bioactive compounds are found in the high-altitude Chkhaveri samples: as altitude rises, the environment becomes harsher, and the plant develops a stronger defense system based on phenolic compounds. The technology used for wine production also plays a major role in the composition of the final product, and optimal maceration and ageing techniques were worked out. Immediately after pressing, Chkhaveri juice contains no anthocyanins; their amount rises during maceration. After fermentation on the dregs, anthocyanins reach 55% (521.3 mg/L), total phenols 80% (1057.7 mg/L), and flavonoids 23.5 mg/L. The antioxidant activity of the samples was determined as the 50% inhibition effect. All samples show high antioxidant activity; for instance, the sample from 780 m above sea level shows an antioxidant activity of 53.5%, relatively high compared with 30.5% for the sample from 5 m. There is thus a correlation between anthocyanin content and antioxidant activity. This project was carried out with the financial support of the Georgia National Science Foundation (Grant AP/96/13, Grant 216816); the ideas in this publication belong to the authors and may not represent the opinion of the Georgia National Science Foundation.
Keywords: antioxidants, bioactive content, wine, chkhaveri
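As a rough illustration of the DPPH 50% inhibition metric referred to above, a minimal sketch of the standard percent-inhibition calculation; the absorbance values are made up for illustration, not taken from the study:

```python
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent inhibition of the DPPH radical: the drop in absorbance
    (typically read around 517 nm) relative to the control solution."""
    return (a_control - a_sample) / a_control * 100.0

# Illustrative absorbance readings: a sample that halves the control
# absorbance corresponds to 50% inhibition (the IC50 point).
print(round(dpph_inhibition(0.80, 0.40), 1))
```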
Procedia PDF Downloads 230
443 Cytochrome B Diversity and Phylogeny of Egyptian Sheep Breeds
Authors: Othman E. Othman, Agnés Germot, Daniel Petit, Abderrahman Maftah
Abstract:
Threats to biodiversity are increasing due to the loss of genetic diversity within the species utilized in agriculture. As less productive, locally adapted native breeds are progressively replaced by highly productive ones, the number of threatened breeds grows. Under these conditions, it is more strategically important than ever to preserve as much farm animal diversity as possible, to ensure a prompt and proper response to the needs of future generations. Mitochondrial DNA (mtDNA) sequencing has been used to explain the origins of many modern domestic livestock species. Studies based on sequencing of sheep mitochondrial DNA have shown that domestic sheep breeds worldwide fall into five maternal lineages: A, B, C, D, and E. Because of Egypt's eastern location in the Mediterranean basin and the presence of fat-tailed sheep breeds (a character quite common in Turkey and Syria, where genotypes seem quite primitive), phylogenetic studies of Egyptian sheep breeds are particularly attractive. In this work, we aimed to clarify the genetic affinities, biodiversity, and phylogeny of five Egyptian sheep breeds using cytochrome B sequencing. Blood samples were collected from 63 animals belonging to the five tested breeds: Barki, Rahmani, Ossimi, Saidi, and Sohagi. Total DNA was extracted, and a specific primer allowed conventional PCR amplification of the cytochrome B region of mtDNA (approximately 1272 bp). The PCR products were purified and sequenced. The 63 sequences were aligned using BioEdit software, and DnaSP 5.00 was used to identify sequence variation and polymorphic sites in the alignment. The results showed 34 polymorphic sites defining 18 haplotypes. Haplotype diversity in the five tested breeds ranged from 0.676 in the Rahmani breed to 0.894 in the Sohagi breed.
The genetic distances (D) and the average number of pairwise differences (Dxy) between breeds were estimated. The lowest distance was observed between Rahmani and Saidi (D: 1.674; Dxy: 0.00150), and the highest between Ossimi and Sohagi (D: 5.233; Dxy: 0.00475). A neighbour-joining phylogenetic tree was constructed using MEGA 5.0 software, with the 63 analyzed sequences aligned against reference sequences of the different haplogroups. The phylogeny showed the presence of three haplogroups (HapA, HapB, and HapC) among the 63 examined samples; the other two haplogroups described in the literature (HapD and HapE) were not found. Fifty of the 63 tested animals cluster with haplogroup B (79.37%), whereas 7 animals cluster with haplogroup A (11.11%) and 6 with haplogroup C (9.52%). In conclusion, the phylogenetic reconstructions showed that the majority of Egyptian sheep belong to haplogroup B, the dominant haplogroup in Eastern Mediterranean countries such as Syria and Turkey. Some individuals belong to haplogroups A and C, suggesting past crosses with other breeds selected for growth and wool quality.
Keywords: cytochrome B, diversity, phylogeny, Egyptian sheep breeds
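The haplotype diversity values reported above are the kind DnaSP computes with Nei's gene-diversity formula; a minimal sketch on a toy haplotype list (not the study's data):

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's haplotype (gene) diversity: Hd = n/(n-1) * (1 - sum p_i^2),
    where p_i is the frequency of the i-th haplotype among n sequences."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    sum_sq = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_sq)

# Toy sample of 5 sequences carrying 3 haplotypes:
print(round(haplotype_diversity(["H1", "H1", "H2", "H2", "H3"]), 3))
```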
Procedia PDF Downloads 375
442 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in enhancing the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, providing vital real-time data for EM tasks. To achieve this goal, a fully autonomous multicopter UAV swarm in a 3D tetrahedron formation surveys the area of interest and performs radiation source localization. The CZT sensor used in this study suits small multicopter UAVs owing to its compact size and ease of interfacing with the UAV's onboard electronics, enabling high-resolution gamma spectroscopy and the characterization of radiation hazards. The fully autonomous multicopter platform is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach significantly improves the ability to search for and track radiation sources. Two approaches are developed, using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation: each UAV carries a CZT sensor, and the real-time radiation data are used to calculate a bulk heading vector that gives the swarm its source-seeking behavior. A spinning formation is also studied for both cases to improve gradient estimation near a radiation source.
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective, coordinated movement in estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper makes it easy to add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation
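The abstract does not specify the gradient estimator; one simple way to obtain a gradient (and hence heading) vector from simultaneous swarm readings is a least-squares fit of a locally linear field. A hedged sketch, with made-up UAV positions and count rates:

```python
import numpy as np

def estimate_gradient(positions, intensities):
    """Least-squares fit of a locally linear field I(p) ~ g.p + b to the
    swarm's simultaneous readings; the unit vector along g serves as the
    swarm's bulk heading toward the source."""
    P = np.asarray(positions, dtype=float)          # shape (k, 3)
    A = np.hstack([P, np.ones((P.shape[0], 1))])    # columns [x, y, z, 1]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(intensities, float), rcond=None)
    g = coeffs[:3]                                  # spatial gradient
    return g / np.linalg.norm(g)                    # unit heading vector

# Tetrahedron of 4 UAVs in a field that increases along +x:
pts = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0.5, 0.5, 1)]
vals = [10.0, 14.0, 12.0, 12.0]
print(estimate_gradient(pts, vals).round(3))
```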
Procedia PDF Downloads 101
441 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery
Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović
Abstract:
Capsaicin is a naturally occurring alkaloid obtained from the fruit of different Capsicum species. It has been employed topically to treat diseases such as rheumatoid arthritis, osteoarthritis, cancer pain, and diabetic nerve pain. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of intravenously administered capsaicin make topical application advantageous. In this study, we evaluated differences in the dissolution characteristics of a commercially available 11 mg capsaicin patch at different rotation speeds. The patch area is 308 cm² (22 cm x 14 cm), containing 36 µg of capsaicin per square centimeter of adhesive. USP Apparatus 5 (paddle over disc) is used for transdermal patch testing. The dissolution study was conducted (n = 6) on an ERWEKA DT800 dissolution tester (paddle type) with the addition of a disc. A 9 cm² piece cut from the 308 cm² patch was placed against the disc (delivery side up), retained with a stainless-steel screen, and exposed to 500 mL of phosphate buffer solution, pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C at different rotation speeds (50 ± 5, 100 ± 5, and 150 ± 5 rpm). Aliquots of 5 mL were withdrawn at 1, 4, 8, and 12 hours and replaced with 5 mL of fresh dissolution medium. The withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method was developed, optimized, and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v) at a flow rate of 0.9 mL/min, with an injection volume of 10 μL and detection at 222 nm.
The RP-LC method is simple, sensitive, and accurate, and allows fast (total chromatographic run time of 4.0 minutes) simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, the relative difference in the dissolution rate of capsaicin after 12 hours increased with rotation speed (100 rpm vs 50 rpm: 84.9 ± 11.3%; 150 rpm vs 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, 7 and a paddle-over-extraction-cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (paddle over disc) can be considered a discriminatory test, able to reveal the differences in the dissolution rate of capsaicin at different rotation speeds.
Keywords: capsaicin, in vitro, patch, RP-LC, transdermal
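The 5 mL sampling-and-replacement scheme described above removes a little drug at each time point; a minimal sketch of the standard cumulative-release correction for that loss (the concentration values are illustrative, not the study's data):

```python
def cumulative_release(concs, v_sample=5.0, v_vessel=500.0):
    """Correct measured concentrations for the drug removed with each
    5 mL aliquot that is replaced by fresh medium, then convert to the
    cumulative amount released into the 500 mL vessel."""
    corrected = []
    removed = 0.0  # running sum of concentrations already withdrawn
    for c in concs:
        corrected.append(c + (v_sample / v_vessel) * removed)
        removed += c
    return [c * v_vessel for c in corrected]  # amount (ug) if concs in ug/mL

# Illustrative concentrations (ug/mL) measured at 1, 4, 8, and 12 h:
print(cumulative_release([0.10, 0.30, 0.50, 0.60]))
```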
Procedia PDF Downloads 228
440 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study assesses the classification performance of two satellite datasets and evaluates the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savanna, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens of traditional land cover classification methods; by eliminating the need to download individual satellite images and providing access to an extensive archive of remote sensing data, it facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features such as slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented here not only enables the creation of precise land cover maps quickly but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy, and it emphasizes the synergy of different input sources in achieving superior accuracy.
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
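The workflow above ran on GEE's Random Forest classifier; as a hedged, offline analogue of the same idea, a scikit-learn sketch on synthetic pixel features (the band values, slope, and elevation columns here are all made up, purely for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy analogue of the GEE workflow: each row is a pixel with spectral
# bands plus terrain features; labels are the study's five classes.
rng = np.random.default_rng(0)
classes = ["forest", "savanna", "cropland", "settlement", "water"]
X = rng.normal(size=(500, 6))            # e.g. 4 bands + slope + elevation
y = rng.integers(0, len(classes), size=500)
X[np.arange(500), y] += 3.0              # make the classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"overall accuracy: {clf.score(X_te, y_te):.2f}")
```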
Procedia PDF Downloads 63
439 Gauging Floral Resources for Pollinators Using High Resolution Drone Imagery
Authors: Nicholas Anderson, Steven Petersen, Tom Bates, Val Anderson
Abstract:
Under the multiple-use management regime established in the United States for federally owned lands, government agencies have come under pressure from commercial apiaries to grant permits for the summer pasturing of honeybees on government lands. Federal agencies have struggled to integrate honeybees into their management plans and have little information on which to base regulations resolving how many colonies should be allowed in a single location and at what distance sets of hives should be placed. Many conservation groups have voiced their concerns regarding the introduction of honeybees to these natural lands, as they may outcompete and displace native pollinating species. Assessing the quality of an area in regard to its floral resources, pollen, and nectar can be important when attempting to create regulations for the integration of commercial honeybee operations into a native ecosystem. Areas with greater floral resources may be able to support larger numbers of honeybee colonies, while poorer resource areas may be less resilient to introduced disturbances. This study attempts to determine flower cover using high resolution drone imagery to help assess the floral resource availability to pollinators in high elevation, tall forb communities. This knowledge will help in determining the potential that different areas may have for honeybee pasturing and honey production. Roughly 700 images were captured at 23 m above ground level using a drone equipped with a Sony QX1 RGB 20-megapixel camera. These images were stitched together using Pix4D, resulting in a 60 m diameter high-resolution mosaic of a tall forb meadow. Using the program ENVI, a supervised maximum likelihood classification was conducted to calculate the percentage of total flower cover and flower cover by color (blue, white, and yellow). A complete vegetation inventory was taken on site, and the major flowers contributing to each color class were noted.
An accuracy assessment performed on the classification yielded 89% overall accuracy and a Kappa statistic of 0.855. With this level of accuracy, drones provide an affordable and time-efficient method for assessing floral cover over large areas. The next step of this project will be to determine the average pollen and nectar loads carried by each flower species. The addition of this knowledge will yield a quantifiable method of measuring the pollen and nectar resources of entire landscapes. This information will not only help land managers determine stocking rates for honeybees on public lands but also has applications in agriculture, aiding producers in determining the number of honeybee colonies necessary for proper pollination of fruit and nut crops.
Keywords: honeybee, flower, pollinator, remote sensing
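Overall accuracy and Kappa figures like those reported above come from an error (confusion) matrix; a minimal sketch of the two statistics, with illustrative class counts (not the study's data):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                 # observed agreement (OA)
    pe = (cm.sum(0) @ cm.sum(1)) / n**2   # agreement expected by chance
    return po, (po - pe) / (1 - pe)

# Toy 3-class matrix for the flower-color classes (illustrative counts):
cm = [[45, 3, 2],
      [4, 40, 1],
      [1, 2, 52]]
oa, kappa = accuracy_and_kappa(cm)
print(f"OA = {oa:.3f}, kappa = {kappa:.3f}")
```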
Procedia PDF Downloads 142
438 Analyzing the Websites of Institutions Publishing Global Rankings of Universities: A Usability Study
Authors: Nuray Baltaci, Kursat Cagiltay
Abstract:
University rankings, a relatively recent phenomenon, are at the center of attention and followed closely by different parties. Students consult them to make informed choices among candidate universities. University administrators and academicians can use them to evaluate their universities' relative performance against other institutions on academic, economic, international outlook, and other dimensions. Local institutions may use the ranking systems, as TUBITAK (The Scientific and Technological Research Council of Turkey) and YOK (Council of Higher Education) do in Turkey, to support students and award scholarships for undergraduate and graduate study abroad. Given how many parties rely on ranking systems, the importance of ranking institutions having clear, easy-to-use, well-designed websites is apparent. In this paper, a usability study was conducted on the websites of four global university ranking institutions: Academic Ranking of World Universities (ARWU), Times Higher Education, QS, and University Ranking by Academic Performance (URAP). A user-based approach was adopted, and usability tests were conducted with 10 graduate students at Middle East Technical University in Ankara, Turkey. Before the formal usability tests, a pilot study was completed to feed necessary changes back into the study settings. Participants' demographics, task completion times, paths traced to complete tasks, and satisfaction levels on each task and website were collected. Based on analyses of the collected data, the ranking websites were compared on the efficiency, effectiveness, and satisfaction dimensions of usability, as defined in ISO 9241-11.
Results showed that none of the selected ranking websites is superior to the others in overall effectiveness and efficiency. One notable result, however, was that the highest average task completion times for two of the designed tasks belonged to the Times Higher Education Rankings website. Evaluation of user satisfaction on each task and each website produced slightly different but broadly similar results. Examining satisfaction levels per task, the highest scores belonged to the ARWU and URAP websites; overall satisfaction per website showed that URAP scored highest, followed by ARWU. In addition, design problems and strong design features of these websites reported by the participants are presented in the paper. Since the study mainly addresses the design problems of the URAP website, the focus is on that site. Participants reported three main design problems: the unaesthetic and unprofessional design style of the website, improper map location on ranking pages, and improper listing of field names on the field ranking page.
Keywords: university ranking, user-based approach, website usability, design
Procedia PDF Downloads 397
437 Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential for human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that only a legitimate individual can be identified. However, the detection accuracy of current ECG-based methods is insufficient due to the high variability of an individual's heartbeats at different instances in time. These variations may arise from muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during ECG recording. In this study, a new method is proposed for human identification based on extracting the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by moving a neighborhood window along the ECG signal. At each instant, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity of the moving window. Binary weights are then multiplied with the pattern, yielding a local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of QRS complex detection.
Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects, drawn from the PTB database of the National Metrology Institute of Germany, showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
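The moving-window roughness descriptor described above resembles a one-dimensional local binary pattern; a hedged sketch of that idea (the window radius and the synthetic signals are assumptions for illustration, not the authors' parameters or data):

```python
import numpy as np

def local_roughness_histogram(signal, radius=4):
    """One-dimensional local binary patterns: at each sample, compare the
    2*radius neighbours with the centre, weight the resulting bits by
    powers of two, and histogram the codes over the whole signal."""
    codes = []
    for t in range(radius, len(signal) - radius):
        centre = signal[t]
        neighbours = np.r_[signal[t - radius:t], signal[t + 1:t + radius + 1]]
        bits = (neighbours >= centre).astype(int)
        codes.append(int((bits * 2 ** np.arange(2 * radius)).sum()))
    n_bins = 2 ** (2 * radius)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()   # normalised descriptor of one subject's signal

# Synthetic beats: a smooth wave and a rougher, noisier one give
# distinguishably different descriptors.
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(1).normal(size=400)
h1, h2 = local_roughness_histogram(clean), local_roughness_histogram(noisy)
print(np.abs(h1 - h2).sum() > 0)
```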
Procedia PDF Downloads 405
436 Shale Gas and Oil Resource Assessment in Middle and Lower Indus Basin of Pakistan
Authors: Amjad Ali Khan, Muhammad Ishaq Saqi, Kashif Ali
Abstract:
The focus of hydrocarbon exploration in Pakistan has been primarily on conventional resources. The Directorate General Petroleum Concessions (DGPC) has taken the lead in assessing indigenous unconventional oil and gas resources, resulting in a 'Shale Oil/Gas Resource Assessment Study' conducted with the help of USAID. This was critically required in energy-starved Pakistan, where the gap between indigenous oil and gas production and demand has long continued to widen. Exploration and exploitation of Pakistan's indigenous unconventional resources have become vital to meeting energy demand and reducing the country's oil and gas import bill. This study attempts to bridge a critical gap in geological information about the shale gas and oil potential of four formations, i.e., Sembar, Lower Goru, Ranikot, and Ghazij, in the Middle and Lower Indus Basins, which were selected for resource assessment. The primary objective was to estimate and establish the shale oil/gas resource of the study area by extensive geological analysis of the exploration, appraisal, and development wells drilled in the Middle and Lower Indus Basins, along with identification of fairways and sweet spots. The study covers the Lower and parts of the Middle Indus Basins, located in Sindh, southern Punjab, and the eastern parts of Balochistan province, with a total sedimentary area of 271,795 km². Initially, 1611 wells were reviewed, including 1324 wells drilled through the shale formations. Based on the availability of the required technical data, a detailed petrophysical analysis of 124 wells (21 confidential and 103 in the public domain) was conducted for the shale gas/oil potential of the above formations.
Core and cuttings samples from 32 wells and 33 geochemical reports on the prospective shale formations were available; these were analyzed to calibrate the petrophysical results against petrographic and laboratory analyses, increasing the credibility of the shale gas resource assessment. The study identified the most prospective intervals, mainly in the Sembar and Lower Goru Formations, for shale gas/oil exploration in the Middle and Lower Indus Basins of Pakistan. It recommends seven sweet spots for pilot projects, which will enable evaluation of the actual production capability and sustainability of Pakistan's shale oil/gas reservoirs, informing future strategies for exploring and exploiting these resources, including the fiscal incentives required to develop them. Several E&P companies that have shown willingness to participate at the appropriate time are being persuaded to form a consortium for the pilot projects. The location for the pilot project has been finalized through a series of technical sessions by geoscientists of the potential consortium members, following review and evaluation of the available studies.
Keywords: conventional resources, petrographic analysis, petrophysical analysis, unconventional resources, shale gas & oil, sweet spots
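The abstract does not name the petrophysical TOC method used to flag organic-rich intervals; a commonly used screen in shale evaluation is Passey's ΔlogR technique, sketched here under that assumption (the log readings and baselines below are illustrative, not from the study):

```python
import math

def passey_toc(resistivity, sonic, r_base, dt_base, lom=10.0):
    """Passey's delta-log-R estimate of total organic carbon (TOC, wt%),
    a common petrophysical screen for organic-rich shale intervals.
    resistivity in ohm.m, sonic transit time in us/ft, LOM = maturity."""
    dlogr = math.log10(resistivity / r_base) + 0.02 * (sonic - dt_base)
    return dlogr * 10 ** (2.297 - 0.1688 * lom)

# Illustrative log readings against a lean-shale baseline:
print(round(passey_toc(20.0, 95.0, r_base=2.0, dt_base=80.0), 2))
```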
Procedia PDF Downloads 51
435 Mapping Vulnerabilities: A Social and Political Study of Disasters in Eastern Himalayas, Region of Darjeeling
Authors: Shailendra M. Pradhan, Upendra M. Pradhan
Abstract:
Disasters are perennial features of human civilization. The recurring earthquakes, floods, and cyclones, among others, that result in massive loss of life and devastation are a grim reminder that, despite all our success stories of development and progress in science and technology, human society is perennially at risk from disasters. The apparent threat of climate change and global warming only worsens our disaster risks. The Darjeeling hills, situated in the Eastern Himalayan region of India and famous for their three Ts (tea, tourism, and the toy train), are equally notorious for their disasters. The recurring landslides and earthquakes, cyclone Aila, and the Ambootia landslide, considered the largest landslide in Asia, are strong evidence of the vulnerability of the Darjeeling hills to natural disasters. Given its geographical location along the Hindu-Kush Himalayas, the region is marked by rugged topography, a geophysically unstable structure, high seismicity, and a fragile landscape, making it prone to disasters of different kinds and magnitudes. Most studies of disasters in the Darjeeling hills are, however, scientific and geographical in orientation, focusing on the underlying geological and physical processes to the neglect of social and political conditions. This has led researchers and policy-makers to endorse and promote a particular type of discourse that does not consider the social and political aspects of disasters in the Darjeeling hills. Disaster, this paper argues, is a complex phenomenon and a result of diverse factors, both physical and human. The hazards caused by physical and geological agents, and the vulnerabilities produced by and rooted in the political, economic, social, and cultural structures of a society, together result in disasters. In this sense, disasters are as much a result of political and economic conditions as of the physical environment.
The human aspect of disasters, therefore, compels us to address the intricate social and political challenges that ultimately determine our resilience and vulnerability to disasters. Set within the above milieu, the aims of the paper are twofold: a) to provide a political and sociological account of disasters in the Darjeeling hills; and b) to identify and address the root causes of their vulnerabilities to disasters. In situating disasters in the Darjeeling hills, the paper adopts the Pressure and Release (PAR) model, which provides theoretical insight into the social and political aspects of disasters and the myriad related issues therein. The PAR model conceptualises risk as a complex combination of vulnerabilities, on the one hand, and hazards, on the other; within this framework, disasters occur when hazards interact with vulnerabilities. The root causes of vulnerability, in turn, can be traced to social and political structures such as legal definitions of rights, gender relations, and other ideological structures and processes. In this way, the PAR model helps the present study identify and unpack the root causes of vulnerabilities and disasters in the Darjeeling hills that have largely remained neglected in dominant discourses, thereby providing a more nuanced and sociologically sensitive understanding of disasters.
Keywords: Darjeeling, disasters, PAR, vulnerabilities
Procedia PDF Downloads 273
434 Diasporic Literature
Authors: Shamsher Singh
Abstract:
Diasporic literature involves the concept of a native land from which displacement occurs, and a record of harsh journeys undertaken on account of economic compulsions. Basically, a diaspora is a splintered community living in exile. The scattering initially signifies the location of a fluid, autonomous human space involving a complex set of negotiations and exchanges: between nostalgia and desire for the native land and the making of a new home; adapting to the relationships between minority and majority; acting as spokespersons for minority rights and for their people back in the native place; and, significantly, transacting the Contact Zone, a space charged with the possibility of multiple challenges. Diasporic writers work against the background of the sublime qualities of their homeland while, at the same time, trying to fit themselves into the traditions and cultural values of strange communities or lands. Such literature also serves as an interconnection of the cultures involved; it is used to understand the customs of different cultures and countries and is a source of inspiration globally. Although diasporic literature originated in the 20th century, it spread to countries such as Britain, Canada, America, Denmark, the Netherlands, Australia, Kenya, Sweden, Kuwait, and different parts of Europe. The word 'diaspora' denotes the movement of people away from their own country or motherland. Historically, the diaspora is most often associated with the dispersion of the Jews; at present, the term is used for the dispersal of any social or cultural group. Such a group lives in two cultural streams at the same time: the culture left behind, and the new cultural situation to which it must adapt. The diasporic mind hangs between the birth land and the place of work at once, and this dual existence gives rise to a sense of dysphoria.
Litterateurs have had different experiences of this sensation: social, universal, political and economic, as well as experiences of the strange land. The struggle of these experiences is seen in diasporic literature. When a person moves to a different land or country to fulfil his dreams, discrimination in language and work and other difficulties with strangers make his attachment to his past more emotional and deeper. These past memories and relations create further difficulties in settling in a foreign land. He lives there physically, but his mental state dwells constantly in his past, and he ends his life in those background memories. A person living in diaspora is, in effect, a man of dual vision. Although this double vision expands his global consciousness and gives him the judgement to understand others, he must at the same time weigh his regard for his native land against the conditions he experiences in the foreign land, and he finds it difficult to survive in those conditions. It can be said that diasporic literature depicts a person or social group living a dual existence, and this inquisition into a divided life is what gives rise to diasporic literature.
Keywords: homeland sickness, language problem, quest for identity, materialistic desire
Procedia PDF Downloads 68
433 The Effect of Manure Loaded Biochar on Soil Microbial Communities
Authors: T. Weber, D. MacKenzie
Abstract:
This paper describes an advanced simulation environment for electronic systems (microcontroller, operational amplifiers, and FPGA). The simulation was used for the behaviour of non-linear dynamic systems with the required observer structure, working with parallel real-time simulation based on a state-space representation. The proposed deposited model was also used for electrodynamic effects, including ionising effects and eddy-current distribution. With the script and proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in such systems in real time; for further purposes, the spatial temperature distribution may also be used. With this system, uncertainties and disturbances may be determined. This provides a more precise estimation of the system states and, additionally, an estimation of the ionising disturbances that arise due to radiation effects in space systems. The results have also shown that a system can be developed specifically for the real-time calculation (estimation) of the radiation effects alone. Electronic systems can be damaged by impacts with charged-particle flux in a space or radiation environment. A Total Ionising Dose (TID) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeVcm²/mg may assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit: ionising radiation can activate an additional parasitic thyristor, and this short circuit between semiconductor elements can destroy the device unless protection and countermeasures are in place. Single-Event Burnout (SEB), on the other hand, increases the current between drain and source of a MOSFET and destroys the component in a short time. A Single-Event Gate Rupture (SEGR) can likewise destroy a dielectric of a semiconductor.
In order to be able to react to these processes, the presence of ionising radiation and dose must be calculated within a short time. For this purpose, sensors may be used for a realistic evaluation of the diffusion and ionising effects on the test system. A Peltier element is used to evaluate dynamic temperature increases (dT/dt), from which a measure of the ionisation processes, and thus of the radiation, is derived. In addition, a piezo element may be used to record highly dynamic vibrations and oscillations caused by impacts of the charged-particle flux. All available sensors are also used to calibrate the spatial distributions: from the measured values and the known locations of the sensors, the entire distribution in space can be calculated retroactively or more accurately. From this, the type of ionisation and its direct effect on the system can be determined, and preventive processes, up to shutdown, can be activated. The results show possibilities for performing more qualitative and faster simulations, independent of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.
Keywords: cattle, biochar, manure, microbial activity
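The temperature-slope criterion described above can be sketched as follows. This is an illustrative example only, not the paper's implementation: the function name, sampling values and threshold are all hypothetical.

```python
from typing import List, Tuple

def flag_ionisation_events(temps_c: List[float], dt_s: float,
                           threshold_c_per_s: float) -> List[Tuple[int, float]]:
    """Return (index, dT/dt) pairs where the temperature slope exceeds the threshold.

    temps_c: uniformly spaced Peltier-element temperature samples in deg C.
    dt_s: sampling interval in seconds.
    """
    events = []
    for i in range(1, len(temps_c)):
        slope = (temps_c[i] - temps_c[i - 1]) / dt_s  # forward difference
        if slope > threshold_c_per_s:
            events.append((i, slope))
    return events

# A slow drift followed by a sudden rise, as a radiation pulse might produce.
samples = [20.00, 20.01, 20.02, 20.03, 20.50, 21.10]
print(flag_ionisation_events(samples, dt_s=0.1, threshold_c_per_s=1.0))
```

In a real system the threshold would be calibrated against the sensor's normal thermal drift, so that only ionisation-driven heating is flagged.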
Procedia PDF Downloads 103
432 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining: it enables doctors to intervene early to prevent problems or improve patient outcomes, and it assists in early disease detection and customized treatment planning for every person. Doctors can tailor a patient's care by looking at the medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, data mining improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require for the number of patients they expect. This project used models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images. Algorithms such as k-means helped group patients, and association rule mining identified connections between treatments and patient responses. Time-series techniques supported resource management by predicting patient admissions. Together, these methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently: it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
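The patient-grouping step mentioned above can be sketched with a minimal pure-Python k-means. The two features and their values are synthetic assumptions for illustration, not data from the project, and the deterministic seeding is a simplification of the usual random initialization.

```python
import math

def kmeans(points, k, iters=50):
    """Minimal k-means: group feature vectors (e.g. [age, systolic BP]) into k clusters.

    Deterministic for this sketch: the first k points seed the centroids.
    Returns (centroids, labels)."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

# Two loose synthetic groups: younger/normotensive vs older/hypertensive patients.
patients = [[34, 118], [29, 121], [31, 115], [68, 158], [72, 165], [65, 150]]
centroids, labels = kmeans(patients, k=2)
print(labels)
```

The three younger patients end up sharing one label and the three older ones the other, which is the kind of cohort grouping the abstract alludes to.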
Procedia PDF Downloads 78
431 A Modified QuEChERS Method Using Activated Carbon Fibers as r-DSPE Sorbent for Sample Cleanup: Application to Pesticides Residues Analysis in Food Commodities Using GC-MS/MS
Authors: Anshuman Srivastava, Shiv Singh, Sheelendra Pratap Singh
Abstract:
A simple, sensitive and effective gas chromatography tandem mass spectrometry (GC-MS/MS) method was developed for the simultaneous analysis of multiple pesticide residues (organophosphates, organochlorines, synthetic pyrethroids and herbicides) in food commodities, using phenolic-resin-based activated carbon fibers (ACFs) as the reversed-dispersive solid phase extraction (r-DSPE) sorbent in a modified QuEChERS (Quick Easy Cheap Effective Rugged Safe) method. The acetonitrile-based QuEChERS technique was used for the extraction of the analytes from food matrices, followed by sample cleanup with ACFs instead of the traditionally used primary secondary amine (PSA). Different physico-chemical characterization techniques, such as Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray diffraction and Brunauer-Emmett-Teller surface area analysis, were employed to investigate the engineering and structural properties of the ACFs. The recovery of pesticides and herbicides was tested at concentration levels of 0.02 and 0.2 mg/kg in different commodities such as cauliflower, cucumber, banana, apple, wheat and black gram. The recoveries of all twenty-six pesticides and herbicides were within the acceptable limits (70-120%) of the SANCO guideline, with relative standard deviation values < 15%. The limits of detection and quantification of the method were in the ranges of 0.38-3.69 ng/mL and 1.26-12.19 ng/mL, respectively. In the traditional QuEChERS method, PSA used as the r-DSPE sorbent plays a vital role in the sample cleanup process and demonstrates good recoveries for multiclass pesticides. This study reports that ACFs are better at removing co-extractives than PSA, without compromising the recoveries of the pesticides from food matrices. Further, ACFs remove the need for the charcoal that is added alongside PSA in the traditional QuEChERS method to remove pigments.
The developed method will also be cost effective because ACFs are significantly cheaper than PSA. The proposed modified QuEChERS method is therefore more robust and effective and has better sample cleanup efficiency for multiclass, multi-pesticide residue analysis in different food matrices such as vegetables, grains and fruits.
Keywords: QuEChERS, activated carbon fibers, primary secondary amine, pesticides, sample preparation, carbon nanomaterials
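The SANCO acceptance criterion used above (mean recovery within 70-120%, RSD below 15%) can be sketched as a small calculation. The replicate concentrations in the example are hypothetical, not values from the study.

```python
from statistics import mean, stdev

def evaluate_recovery(measured_mg_kg, spiked_mg_kg,
                      low=70.0, high=120.0, max_rsd=15.0):
    """Check replicate recoveries against the SANCO acceptance window.

    measured_mg_kg: replicate concentrations found in a spiked sample.
    spiked_mg_kg: the known spiking level (e.g. 0.02 or 0.2 mg/kg).
    Returns (mean recovery %, RSD %, accepted?)."""
    recoveries = [100.0 * m / spiked_mg_kg for m in measured_mg_kg]
    rec = mean(recoveries)
    rsd = 100.0 * stdev(recoveries) / rec  # relative standard deviation
    return rec, rsd, (low <= rec <= high and rsd < max_rsd)

# Hypothetical triplicate at the 0.2 mg/kg spiking level.
rec, rsd, ok = evaluate_recovery([0.185, 0.192, 0.178], 0.2)
print(round(rec, 1), round(rsd, 1), ok)  # 92.5 3.8 True
```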
Procedia PDF Downloads 275
430 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education
Authors: Gleyce Assis Da Silva Barbosa
Abstract:
In recent years, we have witnessed vertiginous growth of large education companies. Daughters of national and world capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks formed by mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium and large schools and universities, teaching systems and other products and services. They are also able to weave their webs, directly or indirectly, in philanthropic circles, limited partnerships, family businesses and even public education, through various mechanisms of outsourcing, privatization and the commercialization of products for the sector. Although the growth of these groups in basic education seems to us a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to the higher education conglomerates and other sectors of the economy that form oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in setting guidelines that boosted the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer clients. Financial power combined with the neoliberalization of state public policies allowed the spread of social exclusion, the increase in individuals without access to basic services, deindustrialization, automation, capital volatility and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generates various impacts, such as the precarization of work. Understanding the connection between these processes, which engender the economy, allows us to see their consequences in labor relations and in the territory.
In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the directions of education in the national and even international scenario, since this process is linked to the multiple scales of financial globalization. The present research therefore has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. The methodology combined a survey of laws, data, and public policies on the subject; data from these companies made available on investor-relations websites; a survey of information on the global and national companies that operate in Brazilian basic education; and a mapping of the expansion of educational oligopolies using public data on the location of schools. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can bring to teaching work.
Keywords: financialization, oligopolies, education, Brazil
Procedia PDF Downloads 65
429 Evidence-Based in Telemonitoring of Users with Pacemakers at Five Years after Implant: The Poniente Study
Authors: Antonio Lopez-Villegas, Daniel Catalan-Matamoros, Remedios Lopez-Liria
Abstract:
Objectives: The purpose of this study was to analyze the clinical data, health-related quality of life (HRQoL) and functional capacity of patients using a telemonitoring follow-up system (TM) compared to patients followed up through standard outpatient visits (HM), 5 years after the implantation of a pacemaker. Methods: This is a controlled, non-randomised, non-blinded clinical trial, with data collection carried out 5 years after pacemaker implantation. The study was developed at the Hospital de Poniente (Almeria, Spain) between October 2012 and November 2013. The same clinical outcomes were analyzed in both follow-up groups. Health-related quality of life and functional capacity were assessed through the EuroQol-5D (EQ-5D) questionnaire and the Duke Activity Status Index (DASI), respectively. Sociodemographic characteristics and clinical data were also analyzed. Results: 5 years after pacemaker implantation, 55 of the 82 initial patients finished the study. Users with pacemakers had been assigned to either a conventional hospital follow-up group (HM = 34, 50 initially) or a telemonitoring system group (TM = 21, 32 initially). No significant differences were found between the groups in sociodemographic characteristics, clinical data, health-related quality of life or functional capacity according to the medical records and the EQ-5D and DASI questionnaires. In addition, conventional follow-up visits to the hospital were reduced by 44.84% (p < 0.001) in the telemonitoring group relative to the hospital monitoring group. Conclusion: The results obtained in this study suggest that telemonitoring of users with pacemakers is an option equivalent to conventional hospital follow-up in terms of health-related quality of life and functional capacity. Furthermore, it allows early detection of cardiovascular and pacemaker-related events and significantly reduces the number of in-hospital visits. Trial registration: ClinicalTrials.gov NCT02234245.
The PONIENTE study has been funded by the General Secretariat for Research, Development and Innovation, Regional Government of Andalusia (Spain), project reference number PI/0256/2017, under the research call 'Development and Innovation Projects in the Field of Biomedicine and Health Sciences', 2017.
Keywords: cardiovascular diseases, health-related quality of life, pacemakers follow-up, remote monitoring, telemedicine
Procedia PDF Downloads 129
428 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89
Authors: A. Chatel, I. S. Torreguitart, T. Verstraete
Abstract:
The paper deals with a single-point optimization of the LS89 turbine using an adjoint optimization, with the design variables defined within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, combining parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. We show that by parameterizing at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered, and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy, avoiding the limited arithmetic precision and truncation error of finite differences. The parametric effectiveness is then computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed.
However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization in which the parameterization is created using the optimal knot positions is a more efficient strategy for reaching a better optimum than the multilevel optimization where the positions of the knots are arbitrarily assumed.
Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness
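The knot insertion step described above can be sketched with Boehm's single-knot insertion for a degree-p B-spline. This is a minimal illustrative version, not the authors' implementation; it shows why a new control point can be introduced while the curve's shape is exactly preserved, which is what lets the optimizer enlarge the design space without moving the geometry.

```python
def insert_knot(ctrl, knots, p, u):
    """Boehm single-knot insertion for a degree-p B-spline.

    ctrl: list of control points (tuples); knots: non-decreasing knot vector.
    Returns (new_ctrl, new_knots) describing the same curve with one more
    control point and the knot u inserted once."""
    # Find the knot span k with knots[k] <= u < knots[k+1].
    k = max(i for i in range(len(knots) - 1) if knots[i] <= u)
    new_ctrl = list(ctrl[:k - p + 1])
    for i in range(k - p + 1, k + 1):
        # Each affected control point is a convex combination of two old ones.
        a = (u - knots[i]) / (knots[i + p] - knots[i])
        q = tuple((1 - a) * x0 + a * x1 for x0, x1 in zip(ctrl[i - 1], ctrl[i]))
        new_ctrl.append(q)
    new_ctrl.extend(ctrl[k:])
    new_knots = knots[:k + 1] + [u] + knots[k + 1:]
    return new_ctrl, new_knots

# Quadratic Bezier-like curve; inserting u = 0.5 adds one control point.
new_ctrl, new_knots = insert_knot([(0, 0), (1, 2), (2, 0)], [0, 0, 0, 1, 1, 1], p=2, u=0.5)
print(new_ctrl)   # [(0, 0), (0.5, 1.0), (1.5, 1.0), (2, 0)]
print(new_knots)  # [0, 0, 0, 0.5, 1, 1, 1]
```

The new control polygon still touches the original curve's midpoint (1, 1), illustrating shape preservation.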
Procedia PDF Downloads 112
427 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)
Authors: Ali Pourkazemi
Abstract:
The transient radar method (TRM) is one of the non-destructive methods introduced by the authors a few years ago. TRM can be classified as a wave-based non-destructive testing (NDT) method that can operate across a wide frequency range; any given measurement, however, uses a narrow band, selected between a few GHz and a few THz depending on the application. As a time-of-flight, real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately but also blindly, meaning that it requires no prior knowledge of the sample under test. For multi-layer structures, TRM is not only able to detect changes related to any parameter within the structure but can also measure the electromagnetic properties and thickness of each layer individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties as well as the thickness of individual building insulation materials, as single-layer structures, are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters, such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density, is investigated. The samples studied are 30 cm x 50 cm, with thicknesses varying from a few millimeters to 6 centimeters. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since TRM is a narrow-band, free-space, real-time sensing system with high-speed computation for analysis, it has a wide range of potential applications, e.g., in the construction, rubber, piping, wind energy and automotive industries, biotechnology, the food industry, pharmaceuticals, etc. Detection of metallic or plastic pipes, wires, etc. through or behind walls is a specific application for the construction industry.
Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies
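The time-of-flight relation underlying the thickness measurement can be sketched as follows. This assumes a known relative permittivity and a measured round-trip delay between the front-face and back-face reflections; the numeric values are illustrative, not from the paper.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness_m(delay_s: float, eps_r: float) -> float:
    """Estimate a single layer's thickness from the round-trip delay between
    the front-face and back-face reflections.

    d = c0 * delay / (2 * sqrt(eps_r)): the factor 2 accounts for the
    round trip, sqrt(eps_r) for the reduced wave speed in the dielectric."""
    return C0 * delay_s / (2.0 * eps_r ** 0.5)

# Hypothetical example: 0.60 ns round-trip delay in a board with eps_r = 2.25.
print(layer_thickness_m(0.60e-9, 2.25))  # about 0.06 m, i.e. 6 cm
```

A 6 cm result sits at the upper end of the sample thicknesses quoted in the abstract, which is why sub-nanosecond timing resolution matters for such measurements.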
Procedia PDF Downloads 69
426 Ochratoxin-A in Traditional Meat Products from Croatian Households
Authors: Jelka Pleadin, Nina Kudumija, Ana Vulic, Manuela Zadravec, Tina Lesic, Mario Skrivanko, Irena Perkovic, Nada Vahcic
Abstract:
Products of animal origin, such as meat and meat products, can contribute to human mycotoxin intake, either through indirect transfer from farm animals exposed to naturally contaminated grains and feed (carry-over effects) or through direct contamination with moulds or with naturally contaminated spice mixtures used in meat production. Ochratoxin A (OTA) is the mycotoxin considered of utmost importance from the public health standpoint in connection with meat products. The aim of this study was to investigate the occurrence of OTA in different traditional meat products circulating on Croatian markets during 2018, produced by a large number of households situated in the eastern and northern Croatian regions using a variety of technologies. Concentrations of OTA were determined in traditional meat products (n = 70), including dry fermented sausages (Slavonian kulen, Slavonian sausage, Istrian sausage and domestic sausage; n = 28), dry-cured meat products (pancetta, pork rack and ham; n = 22) and cooked sausages (liver sausages, black pudding sausages and pate; n = 20). OTA was analyzed by a quantitative screening immunoassay (ELISA) and confirmed for positive samples (above the limit of detection) by liquid chromatography tandem mass spectrometry (LC-MS/MS). Whereas no OTA-contaminated bacon samples were found, OTA levels in dry fermented sausages ranged from 0.22 to 2.17 µg/kg and in dry-cured meat products from 0.47 to 5.35 µg/kg, with 9% of samples positive in total. Besides possible primary contamination of these products arising from improper manufacturing or storage conditions, the observed OTA contamination could also be a consequence of secondary contamination resulting from contaminated feed given to the animals.
OTA levels in cooked sausages ranged from 0.32 to 4.12 µg/kg (5% positives) and could probably be linked to contaminated raw materials (liver, kidney and spices) used in sausage production. The results show occasional OTA contamination of traditional meat products, indicating that, to avoid such contamination in households, these products should be produced and processed under standardized and well-controlled conditions. Further investigations should be performed to identify mycotoxin-producing moulds on the surface of the products and to define preventive measures that can reduce the contamination of traditional meat products during household production and storage.
Keywords: Croatian households, ochratoxin-A, traditional cooked sausages, traditional dry-cured meat products
Procedia PDF Downloads 194
425 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique
Authors: Pavana Basavakumar, Devadas Bhat
Abstract:
Diabetes is a disease which often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences which could lead to the death of the patient. Conventional invasive tests for screening of diabetes are mostly painful, time consuming and expensive. There is also a risk of infection involved; therefore, it is essential to develop non-invasive methods to screen and estimate the level of blood glucose. Extensive research is going on with this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition based on the measured impedance. The prototype developed passes a negligible constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz - 4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects, with permission from the institutional ethics committee; along with impedance, the subjects' blood glucose values were also noted using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were depicted on Clarke's Error Grid, only 58% of the values predicted were clinically acceptable.
Since the objective of the project was to screen for diabetes rather than to estimate blood glucose exactly, the data were classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL' and 'HIGH', using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype is economical, fast and pain free, and can thus be used for mass screening of diabetes.
Keywords: Clarke's error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes
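The linear-SVM classification step can be sketched with a Pegasos-style sub-gradient trainer. This is a simplified binary stand-in for the study's three-class classifier: the two features and all data values are hypothetical, chosen only so the toy classes are linearly separable.

```python
def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient training of a linear SVM (hinge loss + L2).

    X: feature rows (a constant 1.0 is appended as a bias term);
    y: labels in {-1, +1}. Returns the learned weight vector."""
    Xb = [list(x) + [1.0] for x in X]
    w = [0.0] * len(Xb[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            t += 1
            eta = 1.0 / (lam * t)
            margin = yi * sum(wi * xij for wi, xij in zip(w, xi))
            w = [(1.0 - eta * lam) * wi for wi in w]  # L2 shrinkage
            if margin < 1.0:  # hinge-loss sub-gradient step
                w = [wi + eta * yi * xij for wi, xij in zip(w, xi)]
    return w

def predict(w, x):
    score = sum(wi * xij for wi, xij in zip(w, list(x) + [1.0]))
    return 1 if score >= 0 else -1

# Hypothetical normalized impedance features for two toy classes.
X = [(0.2, 0.3), (0.8, 0.9), (0.3, 0.1), (0.9, 0.7), (0.1, 0.2), (0.7, 0.8)]
y = [-1, 1, -1, 1, -1, 1]
w = train_linear_svm(X, y)
print([predict(w, xi) for xi in X])  # should reproduce y
```

The study's three-class setting would typically be handled with a one-vs-rest arrangement of such binary classifiers.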
Procedia PDF Downloads 326
424 Developing Digital Competencies in Aboriginal Students through University-College Partnerships
Authors: W. S. Barber, S. L. King
Abstract:
This paper reports on a pilot project to develop a collaborative partnership between a community college in rural northern Ontario, Canada, and an urban university in Oshawa, in the greater Toronto area. The partner institutions will collaborate to address the learning needs of university applicants whose goal is an undergraduate BA in Educational Studies and Digital Technology but who may not live in a geographical location that would facilitate this pathways process. The UOIT BA degree is attained through a 2+2 program, in which students with a two-year college diploma or equivalent can attain a four-year undergraduate degree. The goals of the project are: 1) to expand the BA program to include an additional stream covering serious educational games, simulations and virtual environments; 2) to develop fully online learning modules (using both synchronous and asynchronous technologies) for use by university applicants who are not geographically located close to a physical university site; and 3) to assess the digital competencies of all students, including members of local, distance and Indigenous communities, using a validated tool developed and tested by UOIT across numerous populations. This tool, the General Technical Competency Use and Scale (GTCU), will provide the collaborating institutions with data for analyzing how well students are prepared to succeed in fully online learning communities. Philosophically, the UOIT BA program is based on a fully online learning communities (FOLC) model that can be accessed from anywhere in the world through digital learning environments via audio-video conferencing tools such as Adobe Connect. It also follows models of adult learning and mobile learning, and makes a university degree accessible to the increasing demographic of adult learners who may use mobile devices to learn anywhere, anytime.
The program is based on key principles of Problem-Based Learning, allowing students to build their own understandings through the co-design of the learning environment in collaboration with the instructors and their peers. In this way, the degree allows students to personalize and individualize their learning based on their own culture, background and professional and personal experiences. Using modified flipped-classroom strategies, students interrogate video modules on their own time in preparation for one-hour discussions held in video conferencing sessions. As a consequence of the program's flexibility, students may continue to work full or part time. All of the partner institutions will co-develop four new modules, administer the GTCU and share data, while creating a new stream of the UOIT BA degree. This will increase accessibility for students bridging from community colleges to university through a fully digital environment. We aim to work collaboratively with Indigenous elders, community members and distance education instructors to increase opportunities for more students to attain a university education.
Keywords: aboriginal, college, competencies, digital, universities
Procedia PDF Downloads 216
423 Expression of miRNA 335 in Gall Bladder Cancer: A Correlative Study
Authors: Naseem Fatima, A. N. Srivastava, Tasleem Raza, Vijay Kumar
Abstract:
Introduction: Carcinoma of the gallbladder is the third most common lethal gastrointestinal disease, with the highest incidence and mortality rate among women in Northern India. Scientists have identified several risk factors that make a person more likely to develop gallbladder cancer; among these, deregulation of miRNAs has been demonstrated to be one of the most crucial. Changes in the expression of specific miRNA genes affect the control of inflammation, cell cycle regulation, stress response, proliferation, differentiation, apoptosis and invasion, and thus mediate processes in tumorigenesis. The aim of this study was to investigate the role of miRNA-335 as a possible molecular marker for early detection of gallbladder cancer in suspected cases. Material and Methods: A total of 20 consecutive patients with gallbladder cancer, aged between 30 and 75 years, were registered for the study. Total RNA was extracted from tissue using the mirVana miRNA Isolation Kit according to the manufacturer's protocol. miRNA-335- and U6 snRNA-specific cDNA were reverse-transcribed from total RNA using the TaqMan microRNA reverse-transcription kit according to the manufacturer's protocol. Using TaqMan miRNA probes (hsa-miR-335) and TaqMan Master Mix without AmpErase UNG, individual real-time PCR assays were performed in a 20 μL reaction volume on a real-time PCR system (Applied Biosystems StepOnePlus™) to detect miRNA-335 expression in tissue. Relative quantification of target miRNA expression was evaluated using the comparative cycle threshold (CT) method: the CT values of the target miRNA in gallbladder cancer were compared with those in non-cancerous cholelithiasis gallbladder tissue. Each sample was examined in triplicate. The Newman-Keuls multiple comparison test was used to assess the expression of miR-335.
Results: miRNA-335 was found to be significantly downregulated in gallbladder cancer tissue (P<0.001) when compared with non-cancerous cholelithiasis gallbladder cases. Of the 20 cases, 75% showed reduced expression of miRNA-335 and were at the last stage of the disease with a low overall survival rate, while the remaining 25% showed upregulated expression of miRNA-335 with a high survival rate. Conclusion: The present study showed that reduced expression of miRNA-335 is associated with advancement of the disease, and its deregulation may provide important clues to understanding it as a prognostic marker and opportunities for future research.
Keywords: carcinoma gallbladder, downregulation, miRNA-335, RT-PCR assay
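The comparative cycle-threshold method mentioned above is commonly computed as the 2^-ΔΔCt fold change. A minimal sketch follows; the Ct values and the function name are illustrative, not data from the study:

```python
# Comparative cycle-threshold (2^-ddCt) sketch: the target miRNA is normalized
# to the U6 snRNA reference in both the tumour sample and the control, and the
# difference of those deltas gives the fold change. Ct values are illustrative.

def delta_delta_ct(ct_target_sample, ct_ref_sample,
                   ct_target_control, ct_ref_control):
    """Fold change of the target miRNA in the sample relative to the control."""
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to U6 (sample)
    d_ct_control = ct_target_control - ct_ref_control # normalize to U6 (control)
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# A higher Ct means less template; here the tumour needs 3 extra cycles,
# i.e. an 8-fold downregulation.
fold = delta_delta_ct(ct_target_sample=30.0, ct_ref_sample=25.0,
                      ct_target_control=27.0, ct_ref_control=25.0)
print(fold)  # 0.125
```

A fold change below 1 corresponds to downregulation, which is how the 75% of cases with reduced miRNA-335 expression would appear in this calculation.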
Procedia PDF Downloads 361
422 Management Tools for Assessment of Adverse Reactions Caused by Contrast Media at the Hospital
Authors: Pranee Suecharoen, Ratchadaporn Soontornpas, Jaturat Kanpittaya
Abstract:
Background: Contrast media play an important role in disease diagnosis through the detection of pathologies. Contrast media can, however, cause adverse reactions after administration. Non-ionic contrast media are commonly used, and with them the incidence of adverse events is relatively low; the reactions most commonly found (10.5%) were mild and manageable and/or preventable. Pharmacists can play an important role in evaluating adverse reactions, including awareness of the specific preparation and the type of adverse reaction. As the most common types of adverse reactions are idiosyncratic or pseudo-allergic, common standards need to be established to prevent and control adverse reactions promptly and effectively. Objective: To measure the effect of using tools for symptom evaluation in order to reduce the severity, or prevent the occurrence, of adverse reactions to contrast media. Methods: Retrospective, descriptive research with data collected on adverse reaction assessment and Naranjo’s algorithm between June 2015 and May 2016. Results: Of the 1,500 participants with an adverse event evaluation, 158 (10.53%) had adverse reactions. Of these, 137 (9.13%) had a mild adverse reaction, including hives, nausea, vomiting, dizziness, and headache; these symptoms can be treated (i.e., with antihistamines or anti-emetics), and the patient recovers completely within one day. The group with moderate adverse reactions, numbering 18 cases (1.2%), had hypertension or hypotension and shortness of breath. Severe adverse reactions numbered 3 cases (0.2%) and included swelling of the larynx, cardiac arrest, and loss of consciousness, requiring immediate treatment. No other complications under close medical supervision were recorded (i.e., corticosteroid use, epinephrine, dopamine, atropine, or life-saving devices).
Using the guideline, therapies are divided into general and specific and are administered according to severity, risk factors, and the contrast media agent received. Patients with high-risk factors were screened and treated (i.e., with prophylactic premedication) to prevent severe adverse reactions, especially those with renal failure. Thus, awareness of the need for prescreening of different risk factors is necessary for early recognition and prompt treatment. Conclusion: Studying adverse reactions can be used to develop a model for reducing the level of severity and setting a guideline for a standardized, multidisciplinary approach to adverse reactions.
Keywords: role of pharmacist, management of adverse reactions, guideline for contrast media, non-ionic contrast media
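The severity percentages reported above follow directly from the counts; the helper below is only a sketch of that arithmetic, using the abstract's own numbers:

```python
# Quick arithmetic check of the incidence figures reported in the abstract.
# Counts are from the text; the two-decimal rounding convention is an assumption.

def incidence_pct(cases, total, ndigits=2):
    """Percentage of `cases` among `total`, rounded to `ndigits` decimals."""
    return round(100.0 * cases / total, ndigits)

total = 1500                      # participants with an adverse-event evaluation
mild, moderate, severe = 137, 18, 3

assert incidence_pct(mild, total) == 9.13
assert incidence_pct(moderate, total) == 1.2
assert incidence_pct(severe, total) == 0.2
assert incidence_pct(mild + moderate + severe, total) == 10.53  # 158 patients
```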
Procedia PDF Downloads 303
421 Treatment Outcome of Corneal Ulcers Using Levofloxacin Hydrate 1.5% Ophthalmic Solution and Adjuvant Oral Ciprofloxacin, a Treatment Strategy Applicable to Primary Healthcare
Authors: Celine Shi Ying Lee, Jong Jian Lee
Abstract:
Background: Infectious keratitis is one of the leading causes of blindness worldwide. Prompt treatment with effective medication controls the infection early, preventing corneal scarring and visual loss. Fluoroquinolone ophthalmic medications are used because of their broad-spectrum properties, potency, good intraocular penetration, and low toxicity. The study aims to evaluate the treatment outcome of corneal ulcers using levofloxacin 1.5% ophthalmic solution (LVFX) with adjuvant oral ciprofloxacin when indicated, and to apply this treatment strategy in primary health care as first-line treatment. Methods: Patients with infective corneal ulcers treated in an eye center were recruited. Inclusion criteria included corneal infection consistent with bacterial keratitis and single or multiple small corneal ulcers. Treatment regime: LVFX hourly for the first 2 days, 2-hourly from the 3rd day, and 3-hourly on the 5th day of review. Adjuvant oral ciprofloxacin 500 mg BD was administered for 5 days if there were multiple corneal ulcers or when the location of the corneal ulcer was central or paracentral. Results: 47 subjects were recruited: 16 (34%) males and 31 (66%) females. 40 subjects (85%) had contact lens (CL)-related corneal ulcers, and 7 subjects (15%) had non-contact-lens-related ulcers. 42 subjects (89%) presented with one ulcer, of whom 20 (48%) needed adjuvant therapy. 5 subjects presented with 2 or 3 ulcers, of whom 3 needed adjuvant therapy. A total of 23 subjects (49%) were given adjuvant therapy (oral ciprofloxacin 500 mg BD for 5 days); 21 of them (91%) were CL related. All subjects recovered fully, and the average duration of treatment was 3.7 days, with 49% of the subjects resolved by the 3rd day, 38% by the 5th day, and 13% by the 7th day. All subjects showed relief of pain, light sensitivity, and redness on the 3rd day, with full visual recovery post-treatment. No adverse drug reactions were recorded.
Conclusion: Our treatment regime demonstrated a good clinical outcome as first-line treatment for corneal ulcers. The corneal ulcer is a common eye condition in Singapore, mainly due to CL wear. Pseudomonas aeruginosa is the most frequent and potentially sight-threatening pathogen involved in CL-related corneal ulcers. Coagulase-negative staphylococci, Staphylococcus aureus, and Streptococcus pneumoniae were seen in non-CL users. All these bacteria exhibit good sensitivity rates to ciprofloxacin and levofloxacin. It is therefore logical in our study to use LVFX eye drops, with adjuvant oral ciprofloxacin when indicated, as first-line treatment for most corneal ulcers. Our patients, both CL related and non-CL related, showed good clinical response and full recovery using the above treatment strategy, with full restoration of visual acuity in all patients. Eye-trained primary healthcare practitioners can consider adopting this treatment strategy as first-line treatment in patients with corneal ulcers. This is relevant during the COVID pandemic, when hospitals are overwhelmed with patients, and in regions with limited access to specialist eye care. This strategy would enable early treatment with a better clinical outcome.
Keywords: corneal ulcer, levofloxacin hydrate, treatment strategy, ciprofloxacin
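The adjuvant-therapy criterion described in the study (multiple ulcers, or a single ulcer in a central or paracentral location) can be expressed as a small decision function. The function and its argument names are hypothetical illustrations, not part of the study's protocol documents:

```python
# Sketch of the adjuvant-therapy rule: oral ciprofloxacin 500 mg BD x 5 days
# is added when there are multiple corneal ulcers, or when an ulcer is central
# or paracentral. Names and location labels are illustrative assumptions.

def needs_adjuvant(ulcer_count, locations):
    """locations: iterable of strings, e.g. 'central', 'paracentral', 'peripheral'."""
    if ulcer_count > 1:
        return True  # multiple ulcers always trigger adjuvant therapy
    return any(loc in ("central", "paracentral") for loc in locations)

assert needs_adjuvant(2, ["peripheral", "peripheral"]) is True
assert needs_adjuvant(1, ["central"]) is True
assert needs_adjuvant(1, ["peripheral"]) is False
```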
Procedia PDF Downloads 175
420 Atypical Intoxication Due to Fluoxetine Abuse with Symptoms of Amnesia
Authors: Ayse Gul Bilen
Abstract:
Selective serotonin reuptake inhibitors (SSRIs) are commonly prescribed antidepressants that are used clinically for the treatment of anxiety disorders, obsessive-compulsive disorder (OCD), panic disorders, and eating disorders. The first SSRI, fluoxetine (sold under the brand names Prozac and Sarafem, among others), had an adverse-effect profile better than that of any other available antidepressant when it was introduced, because of its selectivity for serotonin receptors. SSRIs have been considered almost free of side effects and have become widely prescribed; however, questions about their safety and tolerability have emerged with continued use. Most SSRI side effects are dose-related and can be attributed to serotonergic effects such as nausea. Continuous use might trigger adverse effects such as hyponatremia, tremor, nausea, weight gain, sleep disturbance, and sexual dysfunction. Moderate toxicity can be safely observed in the hospital for 24 hours, and mild cases of intentional overdose can be safely discharged (if asymptomatic) from the emergency department after 6 to 8 hours of observation, once cleared by psychiatry. Although fluoxetine is relatively safe in overdose, it might still be cardiotoxic and inhibit platelet secretion, aggregation, and plug formation. Clinical cases of seizures, cardiac conduction abnormalities, and even fatalities associated with fluoxetine ingestion have been reported. While the medical literature strongly suggests that most fluoxetine overdoses are benign, emergency physicians need to remain cognizant that intentional, high-dose fluoxetine ingestions may induce seizures and can even be fatal due to cardiac arrhythmia. Our case is a 35-year-old female patient who was sent to the ER with symptoms of confusion, amnesia, and loss of orientation to time and location after being found wandering the streets, unaware of her surroundings, by police officers who informed 112.
On laboratory examination, no pathological findings were noted except sinus tachycardia on the EKG and high levels of aspartate transaminase (AST) and alanine transaminase (ALT). Diffusion MRI and computed tomography (CT) of the brain were normal. On physical and sexual examination, no signs of abuse or trauma were found. Test results for narcotics, stimulants, and alcohol were negative as well. There was a dysrhythmia that required admission to the intensive care unit (ICU). The patient regained consciousness after 24 hours. It was discovered from her history afterward that she had been using fluoxetine for post-traumatic stress disorder (PTSD) for 6 months and that she had attempted suicide by taking 3 boxes of fluoxetine after the loss of a parent. She was then transferred to the psychiatric clinic. Our study aims to highlight the need to consider toxicologic drug use, in particular the abuse of selective serotonin reuptake inhibitors (SSRIs), which have been widely prescribed due to presumed safety and tolerability, in the diagnosis of patients presenting to the emergency room (ER).
Keywords: abuse, amnesia, fluoxetine, intoxication, SSRI
Procedia PDF Downloads 201
419 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal
Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi
Abstract:
The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and do not yet reach the low-income population, a concern highlighted by theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs/bills. However, merely granting this benefit may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities would low-income consumers have lower bills with PVDG compared to the SET, and which consumers in a given city would have increased subsidies, which are currently provided both for solar energy in Brazil and for the social tariff. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of a particular distribution utility, the radiation of a specific location, etc.). To validate these results, four sensitivity analyses were performed: varying the simultaneity factor between generation and consumption, varying the tariff readjustment rate, zeroing CAPEX, and exemption from state tax.
The behind-the-meter modality of generation proved more promising than the construction of a shared plant. However, although the behind-the-meter modality presents better results, it is more complex to adopt because of issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks, roofs needing reinforcement). Considering the shared power plant modality, many opportunities are still envisaged, since the risk of investing in such a policy can be mitigated. Furthermore, this modality can be an alternative because it mitigates the risk of default, allows greater control of users, and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil the continuation of the SET presents more economic benefits than its replacement by PVDG. Nevertheless, the proposed policy offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.
Keywords: low income, subsidy policy, distributed energy resources, energy justice
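The kind of comparison the economic model makes for a single consumer can be sketched as a toy bill calculation. All parameters below (tariff, SET discount, PV yield, simultaneity factor) are invented illustrative numbers, not the study's calibrated values, and net-metering compensation rules are deliberately omitted:

```python
# Toy comparison of the two policies for one consumer: keep the social
# electricity tariff (SET) discount, or offset billed energy with PVDG.
# All figures are illustrative assumptions.

def annual_bill_set(kwh_year, tariff, discount):
    """Annual bill under the social tariff: full consumption, discounted rate."""
    return kwh_year * tariff * (1.0 - discount)

def annual_bill_pvdg(kwh_year, tariff, pv_kwh_year, simultaneity):
    """Annual bill with PVDG: only the simultaneous share of PV output
    displaces billed energy in this simplified sketch."""
    offset = min(kwh_year, pv_kwh_year * simultaneity)
    return (kwh_year - offset) * tariff

set_bill = annual_bill_set(1200, tariff=0.80, discount=0.5)
pv_bill = annual_bill_pvdg(1200, tariff=0.80, pv_kwh_year=1500, simultaneity=0.6)
print(set_bill, pv_bill)  # 480.0 240.0 -> PVDG wins in this toy case
```

Varying `simultaneity` here mirrors the study's first sensitivity analysis: at low simultaneity factors the SET can come out ahead, which is consistent with the finding that the SET remains preferable in some regions.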
Procedia PDF Downloads 115
418 Disaster Management Approach for Planning an Early Response to Earthquakes in Urban Areas
Authors: Luis Reynaldo Mota-Santiago, Angélica Lozano
Abstract:
Determining appropriate measures to face earthquakes is a challenge for practitioners. In the literature, some analyses consider disaster scenarios while disregarding important field characteristics. Sometimes software that estimates the number of victims and infrastructure damage is used; other times historical information from previous events is used, or the scenarios’ information is assumed to be available even though this is unusual in practice. Humanitarian operations start immediately after an earthquake strikes, and the first hours of relief efforts are important; local efforts are critical to assess the situation and deliver relief supplies to the victims. One preparation action is prepositioning stockpiles, most of them at central warehouses placed away from damage-prone areas, which requires large facilities and budgets. Usually, the decisions in the first 12 hours after the disaster (the standard relief time, SRT) are the location of temporary depots and the design of distribution paths. The motivation for this research was the delayed reaction time of the early relief efforts, which led to the late arrival of aid in some areas after the magnitude 7.1 Mexico City earthquake of 2017. Hence, a preparation approach for planning the immediate response to earthquake disasters is proposed, intended for local governments and considering their capabilities for planning and for responding during the SRT, in order to reduce the start-up time of immediate response operations in urban areas. The first steps are the generation and analysis of disaster scenarios, which allow estimating the relief demand before and in the early hours after an earthquake.
The scenarios can be based on historical data and/or the seismic hazard analysis of an Atlas of Natural Hazards and Risk, as a way to address limited or nonexistent information. The following steps include the decision processes for: a) locating local depots (places for prepositioning stockpiles) and aid-giving facilities as close as possible to risk areas; and b) designing the vehicle paths for aid distribution (from local depots to the aid-giving facilities), which can be used at the beginning of the response actions. This approach speeds up the delivery of aid in the early moments of the emergency, which could reduce the suffering of the victims while allowing additional time to integrate a broader and more streamlined response (according to new information) from national and international organizations into these efforts. The proposed approach is applied to two case studies in Mexico City. These areas were affected by the 2017 earthquake and had a limited aid response. The approach generates disaster scenarios in an easy way and plans a faster early response with a small quantity of stockpiles that can be managed in the early hours of the emergency by local governments. Considering long-term storage, the estimated quantities of stockpiles require a limited budget to maintain and a small storage space. These stockpiles are also useful for addressing other kinds of emergencies in the area.
Keywords: disaster logistics, early response, generation of disaster scenarios, preparation phase
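The depot-location step (a) can be illustrated with a minimal 1-median-style heuristic over candidate sites. The coordinates, demand points, and Manhattan metric below are illustrative assumptions, not the authors' actual model:

```python
# Minimal 1-median sketch of the depot-location step: choose the candidate
# site that minimizes the total Manhattan distance to the estimated demand
# points from the disaster scenario. All data are illustrative.

def manhattan(a, b):
    """Manhattan (grid) distance between two (x, y) points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def best_depot(candidates, demand_points):
    """Candidate with the smallest summed distance to all demand points."""
    return min(candidates,
               key=lambda c: sum(manhattan(c, d) for d in demand_points))

demands = [(0, 0), (2, 1), (3, 3)]   # hypothetical demand points from a scenario
sites = [(0, 0), (2, 2), (5, 5)]     # hypothetical candidate depot sites
print(best_depot(sites, demands))    # (2, 2)
```

Step (b), the path design from the chosen depot to the aid-giving facilities, would then run a shortest-path or routing algorithm over the road network, which is outside this sketch.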
Procedia PDF Downloads 110
417 Microplastic Concentrations in Cultured Oyster in Two Bays of Baja California, Mexico
Authors: Eduardo Antonio Lozano Hernandez, Nancy Ramirez Alvarez, Lorena Margarita Rios Mendoza, Jose Vinicio Macias Zamora, Felix Augusto Hernandez Guzman, Jose Luis Sanchez Osorio
Abstract:
Microplastics (MPs) are one of the most numerous wastes reported in the marine ecosystem, representing one of the greatest risks for the organisms that inhabit it because of their bioavailability. Such is the case for bivalve mollusks, since they are capable of filtering large volumes of water, which increases the risk of contamination by microplastics through continuous exposure to these materials. This study aims to determine, quantify, and characterize the microplastics found in the cultured oyster Crassostrea gigas. We also analyzed whether there are spatio-temporal differences in the microplastic concentration of organisms grown in two bays with quite different human populations. In addition, we wanted to assess the possible impact on humans via consumption of these organisms. Commercial-size organisms (>6 cm length; n = 15) were collected in triplicate from eight oyster farming sites in Baja California, Mexico, during winter and summer. Two sites are located in Todos Santos Bay (TSB), while the other six are located in San Quintin Bay (SQB). Site selection was based on the commercial concessions for oyster farming in each bay. The organisms were chemically digested with 30% KOH (w/v) and 30% H₂O₂ (v/v) to remove the organic matter and subsequently filtered through a GF/D filter. All particles considered possible MPs were quantified according to their physical characteristics using a stereoscopic microscope. The type of synthetic polymer was determined with an FTIR-ATR microscope, using both a user-generated and a commercial reference library (Nicolet iN10, Thermo Scientific, Inc.) of IR spectra of plastic polymers (with a certainty ≥70% for pure polymers and ≥50% for composite polymers). Plastic microfibers were found in all the samples analyzed. However, a low incidence of MP fragments was observed in our study (approximately 9%). The synthetic polymers identified were mainly polyester and polyacrylonitrile.
Polyethylene, polypropylene, polystyrene, nylon, and thermoplastic elastomer were also identified. On average, the content of microplastics in organisms was higher in TSB (0.05 ± 0.01 plastic particles (pp)/g of wet weight) than in SQB (0.02 ± 0.004 pp/g of wet weight) in the winter period. The higher concentration of MPs found in TSB coincides with the rainy season in the region, which increases the runoff from streams and wastewater discharges into the bay, as well as with the larger population pressure (>500,000 inhabitants). In contrast, SQB is a mainly rural location, where surface runoff from streams is minimal and there is no wastewater discharge into the bay. During the summer, no significant differences (Mann-Whitney U test; P=0.484) were observed in the concentration of MPs found in the cultured oysters of TSB and SQB (average: 0.01 ± 0.003 pp/g and 0.01 ± 0.002 pp/g, respectively). Finally, we concluded that the consumption of oysters does not represent a risk for humans, given the low concentrations of MPs found. The concentration of MPs is influenced by variables such as seasonality, the circulation dynamics of the bay, and the existing demographic pressure.
Keywords: FTIR-ATR, human risk, microplastic, oyster
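The concentration metric used above, plastic particles per gram of wet tissue averaged over replicates, reduces to simple arithmetic. The counts and weights below are hypothetical, chosen only to land on the order of magnitude reported for TSB:

```python
# Sketch of the pp/g concentration metric: particle counts divided by wet
# weight, averaged across replicate pools. All numbers are hypothetical.

def pp_per_g(counts, wet_weights_g):
    """Mean plastic particles per gram of wet weight across replicates."""
    ratios = [c / w for c, w in zip(counts, wet_weights_g)]
    return sum(ratios) / len(ratios)

# Three hypothetical replicate pools of oysters:
mean_conc = pp_per_g(counts=[5, 4, 6], wet_weights_g=[100.0, 100.0, 100.0])
print(round(mean_conc, 2))  # 0.05
```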
Procedia PDF Downloads 174
416 Seafloor and Sea Surface Modelling in the East Coast Region of North America
Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk
Abstract:
Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unmapped, as there are many gaps to be explored between ship survey tracks; moreover, such measurements are very expensive and time-consuming. An alternative is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans, whose products are compilations of different sets of data, raw or processed. Measurements of gravity anomalies are also indirect data for the development of bathymetric models: some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. From satellite altimetry data, the sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis.
Visualization of the results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, together with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. The studies show that combining data sets from different sources and with different accuracies can affect the quality of sea surface and seafloor topography models.
Keywords: seafloor, sea surface height, bathymetry, satellite altimetry
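One common interpolation family evaluated in such gridding workflows is inverse-distance weighting (IDW). The minimal sketch below only illustrates the idea of filling a gap between survey soundings; it is an assumption for illustration, not the specific algorithm the study used:

```python
# Minimal inverse-distance-weighting (IDW) sketch: estimate depth at an
# unsampled point from nearby soundings, weighting each sounding by the
# inverse of its squared distance. Coordinates and depths are illustrative.

def idw(known, x, y, power=2.0):
    """known: list of (x, y, depth) soundings; returns depth estimate at (x, y)."""
    num = den = 0.0
    for kx, ky, kz in known:
        d2 = (kx - x) ** 2 + (ky - y) ** 2
        if d2 == 0.0:
            return kz                      # query point lies on a sounding
        w = 1.0 / d2 ** (power / 2.0)      # weight = 1 / distance**power
        num += w * kz
        den += w
    return num / den

soundings = [(0.0, 0.0, -100.0), (2.0, 0.0, -200.0)]
print(idw(soundings, 1.0, 0.0))  # -150.0, midway between the two soundings
```

Real workflows would run such interpolation on the NetCDF grids and MB-System soundings named above, typically via GIS or gridding tools rather than hand-rolled code.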
Procedia PDF Downloads 81
415 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264
Authors: V. Ziegler, F. Schneider, M. Pesch
Abstract:
With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations, on a near-reference-quality basis. The EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic, and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored in the device itself. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed 100% in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration.
This highly reliable instrument is an indispensable tool for the many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
Keywords: aerosol research, aerial observation, fence line monitoring, wild fire detection
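How an OPC's size channels can roll up into cumulative PM fractions is sketched below under simple assumptions (spherical particles of unit density). This is a generic textbook conversion, not GRIMM's proprietary enviro-algorithm, and the channel diameters and counts are invented:

```python
# Generic sketch: convert optical-particle-counter size channels into
# cumulative PM mass fractions, assuming spherical particles of unit density.
# Channel bounds and counts are illustrative, not EDM264 data.
import math

def channel_mass_ug(diameter_um, count, density_g_cm3=1.0):
    """Mass (micrograms) of `count` spheres of the given diameter."""
    radius_cm = diameter_um * 1e-4 / 2.0              # um -> cm
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return count * volume_cm3 * density_g_cm3 * 1e6   # g -> ug

def pm_mass(channels, cutoff_um):
    """Total mass of all channels at or below the size cutoff.
    channels: list of (mean diameter in um, particle count)."""
    return sum(channel_mass_ug(d, n) for d, n in channels if d <= cutoff_um)

chans = [(0.5, 1000), (2.0, 100), (8.0, 10)]  # hypothetical channel counts
# Cumulative fractions are monotone in the cutoff by construction:
assert pm_mass(chans, 1.0) < pm_mass(chans, 2.5) < pm_mass(chans, 10.0)
```

The density and the sphericity assumption are exactly the sort of calibration detail a vendor algorithm refines, which is why real PM values come from the instrument's validated algorithm rather than a conversion like this one.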
Procedia PDF Downloads 151