Search results for: claim management system
1094 The Effects of Chamomile on Serum Levels of Inflammatory Indexes to a Bout of Eccentric Exercise in Young Women
Authors: K. Azadeh, M. Ghasemi, S. Fazelifar
Abstract:
Aim: Changes in stress hormones can modify the immune response. Cortisol, the body's principal corticosteroid, is an anti-inflammatory and immunosuppressive hormone. Normal cortisol levels fluctuate during the day; in other words, cortisol is released periodically and is regulated through ACTH release in a daily circadian rhythm. The aim of this study was therefore to determine the effects of chamomile on serum levels of inflammatory indexes after a bout of eccentric exercise in young women. Methodology: 32 women were randomly divided into 4 groups: high-dose chamomile, low-dose chamomile, ibuprofen, and placebo. The eccentric exercise consisted of 5 sets with a 1-minute rest between sets. Subjects warmed up for 10 min and then performed the eccentric exercise. Each participant completed 15 repetitions with an optional 20 kg weight, or continued until she could no longer perform the movement. When a subject could no longer continue, the weight was immediately reduced by 5 kg and the protocol continued until exhaustion or until 15 repetitions were completed. Subjects in the target groups received specified amounts of ibuprofen or chamomile capsules. Blood samples were obtained at 6 stages (before starting the pills, before the exercise protocol, and 4, 24, 48, and 72 hours after eccentric exercise). Cortisol and adrenocorticotropic hormone (ACTH) levels were measured by ELISA. The Kolmogorov-Smirnov (K-S) test was used to check the normality of the data, and repeated-measures analysis of variance was used to analyze the data; differences were considered significant at p < 0.05. Results: Individual characteristics including height, weight, age, and body mass index did not differ significantly among the four groups. Basal cortisol and ACTH levels decreased significantly after supplementation, then increased significantly and gradually at all post-exercise stages.
In the high-dose chamomile group, the post-exercise increase tended to be somewhat smaller than in the other groups, but not to a significant level. The between-group analysis indicated that the time effect had a significant impact across the different stages in all groups. Conclusion: In this study, one session of eccentric exercise increased cortisol and ACTH. The results indicate an effect of high-dose chamomile in preventing and reducing the rise in stress hormone levels. Since the use of medicinal plants and of ibuprofen as pain and inflammation medication is widespread among athletes and non-athletes, these results can provide information about the advantages and disadvantages of using medicinal plants and ibuprofen.
Keywords: chamomile, inflammatory indexes, eccentric exercise, young girls
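The statistical workflow this abstract describes (a K-S normality check followed by a within-subject comparison at p < 0.05) can be sketched with SciPy. The data below are synthetic placeholders, not the study's measurements, and SciPy has no repeated-measures ANOVA, so a paired t-test between two stages stands in for the within-subject comparison:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic placeholder: cortisol-like values for 8 subjects at 6 stages
# (the study's real measurements are not public).
data = rng.normal(loc=12.0, scale=2.0, size=(8, 6))

# 1) Kolmogorov-Smirnov normality check per stage, on standardized values
for stage in range(data.shape[1]):
    x = data[:, stage]
    z = (x - x.mean()) / x.std(ddof=1)
    stat, p = stats.kstest(z, "norm")
    assert 0.0 <= p <= 1.0

# 2) A simple within-subject comparison between two stages, illustrating
# the p < 0.05 significance criterion used in the study
t_stat, p_val = stats.ttest_rel(data[:, 1], data[:, 2])
significant = p_val < 0.05
print(f"paired t: t={t_stat:.2f}, p={p_val:.3f}, significant={significant}")
```

A full repeated-measures ANOVA would require a dedicated package (e.g., statsmodels' `AnovaRM`); the K-S-on-standardized-values step above is the common practical variant of the normality check.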
Procedia PDF Downloads 417
1093 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death worldwide; its fast and reliable diagnosis is therefore a major clinical need. The ECG is the most important diagnostic method used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain, together with changes in the ST segment and T wave of the ECG, occurs shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters using grid search and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters to obtain the optimal classification performance.
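The training loop described above (linear and RBF kernels, grid search over hyperparameters, 10-fold cross-validation) can be sketched with scikit-learn. The features here are a synthetic stand-in generated with `make_classification`; the actual ST-T feature extraction is not reproduced:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for ST-T derived features: 150 ECG epochs,
# 8 features, labels 0 = non-ischemic, 1 = ischemic.
X, y = make_classification(n_samples=150, n_features=8, n_informative=5,
                           random_state=0)

# Grid over kernel type and hyperparameters, scored by 10-fold CV,
# mirroring the grid-search procedure described in the abstract.
param_grid = [
    {"svc__kernel": ["linear"], "svc__C": [0.1, 1, 10]},
    {"svc__kernel": ["rbf"], "svc__C": [0.1, 1, 10],
     "svc__gamma": ["scale", 0.01, 0.1]},
]
pipe = make_pipeline(StandardScaler(), SVC())
search = GridSearchCV(pipe, param_grid, cv=10)
search.fit(X, y)
print("best params:", search.best_params_)
print(f"best 10-fold CV accuracy: {search.best_score_:.3f}")
```

Designing an SVM "per patient", as the authors do, would amount to running this search separately on each patient's pre-inflation/occlusion segments.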
Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination thresholds and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments yields higher performance for GMM-based classification. Moreover, the comparison between the performances of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
Keywords: ECG classification, Gaussian mixture model, Neyman-Pearson approach, support vector machine
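The Neyman-Pearson step (average log-likelihood of ECG segments under a fitted GMM, compared against a sweep of thresholds) can be sketched in plain NumPy. The mixture parameters and data below are assumed for illustration, not the fitted values or features from the study:

```python
import numpy as np

def gmm_logpdf(x, weights, means, stds):
    """Log-density of a 1-D Gaussian mixture at points x."""
    x = np.asarray(x)[:, None]
    comp = (np.log(weights)
            - 0.5 * np.log(2 * np.pi * stds**2)
            - (x - means) ** 2 / (2 * stds**2))
    # log-sum-exp over components for numerical stability
    m = comp.max(axis=1, keepdims=True)
    return m[:, 0] + np.log(np.exp(comp - m).sum(axis=1))

# Assumed mixture "fitted" to the in-distribution feature (illustrative).
w = np.array([0.6, 0.4]); mu = np.array([0.0, 2.0]); sd = np.array([1.0, 0.5])

rng = np.random.default_rng(1)
normal_seg = rng.normal(0.5, 1.0, size=(200, 20))   # in-distribution segments
outlier_seg = rng.normal(6.0, 1.0, size=(200, 20))  # ischemia-like outliers

# Average log-likelihood per segment; low values flag outliers.
ll_norm = np.array([gmm_logpdf(s, w, mu, sd).mean() for s in normal_seg])
ll_out = np.array([gmm_logpdf(s, w, mu, sd).mean() for s in outlier_seg])

# Sweep thresholds: P_D (outliers below threshold) vs P_FA (normals below).
thresholds = np.linspace(ll_out.min(), ll_norm.max(), 100)
p_d = [(ll_out < t).mean() for t in thresholds]
p_fa = [(ll_norm < t).mean() for t in thresholds]
print(f"P_D at P_FA<=0.05: {max(d for d, f in zip(p_d, p_fa) if f <= 0.05):.2f}")
```

In the study the GMM is multivariate and fitted by EM (e.g., scikit-learn's `GaussianMixture`); the thresholding and (P_FA, P_D) sweep that produce the ROC curve are the same.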
Procedia PDF Downloads 162
1092 When the Rubber Hits the Road: The Enactment of Well-Intentioned Language Policy in Digital vs. In Situ Spaces on Washington, DC Public Transportation
Authors: Austin Vander Wel, Katherin Vargas Henao
Abstract:
Washington, DC, is a city in which Spanish, along with several other minority languages, is prevalent not only among tourists but also among those living within the city limits. In response to this linguistic diversity and to DC's adoption of the Language Access Act in 2004, the Washington Metropolitan Area Transit Authority (WMATA) committed to addressing the need for equal linguistic representation and established a five-step plan to provide the best multilingual information possible for public transportation users. The current study, however, strongly suggests that this de jure policy does not align with the reality of Spanish's representation on DC public transportation, although perhaps in an unexpected way. To investigate Spanish's de facto representation and how it contrasts with de jure policy, this study implements a linguistic landscapes methodology that takes critical language-policy as its theoretical framework (Tollefson, 2005). Concerning de facto representation, it focuses specifically on the discrepancies between digital spaces and the physical spaces through which users travel. These digital vs. in situ conditions are further analyzed by addressing the aural and visual modalities separately. In digital spaces, data were collected from WMATA's website (visual) and their bilingual hotline (aural). For in situ spaces, both bus and metro areas of DC public transportation were explored, with signs comprising the visual modality, and recordings, driver announcements, and interactions with metro kiosk workers comprising the aural modality. While digital spaces were considered to successfully fulfill WMATA's commitment to representing Spanish as outlined in the de jure policy, physical spaces show a large discrepancy between what is said and what is done, particularly regarding the bus system and the aural modality overall.
These discrepancies in in situ spaces place Spanish speakers at a clear disadvantage, demanding additional resources and knowledge on the part of residents with limited or no English proficiency in order to have equal access to this public good. Based on our critical language-policy analysis, while Spanish is represented as a right in the de jure policy, its implementation in situ clearly portrays Spanish as a problem, since those seeking bilingual information cannot expect it to be present when and where they need it most (Ruíz, 1984; Tollefson, 2005). This study concludes with practical, data-based steps to improve the current situation in DC's public transportation context and serves as a model for responding to inadequate enactment of de jure policy in other language policy settings.
Keywords: urban landscape, language access, critical-language policy, Spanish, public transportation
Procedia PDF Downloads 73
1091 Controlled Drug Delivery System for Delivery of Poor Water Soluble Drugs
Authors: Raj Kumar, Prem Felix Siril
Abstract:
The poor aqueous solubility of many pharmaceutical drugs and potential drug candidates is a major challenge in drug development, and nanoformulation of such candidates is one of the main solutions for delivering these drugs. We initially developed the evaporation assisted solvent-antisolvent interaction (EASAI) method, which is useful for preparing nanoparticles of poorly water-soluble drugs with spherical morphology and particle sizes below 100 nm. However, to further improve formulation efficacy, reducing the number of doses and the side effects, it is important to control drug delivery. Among the many nano-drug carrier systems available, solid lipid nanoparticles (SLNs) have advantages over the others, such as high biocompatibility, stability, non-toxicity, and the ability to achieve controlled drug release and drug targeting. SLNs can be administered through all existing routes owing to the high biocompatibility of lipids. SLNs are usually composed of a lipid, a surfactant, and a drug encapsulated in the lipid matrix. A number of non-steroidal anti-inflammatory drugs (NSAIDs) have poor bioavailability resulting from their poor aqueous solubility. In the present work, SLNs loaded with NSAIDs such as nabumetone (NBT), ketoprofen (KP), and ibuprofen (IBP) were successfully prepared using different lipids and surfactants. We studied and optimized the experimental parameters using a number of lipids, surfactants, and NSAIDs. The effects of different experimental parameters, such as lipid-to-surfactant ratio, volume of water, temperature, drug concentration, and sonication time, on the particle size of SLNs prepared by hot-melt sonication were studied. Particle size was found to be directly proportional to drug concentration and inversely proportional to surfactant concentration, volume of water added, and water temperature.
SLNs prepared under optimized conditions were characterized thoroughly using different techniques such as dynamic light scattering (DLS), field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), atomic force microscopy (AFM), X-ray diffraction (XRD), differential scanning calorimetry (DSC), and Fourier transform infrared spectroscopy (FTIR). We successfully prepared SLNs below 220 nm using different lipid and surfactant combinations. The drugs KP, NBT, and IBP showed entrapment efficiencies of 74%, 69%, and 53%, with drug loadings of 2%, 7%, and 6%, respectively, in SLNs of Campul GMS 50K and Gelucire 50/13. The in-vitro release profile of the drug-loaded SLNs showed that nearly 100% of the drug was released in 6 h.
Keywords: nanoparticles, delivery, solid lipid nanoparticles, hot-melt sonication, poor water soluble drugs, solubility, bioavailability
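The entrapment-efficiency and drug-loading percentages quoted above follow from the standard formulas; a minimal sketch with illustrative masses (not this study's raw data):

```python
def entrapment_efficiency(drug_added_mg: float, free_drug_mg: float) -> float:
    """EE% = (drug added - unentrapped drug) / drug added * 100."""
    return (drug_added_mg - free_drug_mg) / drug_added_mg * 100.0

def drug_loading(entrapped_drug_mg: float, total_sln_mass_mg: float) -> float:
    """DL% = entrapped drug / total nanoparticle mass (lipid + drug) * 100."""
    return entrapped_drug_mg / total_sln_mass_mg * 100.0

# Illustrative numbers only: 10 mg drug added, 2.6 mg found unentrapped
# in the supernatant, 105 mg total lipid + drug mass.
ee = entrapment_efficiency(10.0, 2.6)   # reproduces a 74% EE, as for KP above
dl = drug_loading(10.0 - 2.6, 105.0)
print(f"EE = {ee:.1f}%, DL = {dl:.1f}%")
```

The free-drug mass is typically measured spectrophotometrically in the supernatant after separating the nanoparticles; that measurement step is assumed here.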
Procedia PDF Downloads 312
1090 Developing and Shake Table Testing of Semi-Active Hydraulic Damper as Active Interaction Control Device
Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung
Abstract:
A semi-active control system for structures under earthquake excitation is adaptable and requires little energy. Our research team previously developed the DSHD (Displacement Semi-Active Hydraulic Damper). Shake table tests of this DSHD installed in a full-scale test structure demonstrated that the device fully developed its energy-dissipating performance for the test structure under earthquake excitation. The objective of this research is to develop a new AIC (Active Interaction Control) device and to examine its energy-dissipation capability through shake table tests. The proposed AIC converts an improved DSHD into an AIC device by adding an accumulator. The main concept of this energy-dissipating AIC is to exploit the interaction between an affiliated structure (sub-structure) and the protected structure (main structure) so as to transfer the input seismic force into the sub-structure and thereby reduce the structural deformation of the main structure. This concept was tested using a full-scale multi-degree-of-freedom test structure fitted with the proposed AIC and subjected to external forces of various magnitudes, in order to examine the shock-absorption influence of predictive control, sub-structure stiffness, synchronous control, non-synchronous control, and insufficient control position. The test results confirm: (1) the developed device effectively diminishes the structural displacement and acceleration responses; (2) even with low-precision semi-active control, the shock absorption achieved twice the seismic-proofing efficacy of the passive control method; (3) the active control method did not exert the negative influence of amplifying the acceleration response of the structure; (4) this AIC exhibits a time-delay problem, the same problem as with ordinary active control methods.
The proposed predictive control method can overcome this defect; (5) condition switching is an important characteristic of the control type, and the test results show that synchronous control is easy to implement and avoids exciting high-frequency response. These laboratory results confirm that the device developed in this research can exploit the mutual interaction between the subordinate structure and the main structure to be protected, transferring the earthquake energy applied to the main structure into the subordinate structure, so that the objective of minimizing the deformation of the main structure is achieved.
Keywords: DSHD (Displacement Semi-Active Hydraulic Damper), AIC (Active Interaction Control Device), shake table test, full scale structure test, sub-structure, main-structure
Procedia PDF Downloads 519
1089 Predicting Suicidal Behavior by an Accurate Monitoring of RNA Editing Biomarkers in Blood Samples
Authors: Berengere Vire, Nicolas Salvetat, Yoann Lannay, Guillaume Marcellin, Siem Van Der Laan, Franck Molina, Dinah Weissmann
Abstract:
Predicting suicidal behavior is one of the most complex challenges of daily psychiatric practice. Today, suicide risk prediction using biological tools is not validated and is based only on the subjective clinical reports of the at-risk individual. There is therefore a great need to identify biomarkers that would allow early identification of individuals at risk of suicide. Alterations of adenosine-to-inosine (A-to-I) RNA editing of neurotransmitter receptors and other proteins have been shown to be involved in the etiology of different psychiatric disorders and linked to suicidal behavior. RNA editing is a co- or post-transcriptional process leading to site-specific alterations in RNA sequences; it plays an important role in the epitranscriptomic regulation of RNA metabolism. In postmortem human brain tissue (prefrontal cortex) of depressed suicide victims, Alcediag found specific alterations of RNA editing activity in the mRNA coding for the serotonin 2C receptor (5-HT2cR). Additionally, an increase in the expression levels of the ADARs, the RNA editing enzymes, and modifications of the RNA editing profiles of prime targets, such as phosphodiesterase 8A (PDE8A) mRNA, have also been observed. Interestingly, the PDE8A gene is located on chromosome 15q25.3, a genomic region that has recurrently been associated with early-onset major depressive disorder (MDD). In the current study, we examined whether modifications in the RNA editing profiles of prime targets can identify disease-relevant blood biomarkers and evaluate suicide risk in patients. To address this question, we performed a clinical study to identify an RNA editing signature in the blood of depressed patients with and without a history of suicide attempts. Patient samples were drawn in PAXgene tubes and analyzed on Alcediag's proprietary RNA editing platform using next-generation sequencing technology. In addition, gene expression analysis by quantitative PCR was performed.
We generated a multivariate algorithm comprising various selected biomarkers to detect patients at high risk of attempting suicide. We evaluated the diagnostic performance using the relative proportion of PDE8A mRNA editing at different sites and/or isoforms, as well as the expression of PDE8A and the ADARs. The significance of these biomarkers for suicidality was evaluated using the area under the receiver-operating characteristic curve (AUC). The generated algorithm was found to have strong diagnostic performance, with high specificity and sensitivity. In conclusion, we developed tools to measure disease-specific biomarkers in blood samples for identifying the individuals at greatest risk of future suicide attempts. This technology not only fosters patient management but is also suitable for predicting the risk of drug-induced psychiatric side effects, such as an iatrogenic increase in suicidal ideas/behaviors.
Keywords: blood biomarker, next-generation sequencing, RNA editing, suicide
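The ROC/AUC evaluation described above (sweeping a decision threshold over a composite biomarker score) can be sketched in NumPy. The scores below are synthetic, since the study's editing-site measurements are proprietary:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Empirical ROC curve and AUC for a score where higher means 'positive'."""
    thresholds = np.sort(np.concatenate([scores_pos, scores_neg]))[::-1]
    tpr = np.array([(scores_pos >= t).mean() for t in thresholds])
    fpr = np.array([(scores_neg >= t).mean() for t in thresholds])
    # prepend the (0, 0) corner, then integrate with the trapezoid rule
    tpr = np.concatenate([[0.0], tpr])
    fpr = np.concatenate([[0.0], fpr])
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
    return fpr, tpr, auc

rng = np.random.default_rng(42)
# Synthetic composite biomarker score: higher in the at-risk group.
at_risk = rng.normal(1.5, 1.0, 300)
controls = rng.normal(0.0, 1.0, 300)

fpr, tpr, auc = roc_auc(at_risk, controls)
print(f"AUC = {auc:.3f}")
```

With a between-group separation of 1.5 standard deviations, the empirical AUC lands in the mid-0.8s, i.e., the kind of "strong diagnostic performance" the abstract reports.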
Procedia PDF Downloads 259
1088 A Patient-Centered Approach to Clinical Trial Development: Real-World Evidence from a Canadian Medical Cannabis Clinic
Authors: Lucile Rapin, Cynthia El Hage, Rihab Gamaoun, Maria-Fernanda Arboleda, Erin Prosk
Abstract:
Introduction: Sante Cannabis (SC), a Canadian group of clinics dedicated to medical cannabis, based in Montreal and in the province of Quebec, has served more than 8000 patients seeking cannabis-based treatment over the past five years. As randomized clinical trials with natural medical cannabis are scarce, real-world evidence offers the opportunity to fill research gaps between scientific evidence and clinical practice. Data on the use of medical cannabis products by SC patients were prospectively collected, yielding a large real-world database on the use of medical cannabis. The aim of this study was to report the profiles of both patients and prescribed medical cannabis products at SC clinics, and to assess the safety of medical cannabis among Canadian patients. Methods: This is an observational retrospective study of 1342 adult patients who were authorized medical cannabis products between October 2017 and September 2019. Information on demographic characteristics, therapeutic indications for medical cannabis use, patterns in dosing and dosage form of medical cannabis, and adverse effects over a one-year follow-up (initial visit and 4 follow-up (FUP) visits) was collected. Results: 59% of SC patients were female, with a mean age of 56.7 years (SD = 15.6, range 19-97). Cannabis products were authorized mainly for patients with a diagnosis of chronic pain (68.8% of patients), cancer (6.7%), neurological disorders (5.6%), and mood disorders (5.4%). At the initial visit, a large majority (70%) of patients were authorized medical cannabis products exclusively, 27% were authorized a combination of pharmaceutical cannabinoids and medical cannabis, and 3% were prescribed pharmaceutical cannabinoids only. This pattern recurred over the one-year follow-up. Overall, oil was the preferred formulation (an average of 72.5% across visits), followed by a combination of oil and dry product (average 19%); other routes of administration accounted for less than 4%.
Patients were predominantly prescribed products with a balanced THC:CBD ratio (59%-75% across visits). 28% of patients reported at least one adverse effect (AE) at the 3-month follow-up visit and 12% at the six-month FUP visit. 84.8% of all AEs were mild and transient, and no serious AE was reported. Overall, the most common side effects reported were dizziness (11.95% of total AEs), drowsiness (11.4%), dry mouth (5.5%), nausea (4.8%), headaches (4.6%), cough (4.4%), anxiety (4.1%), and euphoria (3.5%); other adverse effects each accounted for less than 3% of total AEs. Conclusion: Our results confirm that the primary area of clinical use for medical cannabis is pain management. Patients in this cohort largely used plant-based cannabis oil products with a balanced THC:CBD ratio. Reported adverse effects were mild and included dizziness and drowsiness. These real-world data confirm the tolerable safety profile of medical cannabis and suggest medical indications not yet validated in controlled clinical trials. Such data offer an important opportunity for investigating the long-term effects of cannabinoid exposure in real-life conditions. Real-world evidence can be used to direct clinical trial research efforts toward specific indications and dosing patterns for product development.
Keywords: medical cannabis, safety, real-world data, Canada
Procedia PDF Downloads 134
1087 Production of Ferroboron by SHS-Metallurgy from Iron-Containing Rolled Production Wastes for Alloying of Cast Iron
Authors: G. Zakharov, Z. Aslamazashvili, M. Chikhradze, D. Kvaskhvadze, N. Khidasheli, S. Gvazava
Abstract:
Traditional technologies for processing iron-containing industrial waste, including steel-rolling waste, involve significant energy costs, long process durations, and complex, expensive equipment. Waste generated by industrial processes harms the environment, yet at the same time it is a valuable raw material that can be used to produce new marketable products. Studying the effectiveness of self-propagating high-temperature synthesis (SHS), which is characterized by simple equipment, high purity of the final product, and high processing speed, is therefore of wide scientific and practical interest for solving this problem. This work presents technological aspects of producing ferroboron by SHS metallurgy from iron-containing rolled-production wastes for alloying cast iron, along with results on the effect of the alloying element on the degree of boron assimilation by liquid cast iron. The combustion features of the Fe-B system were investigated, and the main parameters controlling the phase composition of the synthesis products were established experimentally. The effect of overloads on the patterns of cast ligature formation and on the structure-formation mechanisms of SHS products was studied. It was shown that an increase in the hematite (Fe₂O₃) content of the iron-containing waste leads to an increase in the content of the FeB phase and, accordingly, in the amount of boron in the ligature. The boron content in the ligature is within 3-14%, and the phase composition of the obtained ligatures consists of the Fe₂B and FeB phases. Depending on the initial composition of the wastes, the yield of the end product reaches 91-94%, and the extraction of boron is 70-88%. Combustion of highly exothermic mixtures makes it possible to obtain a wide range of boron-containing ligatures from industrial wastes.
Given the relatively low melting point of the obtained SHS ligature, positive dynamics of boron absorption by liquid iron were established. According to the data obtained, the degree of ligature absorption when alloying gray cast iron at 1450°C is 80-85%. When treatment of the liquid cast iron with magnesium is combined with subsequent alloying with the developed ligature, boron losses are reduced by 5-7%. At the same time, uniform distribution of the boron micro-additions throughout the volume of the treated liquid metal is ensured. Acknowledgment: This work was supported by the Shota Rustaveli Georgian National Science Foundation of Georgia (SRGNSFG) under the GENIE project (grant number CARYS-19-802).
Keywords: self-propagating high-temperature synthesis, cast iron, industrial waste, ductile iron, structure formation
Procedia PDF Downloads 123
1086 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Proper seismic evaluation of non-structural components (NSCs) requires an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and of the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting higher-mode effects, tuning effects, and NSC damping effects, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., the design spectra for structural components). A database of 27 reinforced concrete (RC) buildings in which ambient vibration measurements (AVM) were conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building, in both horizontal directions, considering 4 different NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting the NSC response are evaluated statistically: the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC.
The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and a procedure is proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-critical buildings, which have to remain functional even after an earthquake and cannot tolerate any damage to NSCs.
Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design
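As an illustration of the underlying computation, a pseudo-acceleration response spectrum for a single floor motion can be obtained by integrating a linear SDOF oscillator over a grid of periods (Newmark average-acceleration scheme). The input motion below is a synthetic placeholder, not one of the 20 records used in the study:

```python
import numpy as np

def pseudo_accel_spectrum(accel, dt, periods, zeta=0.05):
    """PSA(T) = wn^2 * max|u| for a linear SDOF (Newmark, avg. acceleration)."""
    psa = []
    for T in periods:
        wn = 2 * np.pi / T
        m, c, k = 1.0, 2 * zeta * wn, wn**2
        # Newmark constants for beta = 1/4, gamma = 1/2
        a1 = 4 * m / dt**2 + 2 * c / dt
        a2 = 4 * m / dt + c
        a3 = m
        k_eff = k + a1
        u = v = 0.0
        a = -accel[0]                # from m*a = p - c*v - k*u with u=v=0
        umax = 0.0
        for p in -accel[1:]:         # effective force is -m * ug_ddot
            u_new = (p + a1 * u + a2 * v + a3 * a) / k_eff
            v_new = 2 * (u_new - u) / dt - v
            a_new = 4 * (u_new - u) / dt**2 - 4 * v / dt - a
            u, v, a = u_new, v_new, a_new
            umax = max(umax, abs(u))
        psa.append(wn**2 * umax)
    return np.array(psa)

# Synthetic floor acceleration: decaying 2 Hz sine, in m/s^2 (placeholder).
dt = 0.001
t = np.arange(0.0, 5.0, dt)
ug = 2.94 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-t)

periods = np.array([0.02, 0.5, 3.0])
psa = pseudo_accel_spectrum(ug, dt, periods)
pga = np.abs(ug).max()
print(psa / pga)   # rigid (short-period) limit: PSA -> peak input accel.
```

The short-period value recovers the peak input acceleration (the rigid-component limit), while the period tuned to the 2 Hz excitation is strongly amplified, which is the tuning effect discussed in the abstract.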
Procedia PDF Downloads 238
1085 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide a necessary tool for searches for aircraft and boats lost at sea, to supply sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts, and then sediment maps of the continental shelves of Europe and North America. The current ocean sediment map presented here started from UNESCO's general map of the deep ocean floor. That map was adapted using a single sediment classification to represent all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of the sediments is represented. Published seabed maps are reviewed; if they present an interest, the nature of the seabed is extracted from them, transcribed into the sediment classification, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large hydrographic surveys of the deep ocean. These allow very high-quality mapping of areas that until then had been represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied where the data are over-precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map.
This work is ongoing and yields a new digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of quality variability. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000, and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization over the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and compiling these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, much work remains to improve some regions, which are still based on data acquired more than half a century ago.
Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 232
1084 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Authors: Barna Arnold Keserű
Abstract:
In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now starting to become reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), making recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g., autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and the document calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content worthy of copyright protection, but the question arises: who is the author, and who owns the copyright?
The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, or the user who gives the AI inputs in order to create something new. Alternatively, AI-generated content may be so far removed from human input that there is no human author at all, in which case it belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. In this regard, proper license agreements in the multilevel chain from the programmer to the end-user become very important, because AI is intellectual property in itself that creates further intellectual property. This could collide with data-protection and property rules as well. The problems are similar in the field of liability. We can apply different existing forms of liability in cases where AI or AI-led robotics cause damage, but it is unclear whether the result complies with economic and developmental interests. Keywords: artificial intelligence, intellectual property, liability, robotics
Procedia PDF Downloads 206
1083 Use of Sewage Sludge Ash as Partial Cement Replacement in the Production of Mortars
Authors: Domagoj Nakic, Drazen Vouk, Nina Stirmer, Mario Siljeg, Ana Baricevic
Abstract:
Wastewater treatment processes generate significant quantities of sewage sludge that need to be adequately treated and disposed of. In many EU countries, the problem of adequate disposal of sewage sludge has not been solved, nor is it governed by uniform rules, instructions or guidelines. Disposal of sewage sludge is important not only in terms of satisfying the regulations but also from the aspect of choosing the optimal wastewater and sludge treatment technology. Among the solutions that seem reasonable, recycling of sewage sludge and its byproducts ranks as the top recommendation. Within the framework of sustainable development, recycling of sludge almost completely closes the cycle of wastewater treatment, in which only negligible amounts of waste requiring landfilling are generated. In many EU countries, significant amounts of sewage sludge are incinerated, resulting in a new byproduct in the form of ash. Sewage sludge ash has three to five times less volume than stabilized and dehydrated sludge, but it also requires further management. The combustion process also destroys hazardous organic components in the sludge and minimizes unpleasant odors. The basic objective of the presented research is to explore the possibilities of recycling sewage sludge ash as a supplementary cementitious material. The main oxides present in sewage sludge ash (SiO2, Al2O3 and CaO) give it a composition similar to cement's, so it can be considered a latent hydraulic and pozzolanic material. Physical and chemical characteristics of ashes, generated from sludge collected at different wastewater treatment plants and incinerated in laboratory conditions at different temperatures, are investigated, since this is a prerequisite for the ash's subsequent recycling and eventual use in other industries. 
Research was carried out by replacing up to 20% of cement by mass in cement mortar mixes with the different obtained ashes and examining the characteristics of the resulting mixes in fresh and hardened condition. The mixtures with the highest ash content (20%) showed an average drop in workability of about 15%, which is attributed to the increased water requirements when ash was used. Although some mixes containing added ash showed compressive and flexural strengths equivalent to those of the reference mixes, a slight decrease in strength was generally observed. However, it is important to point out that the compressive strengths always remained above 85% of the reference mix, while flexural strengths remained above 75%. The ecological impact of innovative construction products containing sewage sludge ash was determined by analyzing leaching concentrations of heavy metals. Results demonstrate that sewage sludge ash can satisfy technical and environmental criteria for use in cementitious materials, which represents a new recycling application for an increasingly important waste material that is normally landfilled. Particular emphasis is placed on linking the composition of the generated ashes, depending on origin and applied treatment processes (stage of wastewater treatment, sludge treatment technology, incineration temperature), with the characteristics of the final products. Acknowledgement: This work has been fully supported by the Croatian Science Foundation under the project '7927 - Reuse of sewage sludge in concrete industry – from infrastructure to innovative construction products'. Keywords: cement mortar, recycling, sewage sludge ash, sludge disposal
Procedia PDF Downloads 248
1082 Bioinformatic Strategies for the Production of Glycoproteins in Algae
Authors: Fadi Saleh, Çığdem Sezer Zhmurov
Abstract:
Biopharmaceuticals represent one of the fastest-developing fields within biotechnology, and the biological macromolecules produced inside cells have a variety of therapeutic applications. In the past, mammalian cells, especially CHO cells, have been employed in the production of biopharmaceuticals, because these cells can carry out human-like post-translational modifications (PTMs). These systems, however, carry apparent disadvantages such as high production costs, vulnerability to contamination, and limitations in scalability. This research focuses on the utilization of microalgae as a bioreactor system for the synthesis of biopharmaceutical glycoproteins with respect to PTMs, particularly N-glycosylation. The research points to a growing interest in microalgae as a potential substitute for more conventional expression systems. A number of advantages exist in the use of microalgae, including rapid growth rates, the absence of common human pathogens, controlled scalability in bioreactors, and the ability to perform some PTMs. Thus, the potential of microalgae to produce recombinant proteins with favorable characteristics makes them a promising platform for producing biopharmaceuticals. The study focuses on the examination of N-glycosylation pathways across different species of microalgae. This investigation is important because N-glycosylation, the process by which carbohydrate groups are linked to proteins, profoundly influences the stability, activity, and general performance of glycoproteins. Additionally, bioinformatics methodologies are employed to elucidate the genetic pathways implicated in N-glycosylation within microalgae, with the intention of modifying these organisms to produce glycoproteins suitable for human therapeutic use. 
In this way, the present comparative analysis of the N-glycosylation pathway in humans and microalgae can be used to bridge both systems in order to produce biopharmaceuticals with humanized glycosylation profiles in microalgal organisms. The results of the research underline microalgae's potential to overcome some of the limitations associated with traditional biopharmaceutical production systems. The study may help in the creation of a cost-effective and scalable means of producing quality biopharmaceuticals by genetically modifying microalgae to produce glycoproteins with N-glycosylation that is compatible with humans. Such improvements will benefit biopharmaceutical production and provide the biopharmaceutical sector with a novel, green, and efficient expression platform. This thesis, therefore, presents thorough research into the viability of microalgae as an efficient platform for producing biopharmaceutical glycoproteins. Based on an in-depth bioinformatic analysis of microalgal N-glycosylation pathways, a platform for engineering them to produce human-compatible glycoproteins is set out in this work. The findings obtained in this research have significant implications for the biopharmaceutical industry by opening up a new way of developing safer, more efficient, and economically more feasible biopharmaceutical manufacturing platforms. Keywords: microalgae, glycoproteins, post-translational modification, genome
Procedia PDF Downloads 29
1081 Autophagy in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Larvae Exposed to Various Cadmium Concentration - 6-Generational Exposure
Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska
Abstract:
Autophagy is a form of cell remodeling in which organelles are internalized into vacuoles called autophagosomes. Autophagosomes are the targets of lysosomes, which digest their cytoplasmic contents. Eventually, the process can lead to the death of the entire cell. However, in response to several stress factors, e.g., starvation or heavy metals (e.g., cadmium), autophagy can also act as a pro-survival factor, protecting the cell against death. The main aim of our studies was to check whether the autophagy that can appear in the midgut epithelium after Cd treatment becomes fixed during the following generations of insects. As a model animal, we chose the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), a well-known polyphagous pest of many vegetable crops. We analyzed specimens at the final (5th) larval stage, due to its hyperphagia, which results in the assimilation of a great amount of cadmium. The culture consisted of two strains, both maintained for 146 generations: a control strain (K) fed a standard diet, and a cadmium strain (Cd) fed the standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food). In addition, control insects were transferred to Cd-supplemented diets (5, 10, 20 or 44 mg Cd per kg of dry weight of food). Therefore, we obtained the Cd1, Cd2, Cd3 and KCd experimental groups. Autophagy was examined using a transmission electron microscope. During this process, degenerated organelles are surrounded by a membranous phagophore and enclosed in an autophagosome. Eventually, after the autophagosome fuses with a lysosome, an autolysosome is formed and the digestion of organelles begins. During the 1st year of the experiment, we analyzed specimens of 6 generations in all the lines. 
The intensity of autophagy depends significantly on the generation, the tissue, and the cadmium concentration in the insect rearing medium. In generations I through VI, the intensity of autophagy in the midguts of the cadmium-exposed strains decreased gradually in the following order of strains: Cd1, Cd2, Cd3 and KCd. The highest proportion of cells with autophagy was observed in Cd1 and Cd2. However, it was still higher than the percentage of cells with autophagy in the same tissues of insects from the control and multigenerational cadmium strains. This may indicate that during the 6-generational exposure to various Cd concentrations, a preserved tolerance to cadmium was not established. The study has been financed by the National Science Centre Poland, grant no 2016/21/B/NZ8/00831. Keywords: autophagy, cell death, digestive system, ultrastructure
Procedia PDF Downloads 233
1080 Architectural Design as Knowledge Production: A Comparative Science and Technology Study of Design Teaching and Research at Different Architecture Schools
Authors: Kim Norgaard Helmersen, Jan Silberberger
Abstract:
Questions of style and reproducibility in relation to architectural design are not only continuously debated; the very concepts can seem quite provocative to architects, who like to think of architectural design as depending on intuition, ideas, and individual personalities. This standpoint, dominant in architectural discourse, is challenged in the present paper, which presents early findings from a comparative STS-inspired research study of architectural design teaching and research at different architecture schools in varying national contexts. Within a philosophy-of-science framework, the paper reflects on empirical observations of design teaching at the Royal Academy of Fine Arts in Copenhagen and presents a tentative theoretical framework for the ongoing research project. The framework suggests that architecture, as a field of knowledge production, is mainly dominated by three epistemological positions, which will be presented and discussed. Besides serving as a loosely structured framework for future data analysis, the proposed framework brings forth the argument that architecture can be roughly divided into different schools of thought, like the traditional science disciplines. Without reducing the complexity of the discipline, describing its main intellectual positions should prove fruitful for the future development of architecture as a theoretical discipline, moving architectural critique beyond discussions of taste preferences. Unlike in traditional science disciplines, architecture lacks a community-wide, shared pool of codified references; architects instead reference art projects, buildings, and famous architects when positioning their standpoints. While these inscriptions work as an architectural reference system, comparable to the codified theories referenced in the academic writing of traditional research, they are not used systematically in the same way. 
As a result, architectural critique is often reduced to discussions of taste and subjectivity rather than epistemological positioning. Architects are often criticized as judges of taste and accused of a rationality rooted in culturally relative aesthetic concepts of taste closely linked to questions of style, but arguably their supposedly subjective reasoning in fact forms part of larger systems of thought. Putting architectural ‘styles’ under a loupe, and tracing their philosophical roots, can potentially open up a black box in architectural theory. Besides ascertaining and recognizing the existence of specific ‘styles’, and thereby schools of thought, in current architectural discourse, the study could potentially also point to some mutations of the conventional, something actually ‘new’, of potentially high value for architectural design education. Keywords: architectural theory, design research, science and technology studies (STS), sociology of architecture
Procedia PDF Downloads 130
1079 Pulsed-Wave Doppler Ultrasonographic Assessment of the Maximum Blood Velocity in Common Carotid Artery in Horses after Administration of Ketamine and Acepromazine
Authors: Saman Ahani, Aboozar Dehghan, Roham Vali, Hamid Salehian, Amin Ebrahimi
Abstract:
Pulsed-wave (PW) Doppler ultrasonography is a non-invasive, relatively accurate imaging technique that can measure blood flow velocity. Images can be obtained from the common carotid artery, one of the main vessels supplying blood to vital organs. In horses, factors such as susceptibility to depression of the cardiovascular system and large muscular mass render the animals vulnerable to changes in blood velocity. One of the most important factors causing blood velocity changes is the administration of anesthetic drugs, including ketamine and acepromazine. Thus, in this study, the pulsed-wave Doppler technique was used to assess the highest blood velocity in the common carotid artery following administration of ketamine and acepromazine. Six male and six female healthy Kurdish horses weighing 351 ± 46 kg (mean ± SD) and aged 9.2 ± 1.7 years (mean ± SD) were housed under animal welfare guidelines. After fasting for six hours, the normal blood flow velocity in the common carotid artery was measured using a pulsed-wave Doppler ultrasonography machine (BK Medical, Denmark) and a high-frequency linear transducer (12 MHz), without applying any sedative drugs, as the control condition. The same procedure was repeated after each individual received the following medications: 1.1 and 2.2 mg/kg ketamine (Pfizer, USA), and 0.5 and 1 mg/kg acepromazine (RACEHORSE MEDS, Ukraine), with an interval of 21 days between the administration of each dose and/or drug. The ultrasonographic study was done five (T5) and fifteen (T15) minutes after injecting each dose intravenously. Lastly, the statistical analysis was performed using SPSS software version 22 for Windows, and a P value less than 0.05 was considered statistically significant. 
Five minutes after administration of ketamine (1.1 and 2.2 mg/kg) in both male and female horses, the blood velocity decreased to 38.44 and 34.53 cm/s in males and 39.06 and 34.10 cm/s in females, in comparison to the control group (39.59 and 40.39 cm/s in males and females, respectively), while administration of 0.5 mg/kg acepromazine led to a significant rise (73.15 and 55.80 cm/s in males and females, respectively) (p<0.05). This means that the most drastic change in blood velocity, regardless of sex, was associated with the latter dose/drug. For both medications and both sexes, the higher dose led to a lower blood velocity than the lower dose of the same drug. In all experiments in this study, the blood velocity approached its normal value at T15. In another study comparing the blood velocity changes caused by ketamine and acepromazine in the femoral arteries, the most drastic changes were attributed to ketamine; in this experiment, however, the maximum blood velocity was observed following administration of acepromazine, as measured in the common carotid artery. Therefore, further experiments with the same medications are suggested, using pulsed-wave Doppler to measure blood velocity changes in the femoral and common carotid arteries simultaneously. Keywords: Acepromazine, common carotid artery, horse, ketamine, pulsed-wave doppler ultrasonography
Procedia PDF Downloads 128
1078 Nonlinear Response of Tall Reinforced Concrete Shear Wall Buildings under Wind Loads
Authors: Mahtab Abdollahi Sarvi, Siamak Epackachi, Ali Imanpour
Abstract:
Reinforced concrete shear walls are commonly used as the lateral load-resisting system of mid- to high-rise office or residential buildings around the world. Design of such systems is often governed by wind rather than seismic effects, in particular in low-to-moderate seismic regions. The current design philosophy in the majority of building codes requires elastic response of lateral load-resisting systems, including reinforced concrete shear walls, when subjected to the rare design wind load, resulting in significantly large wall sections needed to meet strength requirements and drift limits. The latter can strongly influence the design in upper stories due to the stringent drift limits specified by building codes, leading to substantial added construction costs for the wall. However, such walls may offer limited to moderate over-strength and ductility thanks to their reserve capacity, provided that they are designed and detailed to appropriately develop that over-strength and ductility under extreme wind loads. This would significantly contribute to reducing construction time and costs, while maintaining structural integrity under gravity loads and both frequently occurring and less frequent wind events. This paper investigates the over-strength and ductility capacity of several hypothetical office buildings located in Edmonton, Canada, with a glance at earthquake design philosophy. The selected models are 10- to 25-story buildings with three types of reinforced concrete shear wall configurations: rectangular, barbell, and flanged. The buildings are designed according to the National Building Code of Canada. Fiber-based numerical models of the walls are then developed in Perform 3D, and the lateral nonlinear behavior of the walls is evaluated by conducting nonlinear static (pushover) analysis. Ductility and over-strength of the structures are obtained from the results of the pushover analyses. 
The results confirmed a moderate nonlinear capacity of reinforced concrete shear walls under extreme wind loads, even as lateral displacements of the walls exceed the serviceability limit states defined in the ASCE Prestandard for Performance-Based Wind Design. The results indicate that the limited nonlinear response observed in reinforced concrete shear walls can be exploited to economize the design of such systems under wind loads. Keywords: concrete shear wall, high-rise buildings, nonlinear static analysis, response modification factor, wind load
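The over-strength and ductility factors discussed in this abstract are conventionally read off the pushover (capacity) curve. The sketch below is a rough illustration only, not the authors' actual procedure: it assumes a simplified yield-point rule (first crossing of 75% of peak base shear) and purely illustrative numbers.

```python
# Sketch: extracting over-strength and ductility from a pushover curve.
# Assumed inputs (illustrative, not from the paper): design base shear,
# a capacity curve as (roof drift, base shear) pairs, and ultimate drift.

def overstrength_and_ductility(drifts, shears, v_design, drift_ultimate):
    """Over-strength = peak base shear / design base shear.
    Ductility = ultimate drift / yield drift; the yield drift is taken
    here as the drift where shear first reaches 75% of peak, a
    simplification of a bilinear idealization."""
    v_max = max(shears)
    omega = v_max / v_design                      # over-strength factor
    for d, v in zip(drifts, shears):              # simplified yield point
        if v >= 0.75 * v_max:
            drift_yield = d
            break
    mu = drift_ultimate / drift_yield             # displacement ductility
    return omega, mu

# Illustrative capacity curve (drift in %, shear in kN):
drifts = [0.0, 0.2, 0.4, 0.6, 1.0, 1.5]
shears = [0, 400, 700, 850, 900, 880]
omega, mu = overstrength_and_ductility(drifts, shears,
                                       v_design=500, drift_ultimate=1.5)
```

In practice the yield point would come from an equal-energy bilinear fit to the Perform 3D pushover output rather than the 75%-of-peak shortcut used here.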
Procedia PDF Downloads 107
1077 Collateral Impact of Water Resources Development in an Arsenic Affected Village of Patna District
Authors: Asrarul H. Jeelani
Abstract:
Arsenic contamination of groundwater and its health implications in the lower Gangetic plain of Indian states began to be reported in the 1980s. The same period was declared the first water decade (1981-1990), with the aim of achieving ‘water for all.’ To fulfill this aim, the Indian government, with the support of international agencies, installed millions of hand-pumps through water resources development programs. The hand-pumps improve the accessibility of the groundwater, but over-extraction increases the chances of drawing in trivalent arsenic, which is more toxic than the pentavalent arsenic of dug-well water in the Gangetic plain and has different physical manifestations. Now, after three decades, Bihar (middle Gangetic plain) is also facing arsenic contamination of groundwater and its health implications. Objective: This interdisciplinary research attempts to understand the health and social implications of arsenicosis among different castes in Haldi Chhapra village and to find the association of these ramifications with water resources development. Methodology: The study used a concurrent quantitative-dominant mixed method (QUAN+qual). The researcher employed household surveys, social mapping, interviews, and participatory interactions, and used secondary data for a retrospective analysis of hand-pumps and the implications of arsenicosis. Findings: The study found that 88.5% (115) of households have hand-pumps as their source of water, while 13.8% use purified bottled water and 3.6% use combinations of hand-pump, bottled water and dug-well water for drinking purposes. Among the population, 3.65% of individuals have arsenicosis, and 2.72% of children between the ages of 5 and 15 years are affected. Caste has also emerged as a variable through quantitative as well as geophysical location analysis: 5.44% of individuals manifesting arsenicosis belong to the scheduled castes (SC), 3.89% to the extremely backward castes (EBC), 2.57% to the backward castes (BC), and 3% to others. 
Among the three clusters of arsenic-poisoned locations, two belong to SC and EBC communities. The village is discriminated against for being arsenic-affected, while affected individuals also face discrimination, isolation, stigma, and difficulty in getting married. The forceful intervention to install hand-pumps in the first water decade and the later restructuring of the dug wells destroyed a conventional method of dug-well cleaning. Conclusion: The common manifestation of arsenicosis has increased by 1.3% within a span of six years in the village. This raises the need for setting up a proper surveillance system in the village. It is imperative to consider the social structure in arsenic mitigation programs, as this research reveals caste to be a significant factor. The health and social implications found in the study are retrospectively analyzed as the collateral impact of water resources development programs in the village. Keywords: arsenicosis, caste, collateral impact, water resources
Procedia PDF Downloads 109
1076 The SHIFT of Consumer Behavior from Fast Fashion to Slow Fashion: A Review and Research Agenda
Authors: Priya Nangia, Sanchita Bansal
Abstract:
As fashion cycles become more rapid, some segments of the fashion industry have adopted increasingly unsustainable production processes to keep up with demand and enhance profit margins. The growing threat to environmental and social wellbeing posed by unethical fast fashion practices, and the need to integrate the targets of the SDGs into this industry, necessitate a shift in the fashion industry's unsustainable nature, which can only be accomplished in the long run if consumers support sustainable fashion by purchasing it. Fast fashion is defined as low-cost, trendy apparel that takes inspiration from the catwalk or celebrity culture and is rapidly transformed into garments at high-street stores to meet consumer demand. Given the importance of identity formation to many consumers, the desire to be “fashionable” often outweighs the desire to be ethical or sustainable. This paradox exemplifies the tension between the human drive to consume and the will to do so in moderation. Previous research suggests that there is an attitude-behavior gap when it comes to determining consumer purchasing behavior, but to the best of our knowledge, no study has analysed how to encourage customers to shift from fast to slow fashion. Against this backdrop, the aim of this study is twofold: first, to identify and examine the factors that impact consumers' decisions to engage in sustainable fashion; second, to develop a comprehensive framework for conceptualizing, and for encouraging researchers and practitioners to foster, sustainable consumer behavior. This study used a systematic approach to collect data and analyse the literature. The approach included three key steps: review planning, review execution, and findings reporting. The authors identified the keywords “sustainable consumption” and “sustainable fashion” and retrieved studies from the Web of Science (WoS) (126 records) and the Scopus database (449 records). 
To make the study more specific, the authors refined the subject area to management, business, and economics in the second step, retrieving 265 records. In the third step, the authors removed duplicate records and manually reviewed the articles to examine their relevance to the research issue. The final 96 research articles were used to develop this study's systematic scheme. The findings indicate that societal norms, demographics, positive emotions, self-efficacy, and awareness all have an effect on customers' decisions to purchase sustainable apparel. The authors propose a framework, denoted by the acronym SHIFT, in which consumers are more likely to engage in sustainable behaviors when the message or context leverages the following factors: (s) social influence, (h) habit formation, (i) individual self, (f) feelings, emotions, and cognition, and (t) tangibility. Furthermore, the authors identify five broad challenges in encouraging sustainable consumer behavior and use them to develop novel propositions. Finally, the authors discuss how the SHIFT framework can be used in practice to drive sustainable consumer behaviors. This research sought to define the boundaries of existing research while also providing new perspectives for future research, with the goal of aiding the development and discovery of new fields of study, thereby expanding knowledge. Keywords: consumer behavior, fast fashion, sustainable consumption, sustainable fashion, systematic literature review
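The three-step screening funnel described in this abstract (database retrieval, subject-area refinement, then de-duplication and manual review) can be sketched programmatically. The record structure, field names, and title-based duplicate rule below are illustrative assumptions, not the authors' actual workflow.

```python
# Sketch of a systematic-review screening funnel.
# Records are (title, subject_area) tuples; a real pipeline would use
# full bibliographic metadata (DOI, abstract, source database).

def screen(records, allowed_areas):
    # Step 2: keep only records in the allowed subject areas.
    refined = [r for r in records if r[1] in allowed_areas]
    # Step 3: drop duplicate titles (the same study indexed in both
    # databases), keeping the first occurrence.
    seen, unique = set(), []
    for title, area in refined:
        key = title.lower().strip()
        if key not in seen:
            seen.add(key)
            unique.append((title, area))
    return unique

# Illustrative toy records standing in for WoS and Scopus exports:
wos = [(f"study-{i}", "management" if i % 2 else "chemistry")
       for i in range(10)]
scopus = [(f"study-{i}", "management") for i in range(5, 15)]
kept = screen(wos + scopus, {"management", "business", "economics"})
```

The final manual relevance review (265 records down to 96 in the study) cannot be automated this way; the sketch covers only the mechanical refinement and de-duplication steps.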
Procedia PDF Downloads 91
1075 Convention Refugees in New Zealand: Being Trapped in Immigration Limbo without the Right to Obtain a Visa
Authors: Saska Alexandria Hayes
Abstract:
Multiple Convention Refugees in New Zealand are stuck in a state of immigration limbo due to a lack of defined immigration policies. The Refugee Convention of 1951 does not confer a right to be issued a permanent right to live and work in the country of asylum. A gap in New Zealand's immigration law and policy has left Convention Refugees without the right to obtain a resident or temporary entry visa. The significant lack of literature on this topic suggests that the lack of visa options for Convention Refugees in New Zealand is a widely unknown or unacknowledged issue. Refugees in New Zealand enjoy the right of non-refoulement contained in Article 33 of the Refugee Convention 1951, whether their status is lawful or unlawful. However, a number of rights contained in the Refugee Convention 1951, such as the right to gainful employment and social security, are limited to refugees who maintain lawful immigration status. If a Convention Refugee is denied a resident visa, the only temporary entry visa a Convention Refugee can apply for in New Zealand is discretionary. The appeal cases heard at the Immigration and Protection Tribunal establish that Immigration New Zealand has declined resident and discretionary temporary entry visa applications by Convention Refugees for failing to meet the health or character immigration instructions. The inability of a Convention Refugee to gain residency in New Zealand creates a dependence on the issue of discretionary temporary entry visas to maintain lawful status. The appeal cases record that this reliance has led to Convention Refugees' lawful immigration status being called into question, temporarily depriving them of the rights the Refugee Convention 1951 grants to lawful refugees. In one case, the process of applying for a discretionary temporary entry visa led to a lawful Convention Refugee being temporarily deprived of the right to social security, breaching Article 24 of the Refugee Convention 1951. 
The judiciary has stated that constant reliance on the issue of discretionary temporary entry visas for Convention Refugees can lead to a breach of New Zealand's international obligations under Article 7 of the International Covenant on Civil and Political Rights. The appeal cases suggest that, despite successful judicial proceedings, at least three persons have been made to rely on the issue of discretionary temporary entry visas potentially indefinitely. The appeal cases establish that a Convention Refugee can be denied a discretionary temporary entry visa and become unlawful. Unlawful status could ultimately breach New Zealand's obligations under Article 33 of the Refugee Convention 1951, as it would procedurally deny Convention Refugees asylum. It would force them to choose between the right of non-refoulement and leaving New Zealand to seek the ability to access all the human rights contained in the Universal Declaration of Human Rights elsewhere. This paper discusses how the current system has given rise to these breaches and emphasizes the need to create a designated temporary entry visa category for Convention Refugees. Keywords: domestic policy, immigration, migration, New Zealand
Procedia PDF Downloads 104
1074 A Study on Aquatic Bycatch Mortality Estimation Due to Prawn Seed Collection and Alteration of Collection Method through Sustainable Practices in Selected Areas of Sundarban Biosphere Reserve (SBR), India
Authors: Samrat Paul, Satyajit Pahari, Krishnendu Basak, Amitava Roy
Abstract:
Fishing is one of the pivotal livelihood activities, especially in developing countries, and it has been considered an important occupation for human society since the era of human settlement began. In simple terms, non-target catches of any species during fishing can be considered ‘bycatch,’ and fishing bycatch is neither a new fishery management issue nor a new problem. Sundarban is one of the world’s largest mangrove areas, extending over 10,200 sq. km in India and Bangladesh. This vast mangrove biome resource is used commercially by the local inhabitants, especially forest fringe villagers (FFVs), to sustain their livelihoods. In Sundarban, over-fishing, especially the collection of post-larvae of wild Penaeus monodon, is one of the major concerns: during the collection of P. monodon, different aquatic species are destroyed through bycatch mortality, which changes productivity and may negatively impact the entire biodiversity of the ecosystem. Wild prawn seed collection gear such as small-mesh nets poses a serious threat to aquatic stocks, since the catch is not limited to prawn seed larvae. As prawn seed collection is inexpensive, requires little monetary investment, and is lucrative, people readily take it up as a source of income. Wildlife Trust of India’s (WTI) intervention in selected forest fringe villages of Sundarban Tiger Reserve (STR) aimed to estimate and reduce the mortality of aquatic bycatch by involving local communities in a newly developed release method and by reducing their time engagement in prawn seed collection (PSC) through Alternate Income Generation (AIG). The study on taxonomic identification of the bycatch was conducted during the period of March to October 2019. Collected samples were preserved in 70% ethyl alcohol for identification, and all the preserved bycatch samples were identified morphologically with the expertise of the Zoological Survey of India (ZSI), Kolkata. 
Around 74 different aquatic species were recorded: 11 species of molluscs, 41 fish species (of which 31 were identified), and 22 species of crustaceans (of which 18 were identified). Around 13 species belonging to various orders and families could not be identified morphologically because they were collected at the juvenile stage. The study reveals that for every single prawn seed collected, eight individual lives of associated fauna are lost. Zero bycatch mortality is not practical; rather, collectors should focus on bycatch reduction by avoiding capture, allowing escape, and reducing mortality, and must change their fishing method by increasing net mesh size, which will avoid non-target captures. However, as the prawns are small (generally 1-1.5 inches in length), increasing the mesh size would leave little or no profit for collectors. In this case, returning bycatch to the water is considered one of the best ways to reduce bycatch mortality and is a more sustainable practice.
Keywords: bycatch mortality, biodiversity, mangrove biome resource, sustainable practice, Alternate Income Generation (AIG)
Procedia PDF Downloads 153
1073 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work focuses on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span.
Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
Procedia PDF Downloads 1701072 Life Cycle Assessment to Study the Acidification and Eutrophication Impacts of Sweet Cherry Production
Authors: G. Bravo, D. Lopez, A. Iriarte
Abstract:
Several organizations and governments have created a demand for information about the environmental impacts of agricultural products. Today, the export-oriented fruit sector in Chile is being challenged to quantify and reduce its environmental impacts. Chile is the largest southern hemisphere producer and exporter of sweet cherry fruit. Chilean sweet cherry production reached a volume of 80,000 tons in 2012. The main destination market for the Chilean cherry in 2012 was Asia (including Hong Kong and China), taking in 69% of exported volume. Another important market was the United States with 16% participation, followed by Latin America (7%) and Europe (6%). Concerning geographical distribution, Chilean conventional cherry production is concentrated in the center-south area, between the regions of Maule and O'Higgins; together, both regions represent 81% of the planted surface. Life Cycle Assessment (LCA) is widely accepted as one of the major methodologies for assessing the environmental impacts of products or services. The LCA identifies the material, energy, and waste flows of a product or service, and their impact on the environment. There are scant studies that examine the impacts of sweet cherry cultivation, such as acidification and eutrophication. Within this context, the main objective of this study is to evaluate, using LCA, the acidification and eutrophication impacts of sweet cherry production in Chile. An additional objective is to identify the agricultural inputs that contribute significantly to the impacts of this fruit. The system under study included all the life cycle stages from the cradle to the farm gate (harvested sweet cherry). The data on sweet cherry production correspond to nationwide representative practices and are based on technical-economic studies and field information obtained in several face-to-face interviews.
The study takes into account the following agricultural inputs: fertilizers, pesticides, diesel consumption for agricultural operations, machinery, and electricity for irrigation. The results indicated that mineral fertilizers are the most important contributors to the acidification and eutrophication impacts of sweet cherry cultivation. Improvement options are suggested for this hotspot in order to reduce the environmental impacts. The results support the planning and promotion of low-impact procedures among fruit companies, policymakers, and other stakeholders. In this context, this study is one of the first assessments of the environmental impacts of sweet cherry production. New field data or evaluation of other life cycle stages could further improve knowledge of the impacts of this fruit. This study may also contribute environmental information for other countries with similar sweet cherry production.
Keywords: acidification, eutrophication, life cycle assessment, sweet cherry production
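The core arithmetic of LCA impact assessment mentioned in this abstract (multiplying each inventory flow by a characterization factor and summing per category) can be sketched in a few lines. This is a generic illustration, not the study's actual model; the substances, characterization-factor values, and inventory amounts below are illustrative assumptions, not data from the paper.

```python
# Minimal LCA characterization step: aggregate an emission inventory
# into one category indicator (e.g., acidification in kg SO2-eq).
ACIDIFICATION_CF = {  # kg SO2-eq per kg emitted (illustrative values)
    "NH3": 1.88,
    "NOx": 0.70,
    "SO2": 1.00,
}

def characterize(inventory, factors):
    """Sum amount * characterization factor over all flows that the
    impact category covers; flows without a factor (e.g., CO2 for
    acidification) are simply skipped."""
    return sum(amount * factors[substance]
               for substance, amount in inventory.items()
               if substance in factors)

# Hypothetical per-functional-unit inventory (kg per ton of cherries).
inventory = {"NH3": 0.5, "NOx": 0.2, "CO2": 300.0}
acidification = characterize(inventory, ACIDIFICATION_CF)
```

In a full LCA the inventory itself would be assembled from all cradle-to-farm-gate stages (fertilizer production, diesel combustion, electricity for irrigation) before this characterization step is applied.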
Procedia PDF Downloads 271
1071 Methylphenidate Use by Canadian Children and Adolescents and the Associated Adverse Reactions
Authors: Ming-Dong Wang, Abigail F. Ruby, Michelle E. Ross
Abstract:
Methylphenidate is a first-line treatment drug for attention deficit hyperactivity disorder (ADHD), a common mental health disorder in children and adolescents. Over the last several decades, the rate of children and adolescents using ADHD medication has been increasing in many countries. A recent study found that the prevalence of ADHD medication use among children aged 3-18 years increased in 13 different world regions between 2001 and 2015, with absolute increases ranging from 0.02 to 0.26% per year. The goal of this study was to examine the use of methylphenidate in Canadian children and its associated adverse reactions. Methylphenidate use information among young Canadians aged 0-14 years was extracted from IQVIA data on prescriptions dispensed by pharmacies between April 2014 and June 2020. The adverse reaction information associated with methylphenidate use was extracted from the Canada Vigilance database for the same time period. Methylphenidate use trends were analyzed based on sex, age group (0-4 years, 5-9 years, and 10-14 years), and geographical location (province). The common classes of adverse reactions associated with methylphenidate use were sorted, and the relative risks associated with methylphenidate use as compared with two second-line amphetamine medications for ADHD were estimated. This study revealed that among Canadians aged 0-14 years, every 100 people used about 25 prescriptions (or 23,000 mg) of methylphenidate per year during the study period, and use increased with time. Boys used almost three times more methylphenidate than girls. The amount of drug used also increased with age: Canadians aged 10-14 years used nearly three times as much medication as those aged 5-9 years. Seasonal methylphenidate use patterns were apparent among young Canadians, but the seasonal trends differed among the three age groups.
Methylphenidate use varied from region to region, and the highest use was observed in Quebec, where it was at least double that of any other province. During the study period, Health Canada received 304 adverse reaction reports associated with the use of methylphenidate for Canadians aged 0-14 years. The number of adverse reaction reports received for boys was 3.5 times higher than that for girls. The three most common adverse reaction classes were psychiatric disorders; nervous system disorders; and injury, poisoning and procedural complications. The most commonly reported adverse reaction for boys was aggression (11.2%), while for girls it was tremor (9.6%). The safety profile in terms of adverse reaction classes associated with methylphenidate use was similar to that of the selected control products. Methylphenidate is a commonly used pharmaceutical product in young Canadians, particularly in the province of Quebec. Boys used approximately three times more of this product than girls. Future investigation is needed to determine which factors are associated with the observed geographic variations in Canada.
Keywords: adverse reaction risk, methylphenidate, prescription trend, use variation
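The relative-risk comparison against the two second-line amphetamine comparators can be illustrated with a minimal sketch. This is the textbook relative-risk calculation from 2x2 counts, not the study's actual estimation procedure (the abstract does not specify it), and the counts in the usage example are invented for illustration only.

```python
def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """Relative risk of an adverse event in an exposed group versus a
    comparator group, computed from simple 2x2 counts:
    RR = (events_exposed / n_exposed) / (events_control / n_control)."""
    risk_exposed = exposed_events / exposed_total
    risk_control = control_events / control_total
    return risk_exposed / risk_control

# Hypothetical counts: 30 reports among 300 methylphenidate users,
# 10 reports among 200 comparator users -> RR = 2.0.
rr = relative_risk(30, 300, 10, 200)
```

A real pharmacovigilance analysis would additionally need confidence intervals and, for spontaneous-report data, disproportionality measures rather than raw risks.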
Procedia PDF Downloads 161
1070 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud
Authors: Mokopane Charles Marakalala
Abstract:
Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions produced a sharp increase in cybercrime since the beginning of the lockdown, resulting in huge losses for the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money-laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting mobile fraud, drawing on SABRIC data. SABRIC faced the challenge of successful mobile fraud: cybercriminals could hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals constantly scour the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to complete their regular bank-related transactions quickly and conveniently. SABRIC regularly highlighted incidents of mobile fraud, corruption, and maladministration; customers who do not secure their online banking are vulnerable to falling prey to scams such as mobile fraud. Criminals have made use of digital platforms since the development of technology. In 2017, 13,438 incidents involving banking apps, internet banking, and mobile banking caused the sector gross losses of more than R250,000,000. The affected parties are left pointing fingers at one another while the fraudster makes off with the money. Non-probability (purposive) sampling was used to select participants, and data were collected through telephone calls and virtual interviews.
The results indicate a relationship between remote online banking and the increase in money-laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the significance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money-laundering. The researcher recommends that awareness among bank staff be strengthened through the provision of requisite and adequate training.
Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting
Procedia PDF Downloads 104
1069 Development of National Guidelines for Conducting Research and Development of Herbal Medicine in Thailand According to International Standards
Authors: Patcharaporn Sudchada, Nuntika Prommee
Abstract:
Background: Herbal medicines constitute a vital component of Thailand's healthcare system and possess significant potential for international recognition. However, the absence of standardized clinical research guidelines aligned with international standards, coupled with unique local challenges, has hindered the development and registration of Thai herbal medicines in the global market. Objective: To establish comprehensive research and development guidelines for herbal medicine formulations that comply with international standards, with particular emphasis on enhancing research quality, scientific credibility, and facilitating both domestic registration and international market acceptance. Methods: The research methodology comprised eight sequential phases: (1) systematic collection and review of relevant documentation and regulatory frameworks; (2) development of preliminary content structure and template designs; (3) systematic analysis and synthesis of scientific evidence and regulatory data; (4) creation of detailed research guidelines and accompanying templates; (5) execution of domestic and international consultation meetings and study visits involving nine stakeholder groups; (6) systematic expert review of the draft guidelines; (7) incorporation of feedback from relevant regulatory and research agencies; and (8) finalization and validation of the comprehensive guidelines. Results: The study produced comprehensive research and development guidelines for herbal medicines that meet international standards, encompassing the complete development pathway from initial concept through pre-clinical studies, product development, preparation protocols, clinical trial conduct, and product registration procedures. The guidelines include standardized templates and forms specifically designed for clinical research documentation. 
Conclusion: The established guidelines represent a significant advancement in standardizing clinical research for Thai herbal medicines, enhancing their scientific credibility and potential for international acceptance. Nevertheless, Thailand continues to face specific challenges, including insufficient specialized personnel in herbal research (particularly in clinical trials), challenges in integrating traditional Thai medicine principles with modern scientific methodology, limited research infrastructure, inadequate funding mechanisms, complex registration procedures, and public skepticism toward herbal products. The policy recommendations outlined in this research provide a strategic framework for addressing these challenges and promoting sustainable development of Thai herbal medicines within the national context.
Keywords: herbal medicine, clinical research, international standards, research guidelines, drug development, traditional Thai medicine, regulatory compliance
Procedia PDF Downloads 8
1068 An Efficient Process Analysis and Control Method for Tire Mixing Operation
Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park
Abstract:
Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and competitiveness maintained by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building, and curing, the mixing process is an essential and important step because the main component of the tire, called compound, is formed at this step. Compound, a rubber synthesis with various characteristics, plays its own role required for a tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this kind of feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle.
At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate that evaluates the difference between two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend our current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process
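The iteration loop described in this abstract (update velocities and positions, compare solutions, repeat until a stopping condition holds) follows the standard PSO template. Below is a minimal generic sketch in Python minimizing a continuous toy objective; it is not the authors' scheduling-specific particle encoding, and the function name and all parameter values (inertia `w`, cognitive/social coefficients `c1`, `c2`, swarm size, iteration count) are illustrative assumptions.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal particle swarm optimization: each particle keeps a position,
    a velocity, and a personal best; the swarm tracks a global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:  # improved personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:  # improved global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimize the sphere function sum(x^2) in 2 dimensions.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```

For the scheduling problem itself, each particle would instead encode a compound processing sequence plus machine assignments, with fitness given by the makespan of the decoded schedule (including sequence dependent setup times).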
Procedia PDF Downloads 266
1067 A Research on the Improvement of Small and Medium-Sized City in Early-Modern China (1895-1927): Taking Southern Jiangsu as an Example
Authors: Xiaoqiang Fu, Baihao Li
Abstract:
In 1895, the failure of the Sino-Japanese War prompted the trend of comprehensive and systematic study of western patterns in China. In urban planning and construction, an urban reform movement sprang up slowly, aimed at renovating and reconstructing traditional cities into modern cities similar to the concessions. During the movement, Chinese traditional cities initiated a process of modern urban planning toward their modernization. Meanwhile, the traditional planning morphology and system started to disintegrate; western forms and technology became the paradigm instead. Therefore, the improvement of existing cities became the prototype of urban planning in early modern China. Current research on the movement mainly concentrates on large cities, concessions, railway hub cities, and similar special cities. However, systematic research on the large number of traditional small and medium-sized cities remains blank. This paper takes the improvement constructions of small and medium-sized cities in the southern region of Jiangsu Province as its research object. First of all, the criteria for small and medium-sized cities are based on the administrative levels of general office and cities at the county level. Secondly, Southern Jiangsu is well suited as the research object. The southern area of Jiangsu Province, called Southern Jiangsu for short, was the most economically developed region in Jiangsu and one of the most economically developed and most urbanized regions in China. As the most developed agricultural area in ancient China, Southern Jiangsu formed a large number of traditional small and medium-sized cities. In early modern times, with the help of Shanghai's economic radiation, geographical advantage, and a powerful economic foundation, Southern Jiangsu became an important birthplace of Chinese national industry.
Furthermore, the strong business atmosphere promoted widespread urban improvement practices that were unmatched in other regions. Meanwhile, the example of Shanghai, Zhenjiang, Suzhou, and other port cities became the improvement pattern for small and medium-sized cities in Southern Jiangsu. This paper analyzes the reform movement of the small and medium-sized cities in Southern Jiangsu (1895-1927), including its subjects, objects, laws, technologies, and the influencing political and social factors. Finally, this paper reveals the formation mechanism and characteristics of the urban improvement movement in early modern China. According to the paper, the improvement of small and medium-sized cities was a kind of gestation of local city planning culture in early modern China, fusing external introduction with endogenous development.
Keywords: early modern China, improvement of small-medium city, southern region of Jiangsu province, urban planning history of China
Procedia PDF Downloads 260
1066 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our Software Engineering students, Calculus 1 at Universidad ORT Uruguay focuses on competencies such as the capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE guidelines). Every semester we try to reflect on our practice and answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students merely to transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using GeoGebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found.
The course demanded an excessive number of weekly hours of mathematics from students, and, as the course was non-compulsory, attendance decreased over time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure at working with this methodology. This technology-oriented teaching approach strengthens the math competencies students need for Calculus 1 and improves student performance, engagement, and self-confidence. It is important for us as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer in order to validate these preliminary results.
Keywords: calculus, engineering education, PreCalculus, Summer Program
Procedia PDF Downloads 291
1065 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, achieving up to 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. Consequently, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, since existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the difference in data format, most prior art accumulates events into frame data and feeds them to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even with framed data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as a substitute for intensity, it is apparent that polarity information is not rich enough. Considering this context, we proposed using the timestamp information as the data representation fed to deep learning.
Concretely, we first make frame data divided by a certain time period and then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expected this data representation to capture features, especially of moving objects, because the timestamps represent movement direction and speed. Using the proposed method, we built our own dataset with a DVS mounted on a parked car to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a largely static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which constructs frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
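The timestamp-based frame representation described above can be sketched concretely: events in a time window are collapsed into one frame whose pixel values encode recency. This is an illustrative sketch only; the linear recency weighting, the function name, and the event tuple layout `(x, y, t, polarity)` are assumptions, since the abstract does not specify the exact frame period or normalization.

```python
import numpy as np

def events_to_time_surface(events, shape, t_window):
    """Collapse DVS events (x, y, t, polarity) into a single frame where
    pixel intensity encodes recency: the newest event in the window maps
    to 1.0 and events at the start of the window map to 0.0.
    Polarity is deliberately ignored here; the timestamp replaces it as
    the intensity source, as the abstract proposes."""
    frame = np.zeros(shape, dtype=np.float32)
    t_end = max(e[2] for e in events)
    t_start = t_end - t_window
    for x, y, t, p in events:
        if t < t_start:
            continue  # outside the current frame's window
        value = (t - t_start) / t_window  # linear recency weighting
        frame[y, x] = max(frame[y, x], value)  # keep the newest event per pixel
    return frame

# Toy usage: three events at t = 0, 50, and 100 in a 100-unit window.
events = [(0, 0, 0.0, 1), (2, 0, 50.0, -1), (1, 0, 100.0, 1)]
frame = events_to_time_surface(events, shape=(4, 4), t_window=100.0)
```

Frames like this can then be stacked or tiled and fed to a CNN exactly like ordinary single-channel images, which is the point of the representation: moving objects leave a gradient of recency values along their direction of travel.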
Procedia PDF Downloads 101