Search results for: window location
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2584


154 Diasporic Literature

Authors: Shamsher Singh

Abstract:

Diaspora literature involves a concept of a native land, from which displacement occurs, and a record of harsh journeys undertaken on account of economic compulsions. Basically, a diaspora is a splintered community living in eviction. The scattering initially signifies the location of a fluid human autonomous space involving a complex set of negotiations and exchanges between nostalgia and desire for the native land and the making of a new home: adapting to the relationships between minority and majority, acting as spokespersons for minority rights and for their people back in the native place, and, significantly, transacting the Contact Zone, a space charged with the possibility of multiple challenges. Diasporic writers write against the background of the sublime qualities of their homeland and, at the same time, try to fit themselves into the traditions and cultural values of other, unfamiliar communities or lands. This literature also serves as an interconnection of the various cultures involved; it is used to understand the customs of different cultures and countries, and it is a source of inspiration globally. Although diasporic literature originated in the 20th century, it spread to countries such as Britain, Canada, America, Denmark, the Netherlands, Australia, Kenya, Sweden and Kuwait, and to different parts of Europe. The term 'Diaspora' denotes the movement of people away from their own country or motherland. From a historical point of view, 'Diaspora' is often associated with the dispersion of the Jewish people. At present, the term is used for the dispersal of social or cultural groups. Such a group lives in two different streams of culture at the same time: the culture left behind, and the new cultural situation to which it must adapt. The diasporic mind hangs between the birth land and the place of work simultaneously. This mental state of dual existence gives birth to a sensation of dysphoria.
Litterateurs have had different experiences of this sensation: social, universal, political, economic, and experiences of the strange land. The struggle of these experiences is seen in diasporic literature. When a person moves to a different land or country to fulfill his dreams, the discrimination of language, the difficulties of work and other troubles with strangers make his relationship with his past more emotional and deeper. These past memories and relations create further difficulties in settling in a foreign land. He lives there physically, but his mental state remains constantly in his past, and he lives out his life in those background memories. A person living in diaspora is, in fact, a man of dual vision. Although this double vision expands his global consciousness, it also gives him judgemental qualities for understanding others. At the same time, he weighs his respect for his native land against the conditions he experiences in the foreign land, and he finds it difficult to survive in those conditions. It can be said that diasporic literature points to a person or social group living a dual existence, and it is this inner interrogation that becomes the wellspring of diasporic literature.

Keywords: homeland sickness, language problem, quest for identity, materialistic desire

Procedia PDF Downloads 42
153 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model

Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond

Abstract:

The surveillance of infectious diseases is necessary to describe their occurrence and to help plan, implement and evaluate risk mitigation activities. However, the exact number of detected cases may remain unknown when surveillance is based on serological tests, because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in the field of disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion, in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses that exhibited at least one positive serological result in the viral neutralization test between 2006 and 2013 were used for the analysis (n=1,645). The data consisted of the annual antibody titers and the location (town) of the subjects. A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, and epidemiologists) was reached to define seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model with the R software. The binomial denominator was the number of horses tested in each infected town. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013.
Subsequently, the sensitivity of the surveillance system was estimated as the ratio of the number of detected outbreaks to the total number of outbreaks that occurred (including unreported outbreaks) estimated using the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval, CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research. Such rules, adjusted to the local environment, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting the antibody titer could be useful for other human and animal diseases and zoonoses when accurate information about the serological response in naturally infected subjects is lacking in the literature. This study shows how capture-recapture methods may help to estimate the sensitivity of an imperfect surveillance system and to valorize surveillance data. The sensitivity of the surveillance system for equine viral arteritis is relatively high, which supports its relevance for preventing the spread of the disease.
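
The outbreak-count correction described above can be illustrated with a simplified maximum-likelihood version of a unilist zero-truncated binomial model. This is only a sketch: the study itself used a Bayesian fit in R, and the per-town counts below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_total_outbreaks(detections, n_tested):
    """Unilist zero-truncated binomial capture-recapture (simplified).

    detections: seroconversion counts per detected town (all > 0)
    n_tested:   number of horses tested in each detected town
    Returns (p_hat, total_hat): the estimated per-horse detection
    probability and the estimated total number of infected towns,
    including towns where no seroconversion was detected.
    """
    k = np.asarray(detections, float)
    n = np.asarray(n_tested, float)

    def neg_loglik(p):
        # Binomial(n, p) truncated at zero: P(K=k | K>0).
        # The binomial coefficient is constant in p and is dropped.
        logpmf = k * np.log(p) + (n - k) * np.log1p(-p)
        log_trunc = np.log1p(-(1.0 - p) ** n)  # log P(K > 0)
        return -(logpmf - log_trunc).sum()

    p_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6),
                            method="bounded").x
    # Horvitz-Thompson-style correction: a town with n tested horses
    # is detected with probability 1 - (1-p)^n.
    total_hat = (1.0 / (1.0 - (1.0 - p_hat) ** n)).sum()
    return p_hat, total_hat

# Invented example: 3 detected towns, 10 horses tested in each
p_hat, total_hat = estimate_total_outbreaks(detections=[2, 1, 3],
                                            n_tested=[10, 10, 10])
sensitivity = 3 / total_hat  # detected / estimated total
```

Dividing the number of detected outbreaks by the estimated total reproduces the sensitivity calculation reported in the abstract (177/215 ≈ 82%).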

Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance

Procedia PDF Downloads 267
152 The Effect of Manure Loaded Biochar on Soil Microbial Communities

Authors: T. Weber, D. MacKenzie

Abstract:

This paper describes the use of an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation was used for the behaviour of non-linear dynamic systems with the required observer structure, working with parallel real-time simulation based on a state-space representation. The proposed model was also used for electrodynamic effects, including ionising effects and eddy current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in such systems in real time. The spatial temperature distribution may also be used for further purposes. With the above system, uncertainties and disturbances may be determined. This provides a more precise estimation of the system states and, additionally, an estimation of the ionising disturbances that arise due to radiation effects in space systems. The results have also shown that a system can be developed specifically for the real-time calculation (estimation) of the radiation effects only. Electronic systems can be damaged by impacts with charged-particle flux in a space or radiation environment. A Total Ionising Dose (TID) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeVcm²/mg may assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit; ionising radiation can activate an additional parasitic thyristor. This short circuit between semiconductor elements can destroy an unprotected device. Single-Event Burnout (SEB), on the other hand, increases the current between the drain and source of a MOSFET and destroys the component in a short time. A Single-Event Gate Rupture (SEGR) can also destroy a semiconductor's dielectric.
In order to be able to react to these processes, the presence of ionising radiation and its dose must be calculated within a short time. For this purpose, sensors may be used for a realistic evaluation of the diffusion and ionising effects in the test system. A Peltier element is used to evaluate the dynamic temperature increase (dT/dt), from which a measure of the ionisation processes, and thus of the radiation, is derived. In addition, a piezo element may be used to record highly dynamic vibrations and oscillations in order to capture impacts of the charged-particle flux. All available sensors shall also be used to calibrate the spatial distributions. From the measured values and the known locations of the sensors, the entire distribution in space can be calculated retroactively, or more accurately. From this information, the type of ionisation and its direct effect on the system can be determined, and preventive processes, up to shutdown, can be activated. The results show possibilities for performing higher-quality and faster simulations independent of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.
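
The observer structure described above can be sketched as a discrete-time simulation of a linear state-space plant with a Luenberger observer. This is a minimal illustration of the general technique, not the paper's model: the matrices, gain, and time step below are illustrative placeholders.

```python
import numpy as np

def simulate_observer(A, B, C, L, u, x0, x0_hat, steps, dt=1e-3):
    """Euler-step simulation of a plant and its Luenberger observer.

    Plant:    x' = A x + B u,  measured output y = C x
    Observer: xh' = A xh + B u + L (y - C xh)
    Returns the estimation-error norm ||x - xh|| at each step.
    """
    x = np.array(x0, float)
    xh = np.array(x0_hat, float)
    err = []
    for _ in range(steps):
        y = C @ x                                   # measurement
        x = x + dt * (A @ x + B @ u)                # true state update
        xh = xh + dt * (A @ xh + B @ u + L @ (y - C @ xh))
        err.append(np.linalg.norm(x - xh))
    return np.array(err)

# Illustrative stable 2-state system with a hand-picked observer gain;
# the estimation error should decay toward zero.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[5.0], [6.0]])
err = simulate_observer(A, B, C, L, u=np.array([1.0]),
                        x0=[1.0, 0.0], x0_hat=[0.0, 0.0], steps=5000)
```

In the same spirit, a disturbance estimate (e.g., of an ionising-radiation effect) can be obtained by augmenting the state vector with the disturbance and letting the observer reconstruct it from the measured outputs.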

Keywords: cattle, biochar, manure, microbial activity

Procedia PDF Downloads 80
151 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education

Authors: Gleyce Assis Da Silva Barbosa

Abstract:

In recent years, we have witnessed a vertiginous growth of large education companies. Offspring of national and global capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks built by mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium and large schools and universities, teaching systems, and other products and services. They are also able to weave their webs, directly or indirectly, into philanthropic circles, limited partnerships, family businesses and even public education, through various mechanisms of outsourcing, privatization and the commercialization of products for the sector. Although the growth of these groups in basic education seems to be a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to higher-education conglomerates and other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in formulating guidelines that boosted the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer-clients. Financial power combined with the neo-liberalization of state public policies allowed the profusion of social exclusion, the increase in individuals without access to basic services, deindustrialization, automation, capital volatility and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generates various impacts, such as the precariousness of work. Understanding the connection between these processes, which engender the economy, allows us to see their consequences in labor relations and in the territory.
In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the direction of education on the national and even international scene, since this process is linked to the multiple scales of financial globalization. Therefore, the present research has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. The methodology combined a survey of laws, data and public policies on the subject; data from these companies available on websites for investors; a survey of information on the global and national companies that operate in Brazilian basic education; and a mapping of the expansion of educational oligopolies using public data on the location of schools. With this, the research intends to provide information about the ongoing commodification process in the country, and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can bring to teaching work.

Keywords: financialization, oligopolies, education, Brazil

Procedia PDF Downloads 38
150 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using an adjoint method, with the design variables defined within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, combining parametric effectiveness with a differential evolution algorithm in order to create an optimal parameterization. In this manuscript, we show that by performing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered, and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and truncation error of finite differences. The parametric effectiveness is then computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed a priori.
However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolution algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization in which the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
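
The shape-preserving property of knot insertion that the refinement strategy relies on can be demonstrated in a few lines with SciPy (used here in place of the authors' CAD kernel; the curve is a stand-in for a blade profile, and the knot location is arbitrary):

```python
import numpy as np
from scipy.interpolate import splrep, splev, insert

# Fit an interpolating cubic B-spline to sample data
# (a stand-in for a blade-profile curve).
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x)
tck = splrep(x, y, s=0)

# Insert a knot at t=0.37: the spline gains a control point
# (enlarging the design space) while the curve itself is unchanged.
tck_refined = insert(0.37, tck)

xs = np.linspace(0.0, 1.0, 200)
max_dev = np.max(np.abs(splev(xs, tck) - splev(xs, tck_refined)))
assert max_dev < 1e-10  # refinement leaves the initial shape intact
```

The deviation between the original and refined splines is at machine-precision level, which is exactly why knot insertion can enlarge the design space mid-optimization without perturbing the current optimum.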

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 90
149 Developing Digital Competencies in Aboriginal Students through University-College Partnerships

Authors: W. S. Barber, S. L. King

Abstract:

This paper reports on a pilot project to develop a collaborative partnership between a community college in rural northern Ontario, Canada, and an urban university in Oshawa, in the greater Toronto area. The partner institutions will collaborate to address the learning needs of university applicants whose goal is to attain an undergraduate BA in Educational Studies and Digital Technology, but who may not live in a geographical location that would facilitate this pathways process. The UOIT BA degree is attained through a 2+2 program, in which students with a two-year college diploma or equivalent can attain a four-year undergraduate degree. The goals reported for the project are: 1. to expand the BA program to include an additional stream covering serious educational games, simulations and virtual environments; 2. to develop fully online learning modules (using both synchronous and asynchronous technologies) for use by university applicants who are not geographically located close to a physical university site; 3. to assess the digital competencies of all students, including members of local, distance and Indigenous communities, using a validated tool developed and tested by UOIT across numerous populations. This tool, the General Technical Competency Use and Scale (GTCU), will provide the collaborating institutions with data for analyzing how well students are prepared to succeed in fully online learning communities. Philosophically, the UOIT BA program is based on a fully online learning communities (FOLC) model that can be accessed from anywhere in the world through digital learning environments via audio-video conferencing tools such as Adobe Connect. It also follows models of adult learning and mobile learning, and makes a university degree accessible to the growing demographic of adult learners who may use mobile devices to learn anywhere, anytime.
The program is based on key principles of Problem-Based Learning, allowing students to build their own understandings through the co-design of the learning environment in collaboration with the instructors and their peers. In this way, the degree allows students to personalize and individualize their learning based on their own culture, background and professional and personal experiences. Using modified flipped-classroom strategies, students are able to interrogate video modules on their own time in preparation for one-hour discussions occurring in video-conferencing sessions. As a consequence of the program's flexibility, students may continue to work full- or part-time. The partner institutions will co-develop four new modules, administer the GTCU and share data, while creating a new stream of the UOIT BA degree. This will increase accessibility for students bridging from community college to university through a fully digital environment. We aim to work collaboratively with Indigenous elders, community members and distance-education instructors to increase opportunities for more students to attain a university education.

Keywords: aboriginal, college, competencies, digital, universities

Procedia PDF Downloads 197
148 Treatment Outcome of Corneal Ulcers Using Levofloxacin Hydrate 1.5% Ophthalmic Solution and Adjuvant Oral Ciprofloxacin, a Treatment Strategy Applicable to Primary Healthcare

Authors: Celine Shi Ying Lee, Jong Jian Lee

Abstract:

Background: Infectious keratitis is one of the leading causes of blindness worldwide. Prompt treatment with effective medication controls the infection early, preventing corneal scarring and visual loss. Fluoroquinolone ophthalmic medications are used because of their broad-spectrum properties, potency, good intraocular penetration, and low toxicity. The study aims to evaluate the treatment outcome of corneal ulcers using levofloxacin 1.5% ophthalmic solution (LVFX) with adjuvant oral ciprofloxacin when indicated, and to apply this treatment strategy in primary health care as first-line treatment. Methods: Patients with infective corneal ulcers treated in an eye centre were recruited. Inclusion criteria included corneal infection consistent with bacterial keratitis, with single or multiple small corneal ulcers. Treatment regime: LVFX hourly for the first 2 days, 2-hourly from the 3rd day, and 3-hourly on the 5th day of review. Adjuvant oral ciprofloxacin 500 mg BD was administered for 5 days if there were multiple corneal ulcers or when the location of the corneal ulcer was central or paracentral. Results: 47 subjects were recruited: 16 (34%) males and 31 (66%) females. 40 subjects (85%) had contact lens (CL)-related corneal ulcers, and 7 subjects (15%) had non-CL-related ulcers. 42 subjects (89%) presented with one ulcer, of whom 20 (48%) needed adjuvant therapy. 5 subjects presented with 2 or 3 ulcers, of whom 3 needed adjuvant therapy. A total of 23 subjects (49%) were given adjuvant therapy (oral ciprofloxacin 500 mg BD for 5 days); 21 of them (91%) were CL-related cases. All subjects recovered fully, and the average duration of treatment was 3.7 days, with 49% of the subjects resolving by the 3rd day, 38% by the 5th day, and 13% by the 7th day. All subjects reported relief of pain, light sensitivity and redness by the 3rd day, with full visual recovery post-treatment. No adverse drug reactions were recorded.
Conclusion: Our treatment regime demonstrated good clinical outcomes as first-line treatment for corneal ulcers. Corneal ulcers are a common eye condition in Singapore, mainly due to CL wear. Pseudomonas aeruginosa is the most frequent and potentially sight-threatening pathogen involved in CL-related corneal ulcers. Coagulase-negative staphylococci, Staphylococcus aureus, and Streptococcus pneumoniae were seen in non-CL users. All these bacteria exhibit good sensitivity rates to ciprofloxacin and levofloxacin. It is therefore logical, in our study, to use LVFX eye drops with adjuvant oral ciprofloxacin when indicated as first-line treatment for most corneal ulcers. Our patients, both CL-related and non-CL-related, showed good clinical response and full recovery using the above treatment strategy, with full restoration of visual acuity in all cases. Eye-trained primary healthcare practitioners can consider adopting this treatment strategy as first-line treatment in patients with corneal ulcers. This is relevant during the COVID pandemic, when hospitals are overwhelmed with patients, and in regions with limited access to specialist eye care. This strategy would enable early treatment with better clinical outcomes.

Keywords: corneal ulcer, levofloxacin hydrate, treatment strategy, ciprofloxacin

Procedia PDF Downloads 151
147 Atypical Intoxication Due to Fluoxetine Abuse with Symptoms of Amnesia

Authors: Ayse Gul Bilen

Abstract:

Selective serotonin reuptake inhibitors (SSRIs) are commonly prescribed antidepressants used clinically for the treatment of anxiety disorders, obsessive-compulsive disorder (OCD), panic disorders and eating disorders. The first SSRI, fluoxetine (sold under the brand names Prozac and Sarafem, among others), had an adverse-effect profile better than that of any other antidepressant available when it was introduced, because of its selectivity for the serotonin transporter. SSRIs came to be considered almost free of side effects and became widely prescribed; however, questions about their safety and tolerability have emerged with continued use. Most SSRI side effects are dose-related and can be attributed to serotonergic effects such as nausea. Continuous use might trigger adverse effects such as hyponatremia, tremor, nausea, weight gain, sleep disturbance and sexual dysfunction. In cases of intentional overdose, moderate toxicity can be safely observed in the hospital for 24 hours, and mild cases can be safely discharged (if asymptomatic) from the emergency department after 6 to 8 hours of observation, once cleared by Psychiatry. Although fluoxetine is relatively safe in overdose, it might still be cardiotoxic and inhibit platelet secretion, aggregation, and plug formation. Clinical cases of seizures, cardiac conduction abnormalities, and even fatalities associated with fluoxetine ingestion have been reported. While the medical literature strongly suggests that most fluoxetine overdoses are benign, emergency physicians need to remain cognizant that intentional, high-dose fluoxetine ingestion may induce seizures and can even be fatal due to cardiac arrhythmia. Our case is a 35-year-old female patient who was brought to the ER with symptoms of confusion, amnesia, and loss of orientation to time and location after being found wandering the streets in a confused state by police officers, who notified the 112 emergency service.
On laboratory examination, no pathological finding was noted except sinus tachycardia on the EKG and high levels of aspartate transaminase (AST) and alanine transaminase (ALT). Diffusion MRI and computed tomography (CT) of the brain were normal. On physical and sexual examination, no signs of abuse or trauma were found. Test results for narcotics, stimulants and alcohol were negative as well. The presence of dysrhythmia required admission to the intensive care unit (ICU). The patient regained consciousness after 24 hours. It was discovered from her subsequent account that she had been using fluoxetine for post-traumatic stress disorder (PTSD) for 6 months and that she had attempted suicide by taking 3 boxes of fluoxetine after the loss of a parent. She was then transferred to the psychiatric clinic. Our study aims to highlight the need to consider toxicological drug use, in particular the abuse of selective serotonin reuptake inhibitors (SSRIs), which have been widely prescribed due to presumed safety and tolerability, in the diagnosis of patients presenting to the emergency room (ER).

Keywords: abuse, amnesia, fluoxetine, intoxication, SSRI

Procedia PDF Downloads 180
146 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal

Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi

Abstract:

The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern central to theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs and bills. However, merely granting this benefit may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities would low-income consumers have lower bills with PVDG than with the SET, and which consumers in a given city would see increased subsidies, which are currently provided in Brazil both for solar energy and for the social tariff. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of the local distribution utility, the solar radiation of the specific location, etc.). To validate the results, four sensitivity analyses were performed: varying the simultaneity factor between generation and consumption, varying the tariff readjustment rate, setting CAPEX to zero, and exempting the state tax.
The behind-the-meter generation modality proved more promising than the construction of a shared plant. However, although behind-the-meter generation presents better results, it is more complex to adopt because of issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks and roofs that need reinforcement). The shared-plant modality still offers many opportunities, since the risk of investing in such a policy can be mitigated; furthermore, it can be an alternative for reducing the risk of default, as it allows greater control over users and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil, the continuity of the SET presents more economic benefits than its replacement by PVDG. Nevertheless, the proposed policy offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.
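
The core bill comparison behind the economic model can be sketched as follows. This is a deliberately simplified stand-in, not the authors' model: real Brazilian tariffs are tiered and include taxes and availability charges, and every numeric value below is hypothetical.

```python
def monthly_bill_brl(kwh, tariff, discount=0.0):
    """Simplified monthly bill: grid energy times the volumetric
    tariff, with an optional social-tariff discount fraction."""
    return kwh * tariff * (1.0 - discount)

def compare_policies(consumption_kwh, pv_generation_kwh, tariff,
                     social_discount):
    """Compare a social-electricity-tariff (SET) bill with the net bill
    of a behind-the-meter PV system under simple 1:1 net metering.
    Returns (bill_with_set, bill_with_pv) in the same currency unit."""
    bill_set = monthly_bill_brl(consumption_kwh, tariff, social_discount)
    # PV output offsets consumption; credits are capped at consumption.
    offset = min(pv_generation_kwh, consumption_kwh)
    bill_pv = monthly_bill_brl(consumption_kwh - offset, tariff)
    return bill_set, bill_pv

# Hypothetical household: 150 kWh/month consumption, 120 kWh of PV
# output, tariff of R$0.75/kWh, 40% social-tariff discount.
set_bill, pv_bill = compare_policies(150.0, 120.0, 0.75, 0.40)
```

Running this comparison for every municipality, with the local utility tariff and location-specific PV yield substituted in, identifies where PVDG undercuts the SET, which is the first key question the simulation answers.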

Keywords: low income, subsidy policy, distributed energy resources, energy justice

Procedia PDF Downloads 82
145 Disaster Management Approach for Planning an Early Response to Earthquakes in Urban Areas

Authors: Luis Reynaldo Mota-Santiago, Angélica Lozano

Abstract:

Determining appropriate measures to face earthquakes is a challenge for practitioners. In the literature, some analyses consider disaster scenarios while disregarding important field characteristics. Sometimes, software that estimates the number of victims and infrastructure damage is used; other times, historical information from previous events is used, or the scenarios' information is assumed to be available even though this is not usual in practice. Humanitarian operations start immediately after an earthquake strikes, and the first hours of relief efforts are important; local efforts are critical to assess the situation and deliver relief supplies to the victims. One preparation action is prepositioning stockpiles, most of them at central warehouses placed away from damage-prone areas, which requires large facilities and budgets. Usually, the decisions in the first 12 hours after the disaster (the standard relief time, SRT) concern the location of temporary depots and the design of distribution paths. The motivation for this research was the delayed reaction time of the early relief efforts, which caused the late arrival of aid to some areas after the magnitude 7.1 earthquake in Mexico City in 2017. Hence, a preparation approach for planning the immediate response to earthquake disasters is proposed, intended for local governments and considering their capabilities for planning and for responding during the SRT, in order to reduce the start-up time of immediate response operations in urban areas. The first steps are the generation and analysis of disaster scenarios, which allow estimating the relief demand before and in the early hours after an earthquake.
The scenarios can be based on historical data and/or the seismic hazard analysis of an Atlas of Natural Hazards and Risk, as a way to address limited or absent information. The following steps include the decision processes for: a) locating local depots (places for prepositioning stockpiles) and aid-giving facilities as close as possible to risk areas; and b) designing the vehicle paths for aid distribution (from local depots to the aid-giving facilities), which can be used at the beginning of the response actions. This approach speeds up the delivery of aid in the early moments of the emergency, which could reduce the suffering of the victims and allow additional time to integrate a broader and more streamlined response (according to new information) from national and international organizations into these efforts. The proposed approach is applied to two case studies in Mexico City. These areas were affected by the 2017 earthquake and received limited aid response. The approach generates disaster scenarios in a straightforward way and plans a faster early response with a small quantity of stockpiles, which can be managed in the early hours of the emergency by local governments. Considering long-term storage, the estimated quantities of stockpiles require a limited budget to maintain and a small storage space. These stockpiles are also useful for addressing other kinds of emergencies in the area.
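The depot-location step described above can be sketched as a simple greedy p-median heuristic: open depots one at a time, each time choosing the candidate site that most reduces the total demand-weighted distance. All coordinates, demands, and candidate sites below are invented for illustration; the abstract does not specify the actual optimization model used.

```python
import math

# Hypothetical demand points (x, y, estimated relief demand) from a disaster
# scenario; values are illustrative only, not from the Mexico City case studies.
demand_points = [(0, 0, 120), (2, 1, 80), (5, 4, 200), (6, 0, 60), (1, 5, 150)]
candidate_depots = [(1, 1), (4, 3), (6, 1), (2, 4)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def total_weighted_distance(depots):
    # Each demand point is served by its nearest open depot.
    return sum(w * min(dist((x, y), d) for d in depots)
               for x, y, w in demand_points)

def greedy_p_median(candidates, p):
    """Greedily open p depots, each time adding the candidate that most
    reduces the total demand-weighted distance."""
    chosen = []
    remaining = list(candidates)
    for _ in range(p):
        best = min(remaining,
                   key=lambda c: total_weighted_distance(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

depots = greedy_p_median(candidate_depots, p=2)
print(depots, total_weighted_distance(depots))
```

A greedy heuristic like this is fast enough to re-run as scenario information is updated during the first hours, which matches the short-start-up-time goal; exact solvers would be an alternative in the planning phase.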

Keywords: disaster logistics, early response, generation of disaster scenarios, preparation phase

Procedia PDF Downloads 91
144 Microplastic Concentrations in Cultured Oyster in Two Bays of Baja California, Mexico

Authors: Eduardo Antonio Lozano Hernandez, Nancy Ramirez Alvarez, Lorena Margarita Rios Mendoza, Jose Vinicio Macias Zamora, Felix Augusto Hernandez Guzman, Jose Luis Sanchez Osorio

Abstract:

Microplastics (MPs) are among the most frequently reported wastes found in the marine ecosystem, representing one of the greatest risks for organisms that inhabit that environment due to their bioavailability. Such is the case of bivalve mollusks, since they are capable of filtering large volumes of water, which increases the risk of contamination by microplastics through continuous exposure to these materials. This study aims to determine, quantify, and characterize microplastics found in the cultured oyster Crassostrea gigas. We also analyzed whether there are spatio-temporal differences in the microplastic concentration of organisms grown in two bays with quite different human populations. In addition, we wanted to assess the possible impact on humans via consumption of these organisms. Commercial-size organisms (>6 cm length; n = 15) were collected in triplicate from eight oyster farming sites in Baja California, Mexico, during winter and summer. Two sites are located in Todos Santos Bay (TSB), while the other six are located in San Quintin Bay (SQB). Site selection was based on commercial concessions for oyster farming in each bay. The organisms were chemically digested with 30% KOH (w/v) and 30% H₂O₂ (v/v) to remove the organic matter and subsequently filtered using a GF/D filter. All particles considered possible MPs were quantified according to their physical characteristics using a stereoscopic microscope. The type of synthetic polymer was determined with an FTIR-ATR microscope, using both a user-generated and a commercial reference library (Nicolet iN10, Thermo Scientific, Inc.) of IR spectra of plastic polymers (certainty ≥70% for pure polymers; ≥50% for composite polymers). Plastic microfibers were found in all the samples analyzed. However, a low incidence of MP fragments was observed in our study (approximately 9%). The synthetic polymers identified were mainly polyester and polyacrylonitrile.
Polyethylene, polypropylene, polystyrene, nylon, and thermoplastic elastomer were also identified. On average, the content of microplastics in organisms was higher in TSB (0.05 ± 0.01 plastic particles (pp)/g of wet weight) than in SQB (0.02 ± 0.004 pp/g of wet weight) in the winter period. The highest concentration of MPs, found in TSB, coincides with the rainy season in the region, which increases the runoff from streams and wastewater discharges into the bay, as well as with the larger population pressure (>500,000 inhabitants). In contrast, SQB is a mainly rural location, where surface runoff from streams is minimal and there is no wastewater discharge into the bay. During the summer, no significant differences (Mann-Whitney U test; p = 0.484) were observed in the concentration of MPs found in the cultured oysters of TSB and SQB (average: 0.01 ± 0.003 pp/g and 0.01 ± 0.002 pp/g, respectively). Finally, we concluded that the consumption of oysters does not represent a risk for humans, given the low concentrations of MPs found. The concentration of MPs is influenced by variables such as seasonality, the circulation dynamics of the bay, and the existing demographic pressure.
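The Mann-Whitney U comparison between the two bays can be sketched in plain Python; the concentration values below are synthetic stand-ins, not the study's measurements, and the p-value uses a large-sample normal approximation rather than the exact distribution.

```python
import math

# Illustrative MP concentrations (plastic particles per g wet weight) for
# oysters from the two bays in winter; synthetic values, not the study's data.
tsb = [0.06, 0.05, 0.04, 0.05, 0.07, 0.06, 0.04, 0.05]   # Todos Santos Bay
sqb = [0.02, 0.01, 0.03, 0.02, 0.02, 0.01, 0.03, 0.02]   # San Quintin Bay

def mann_whitney(x, y):
    """U statistic by direct pairwise comparison (ties count 0.5), with a
    normal approximation for the two-sided p-value (no tie correction)."""
    n1, n2 = len(x), len(y)
    u1 = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in x for b in y)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

u, p = mann_whitney(tsb, sqb)
print(f"U = {u}, p = {p:.4f}")   # clearly separated samples -> small p
```

With real data one would normally use a library routine (e.g., `scipy.stats.mannwhitneyu`), which also handles exact p-values and tie corrections.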

Keywords: FTIR-ATR, Human risk, Microplastic, Oyster

Procedia PDF Downloads 149
143 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides good accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unmapped, as there are many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. A solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The offered products are compilations of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of this work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, grid densification, and the creation of grid models. The input data are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis.
Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, together with knowledge of the topography of the ocean floor, indirectly inform us about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. Depending on the location of the point, the greater the depth, the smaller the trend of sea level change. The studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
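The interpolation step, filling grid cells between sparse soundings, can be illustrated with a minimal inverse-distance-weighting sketch. The soundings, coordinates, and depths below are invented, and the study may well use different interpolation algorithms; this only shows the general gridding idea.

```python
import math

# Sparse soundings: (lon, lat, depth in metres); synthetic, not survey data.
soundings = [(0.0, 0.0, -3200), (1.0, 0.0, -2950), (0.0, 1.0, -4100),
             (1.0, 1.0, -3600), (0.5, 0.5, -1500)]  # a seamount at the centre

def idw(lon, lat, points, power=2):
    """Inverse-distance-weighted depth estimate at (lon, lat)."""
    num = den = 0.0
    for px, py, d in points:
        r = math.hypot(lon - px, lat - py)
        if r == 0:
            return d                      # exactly on a sounding
        w = r ** -power
        num += w * d
        den += w
    return num / den

# Fill a coarse 5x5 grid spanning the soundings.
grid = [[idw(i / 4, j / 4, soundings) for i in range(5)] for j in range(5)]
print(grid[2][2])   # -> -1500, the centre cell sits exactly on the seamount
```

IDW never extrapolates beyond the data range, which is one reason gridded products also blend in altimetry-derived gravity information where soundings are absent.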

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 56
142 Giving Children with Osteogenesis Imperfecta a Voice: Overview of a Participatory Approach for the Development of an Interactive Communication Tool

Authors: M. Siedlikowski, F. Rauch, A. Tsimicalis

Abstract:

Osteogenesis Imperfecta (OI) is a genetic disorder of childhood onset that causes frequent fractures after minimal physical stress. To date, OI research has focused on medically- and surgically-oriented outcomes with little attention on the perspective of the affected child. It is a challenge to elicit the child’s voice in health care, in other words, their own perspective on their symptoms, but software development offers a way forward. Sisom (Norwegian acronym derived from ‘Si det som det er’ meaning ‘Tell it as it is’) is an award-winning, rigorously tested, interactive, computerized tool that helps children with chronic illnesses express their symptoms to their clinicians. The successful Sisom software tool, that addresses the child directly, has not yet been adapted to attend to symptoms unique to children with OI. The purpose of this study was to develop a Sisom paper prototype for children with OI by seeking the perspectives of end users, particularly, children with OI and clinicians. Our descriptive qualitative study was conducted at Shriners Hospitals for Children® – Canada, which follows the largest cohort of children with OI in North America. Purposive sampling was used to recruit 12 children with OI over three cycles. Nine clinicians oversaw the development process, which involved determining the relevance of current Sisom symptoms, vignettes, and avatars, as well as generating new Sisom OI components. Data, including field notes, transcribed audio-recordings, and drawings, were deductively analyzed using content analysis techniques. Guided by the following framework, data pertaining to symptoms, vignettes, and avatars were coded into five categories: a) Relevant; b) Irrelevant; c) To modify; d) To add; e) Unsure. Overall, 70.8% of Sisom symptoms were deemed relevant for inclusion, with 49.4% directly incorporated, and 21.3% incorporated with changes to syntax, and/or vignette, and/or location. Three additions were made to the ‘Avatar’ island. 
This allowed children to celebrate their uniqueness: ‘Makes you feel like you’re not like everybody else.’ One new island, ‘About Me’, was added to capture children’s worldviews. One new sub-island, ‘Getting Around’, was added to reflect accessibility issues. These issues were related to the children’s independence, their social lives, as well as the perceptions of others. In being consulted as experts throughout the co-creation of the Sisom OI paper prototype, children coded the Sisom symptoms and provided sound rationales for their chosen codes. In rationalizing their codes, all children shared personal stories about themselves and their relationships, insights about their OI, and an understanding of the strengths and challenges they experience on a day-to-day basis. The child’s perspective on their health is a basic right, and allowing it to be heard is the next frontier in the care of children with genetic diseases. Sisom OI, a methodological breakthrough within OI research, will offer clinicians an innovative and child-centered approach to capture this neglected perspective. It will provide a tool for the delivery of health care in the center that established the worldwide standard of care for children with OI.

Keywords: child health, interactive computerized communication tool, participatory approach, symptom management

Procedia PDF Downloads 135
141 Beyond Black Friday: The Value of Collaborative Research on Seasonal Shopping Events and Behavior

Authors: Jasmin H. Kwon, Thomas M. Brinthaupt

Abstract:

There is a general lack of consumer behavior research on seasonal shopping events. Studying these kinds of events is interesting and important for several reasons. First, global shopping opportunities have implications for cross-cultural shopping events and effects on seasonal events in other countries. Second, seasonal shopping events are subject to economic conditions and may wane in popularity, especially with e-commerce options. Third, retailers can expand the success of their seasonal shopping events by taking advantage of cross-cultural opportunities. Fourth, it is interesting to consider how consumers from other countries might take advantage of different countries’ seasonal shopping events. Many countries have seasonal shopping events such as Black Friday. Research on these kinds of events can lead to the identification of cross-cultural similarities and differences in consumer behavior. We compared shopping motivations of college students who did (n=36) and did not (n=81) shop on Cyber Monday. The results showed that the groups did not differ significantly on any of the shopping motivation subscales. The Cyber Monday shoppers reported being significantly more likely to agree than disagree that their online shopping experience was enjoyable and exciting. They were more likely to disagree than agree that their experience was overwhelming. In addition, they agreed that they shopped only for deals, purchased the exact items they wanted, and thought that their efforts were worth it. Finally, they intended to shop again at next year’s Cyber Monday. It appears that there are many positive aspects to online seasonal shopping, independent of one’s typical shopping motivations. Different countries have seasonal events similar to the Black Friday and Cyber Monday shopping holiday (e.g., Boxing Day, Fukubukuro, China’s Singles Day). In Korea, there is increasing interest in taking advantage of U.S. Black Friday and Cyber Monday opportunities. 
Government officials are interested in adapting the U.S. holiday to Korean retailers, essentially recreating the Black Friday/Cyber Monday holiday there. Similarly, the Japanese Fukubukuro ('Lucky Bag') holiday is being adapted by other countries such as Korea and the U.S. International shipping support companies are also emerging that help customers to identify and receive products from other countries. U.S. department stores also provide free shipping on international orders for certain items. As these structural changes are occurring and new options for global shopping emerge, the need to understand the role of shoppers’ motivations becomes even more important. For example, the Cyber Monday results are particularly relevant to the new landscape with e-commerce and cross-cultural opportunities, since many of these events involve e-commerce. Within today’s global market, physical location of a retail store is no longer a limitation to growing one’s market share. From a consumer perspective, it is important to investigate how shopping motivations are related to e-commerce seasonal events. From a retail perspective, understanding the shopping motivations of international customers would help retailers to expand and better tailor their seasonal shopping events beyond the boundaries of their own countries. From a collaborative perspective, research on this topic can include interdisciplinary researchers, including those from fashion merchandising, marketing, retailing, and psychology.
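The group comparison reported above (Cyber Monday shoppers vs. non-shoppers on motivation subscales) can be sketched with Welch's t statistic. The subscale scores below are fabricated to mirror the reported sample sizes (n = 36 and n = 81); they are not the study's data, and a full analysis would also compute a p-value from the t distribution.

```python
import math
from statistics import mean, stdev

# Synthetic 5-point-scale motivation scores for the two groups; the values
# are invented so that the groups barely differ, echoing the reported result.
shoppers = [3.4, 3.1, 3.8, 3.5, 3.2, 3.3] * 6                      # n = 36
non_shoppers = [3.3, 3.5, 3.0, 3.7, 3.4, 3.2, 3.6, 3.1, 3.3] * 9   # n = 81

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom (unequal variances)."""
    vx, vy = stdev(x) ** 2 / len(x), stdev(y) ** 2 / len(y)
    t = (mean(x) - mean(y)) / math.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    return t, df

t, df = welch_t(shoppers, non_shoppers)
print(f"t = {t:.2f}, df = {df:.1f}")   # |t| well below ~2: no significant difference
```

Welch's variant is the safer default here because the two groups have unequal sizes and need not share a variance.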

Keywords: Black Friday, cross-cultural research, Cyber Monday, seasonal shopping behavior

Procedia PDF Downloads 373
140 Determinants of Life Satisfaction in Canada: A Causal Modelling Approach

Authors: Rose Branch-Allen, John Jayachandran

Abstract:

Background and purpose: Canada is a pluralistic, multicultural society with an ethno-cultural composition that has been shaped over time by immigrants and their descendants. Although Canada welcomes these immigrants, many endure hardship and assimilation difficulties. Despite these life hurdles, surveys consistently report high life satisfaction for all Canadians. Most research on life satisfaction/subjective well-being (SWB) has focused on one main determinant and a variety of socio-demographic variables to delineate the determinants of life satisfaction. However, very few studies examine life satisfaction from a holistic approach. In addition, we need to understand the causal pathways leading to life satisfaction and develop theories that explain why certain variables differentially influence the different components of SWB. The aim of this study was to use a holistic approach to construct a causal model and identify major determinants of life satisfaction. Data and measures: This study used data from the General Social Survey, with a sample size of 19,597. The exogenous concepts included age, gender, marital status, household size, socioeconomic status, ethnicity, location, immigration status, religiosity, and neighborhood. The intervening concepts included health, social contact, leisure, enjoyment, work-family balance, quality time, domestic labor, and sense of belonging. The endogenous concept, life satisfaction, was measured by multiple indicators (Cronbach's alpha = .83). Analysis: Several multiple regression models were run sequentially to estimate path coefficients for the causal model. Results: Overall, above-average satisfaction with life was reported for respondents with specific socio-economic, demographic, and lifestyle characteristics.
With regard to exogenous factors, respondents who were female, younger, married, from a high socioeconomic status background, born in Canada, very religious, and who demonstrated a high level of neighborhood interaction had greater satisfaction with life. Similarly, the intervening concepts suggested that respondents had greater life satisfaction if they had better health, more social contact, less time spent on passive leisure activities and more on active leisure activities, more time with family and friends, more enjoyment from volunteer activities, less time spent on domestic labor, and a greater sense of belonging to the community. Conclusions and implications: Our results suggest that a holistic approach is necessary for establishing the determinants of life satisfaction, and that life satisfaction is not merely composed of positive or negative affect; rather, it requires understanding the causal process that produces it. Even though most of our findings are consistent with previous studies, a significant number of causal connections contradict some of the findings in the literature today. We provide possible explanations for the anomalies researchers encounter in studying life satisfaction, and discuss policy implications.
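The sequential-regression estimation of path coefficients can be sketched as follows. The variable names echo the study's concepts, but the data are simulated and the true coefficients (0.4, 0.5, 0.2) are arbitrary choices for the illustration, not GSS estimates.

```python
import numpy as np

# Simulated path model: exogenous -> intervening -> life satisfaction.
rng = np.random.default_rng(42)
n = 1000
ses = rng.normal(size=n)                      # socioeconomic status (exogenous)
religiosity = rng.normal(size=n)              # exogenous
health = 0.4 * ses + rng.normal(size=n)       # intervening concept
satisfaction = 0.5 * health + 0.2 * religiosity + rng.normal(size=n)

def ols_beta(X, y):
    """Least-squares slope coefficients (intercept fitted, then dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

# Stage 1: regress each intervening variable on the exogenous variables.
path_ses_health = ols_beta(np.column_stack([ses, religiosity]), health)
# Stage 2: regress the endogenous outcome on intervening + exogenous variables.
path_to_satisfaction = ols_beta(
    np.column_stack([health, ses, religiosity]), satisfaction)

# Indirect effect of SES via health = beta(ses->health) * beta(health->satisfaction)
indirect = path_ses_health[0] * path_to_satisfaction[0]
print(path_ses_health, path_to_satisfaction, round(indirect, 2))
```

Multiplying coefficients along a path is what lets a causal model separate direct from indirect (mediated) effects, which a single regression cannot do.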

Keywords: causal model, holistic approach, life satisfaction, socio-demographic variables, subjective well-being

Procedia PDF Downloads 332
139 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the location of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example, in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are measured and the results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not currently used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer potential to extend the tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. To this end, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database suitable for developing machine learning models is created. The objective is an intelligent way of determining the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, some areas of the car body parts behave more sensitively than the overall part, indicating that intelligent tolerancing is useful there in order to design and control preceding and succeeding processes more efficiently.
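One minimal, hypothetical reading of data-driven tolerancing is to derive a symmetric tolerance from the historical scatter at each measurement point and flag sensitive regions for denser measurement. The point names, deviation values, and the 3-sigma coverage rule below are illustrative assumptions, not the paper's learned model.

```python
import statistics

# Hypothetical historical deviations (mm) at four measurement points of a
# car body part, as recorded by metrological process monitoring.
history = {
    "P1": [0.02, -0.01, 0.03, 0.00, 0.01],
    "P2": [0.10, 0.07, -0.12, 0.09, 0.11],   # a "sensitive" region
    "P3": [0.01, 0.00, -0.02, 0.01, 0.00],
    "P4": [-0.05, 0.04, 0.06, -0.03, 0.05],
}

def suggest_tolerance(samples, coverage=3.0):
    """Data-driven symmetric tolerance: +/- coverage * sample std deviation,
    a simple stand-in for a learned local tolerance model."""
    return coverage * statistics.stdev(samples)

tolerances = {p: round(suggest_tolerance(v), 3) for p, v in history.items()}
print(tolerances)

# Points whose suggested range exceeds a global default are flagged as
# sensitive and would receive denser measurement points.
sensitive = [p for p, t in tolerances.items() if t > 0.1]
print(sensitive)
```

Replacing the standard-deviation rule with a trained regression model over part geometry would be the step the paper describes: predicting tolerance ranges for measurement points on unseen geometries.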

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 88
138 Ternary Organic Blend for Semitransparent Solar Cells with Enhanced Short Circuit Current Density

Authors: Mohammed Makha, Jakob Heier, Frank Nüesch, Roland Hany

Abstract:

Organic solar cells (OSCs) have made rapid progress and currently achieve power conversion efficiencies (PCE) of over 10%. OSCs have several merits over other direct light-to-electricity generating cells and can be processed at low cost from solution on flexible substrates over large areas. Moreover, combining organic semiconductors with transparent and conductive electrodes allows for the fabrication of semitransparent OSCs (SM-OSCs). For SM-OSCs, the challenge is to achieve a high average visible transmission (AVT) while maintaining a high short-circuit current (Jsc). Typically, the Jsc of SM-OSCs is smaller than when an opaque metal top electrode is used, because light not absorbed during the first transit through the active layer and the transparent electrode is forward-transmitted out of the device. Recently, OSCs using a ternary blend of organic materials have received attention; this strategy is pursued to extend light harvesting over the visible range. However, it is a general challenge to manipulate the performance of ternary OSCs in a predictable way, because many key factors affect charge generation and extraction in ternary solar cells. Consequently, the device performance is affected by the compatibility between the blend components and the resulting film morphology, the energy levels and bandgaps, and the concentration of the guest material and its location in the active layer. In this work, we report on a solvent-free lamination process for the fabrication of efficient and semitransparent ternary-blend OSCs. The ternary blend was composed of PC70BM and the electron donors PBDTTT-C and an NIR-absorbing cyanine dye (Cy7T). Using an opaque metal top electrode, a PCE of 6% was achieved for the optimized binary polymer:fullerene blend (AVT = 56%). However, the PCE dropped to ~2% when the active film thickness was decreased to 30 nm to increase the AVT to 75%.
Therefore, we resorted to the ternary blend and measured, for non-transparent cells, a PCE of 5.5% when using an active polymer:dye:fullerene (0.7:0.3:1.5 wt:wt:wt) film of 95 nm thickness (AVT = 65% when omitting the top electrode). In a second step, the optimized ternary blend was used for the fabrication of SM-OSCs. We used a plastic/metal substrate with a light transmission of over 90% as a transparent electrode, applied via a lamination process. The interfacial layer between the active layer and the top electrode was optimized in order to improve the charge collection and the contact with the laminated top electrode. We demonstrated a PCE of 3% with an AVT of 51%. The parameter space for ternary OSCs is large, and it is difficult to find the best concentration ratios by trial and error. A rational approach for device optimization is the construction of a ternary-blend phase diagram. We discuss our attempts to construct such a phase diagram for the PBDTTT-C:Cy7T:PC70BM system via a combination of Cy7T-selective solvents and atomic force microscopy. From the ternary diagram, morphologies suitable for efficient light-to-current conversion can be identified. We compare experimental OSC data with these predictions.

Keywords: organic photovoltaics, ternary phase diagram, ternary organic solar cells, transparent solar cell, lamination

Procedia PDF Downloads 242
137 School Accidents in Educational Establishment in Tunisia: A Five Years Retrospective Survey in the Governorate of Mahdia

Authors: Lamia Bouzgarrou, Amira Omrane, Leila Mrabet, Taoufik Khalfallah

Abstract:

Background and aims: School accidents are one of the leading causes of morbidity and mortality among pupils and students. Indeed, they may cause a high number of lost school days, heavy emotional and physical disabilities, and financial costs for the victims and their families. This study aims to evaluate the annual incidence of school accidents in the central Tunisian governorate of Mahdia and to identify the epidemiological profile of victims and the risk factors of these accidents. Methods: A retrospective study was conducted over a period of five school years, focusing on school accidents that occurred in public educational institutions (primary, basic, secondary, and university) in the governorate of Mahdia (area: 2,966 km²; population in 2014: 410,812). All accidents declared to the only official insurer for this type of injury (MASU: Mutual School and University Accidents) and initially managed at the University Hospital of Mahdia were included. Data were collected from the MASU reporting forms and the medical records of the emergency and other specialized hospital departments. Results: With 3,248 identified victims, the annual incidence of school accidents was 0.69 per 100 pupils and students per year. The average age of the victims was 14.51 ± 0.059 years and the sex ratio was 1.58. Pupils aged between 12 and 15 years accounted for 46.7% of the identified accidents. The practice of sports was the most frequent circumstance of these accidents (76.2%). In 56.58% of cases, falls were the leading mechanism. Bruises and fractures were the most frequent lesions (32.43% and 30.51%). Serious school accidents were noted in 28% of cases, with hospitalization in 2.27% of them. The average number of lost school days was 12.23 ± 1.73. Accidents occurring during sports or leisure activities were significantly more serious (p = 0.021). Furthermore, the frequency of hospitalization was significantly higher among boys (2.81% vs.
1.43%; p = 0.035), among students ≤11 years (p = 0.008), and following crush trauma (p < 0.001). In addition, surgical interventions were statistically more frequent among male victims (p < 0.001), in accidents occurring during physical education sessions (p < 0.001), in those associated with falls (p < 0.001) and crush mechanisms (p = 0.002), and for injuries affecting the lower limbs (p < 0.001). Multivariate analysis concluded that the severity of a school accident is correlated with the activity practiced at the time of the trauma and the geographical location of the school. Conclusion: Children and adolescents are among the groups most vulnerable to injury, with a risk of permanent disability mainly related to the disruption of the growth process and physiological limitations. Our five-year study showed a high incidence of school accidents among children and adolescents, with a considerable rate of severe injuries. In any community, the promotion of adolescents' and children's health is an important indicator of the public health level. Thus, it is important to develop a multidisciplinary prevention strategy against school accidents, based on safety and security rules and adapted to the specificity of our context.

Keywords: children and adolescents, children health, injuries and disability, school accident

Procedia PDF Downloads 95
136 The Future Control Rooms for Sustainable Power Systems: Current Landscape and Operational Challenges

Authors: Signe Svensson, Remy Rey, Anna-Lisa Osvalder, Henrik Artman, Lars Nordström

Abstract:

The electric power system is undergoing significant changes. As a result, its operation and control are becoming partly modified, more multifaceted, and more automated, and supplementary operator skills might be required. This paper discusses emerging operational challenges in future power system control rooms, posed by the evolving landscape of sustainable power systems, driven in turn by the shift towards electrification and renewable energy sources. Based on a literature review, followed by interviews and a comparison with related domains that share similar characteristics, a descriptive analysis was performed from a human factors perspective. The analysis is meant to identify trends, relationships, and challenges. A power control domain taxonomy includes a temporal domain (planning and real-time operation) and three operational domains within the power system (generation, switching, and balancing). Within each operational domain, there are different control actions, either in the planning stage or in real-time operation, that affect the overall operation of the power system. In addition to the temporal dimension, the control domains are divided in space among a multitude of different actors distributed across many different locations. A control room is a central location where different types of information are monitored and controlled, alarms are responded to, and deviations are handled by the control room operators. The operators' competencies, teamwork skills, and team shift patterns, as well as the control system designs, are all important factors in ensuring efficient and safe electricity grid management. As the power system evolves with sustainable energy technologies, challenges arise. Questions are raised as to whether the operators' tacit knowledge, experience, and operating skills of today are sufficient to make constructive decisions to solve modified and new control tasks, especially during disturbed operations or abnormalities.
Which new skills need to be developed in planning and real-time operation to provide efficient generation and delivery of energy through the system? How should the user interfaces be developed to assist operators in processing the increasing amount of information? Are some skills at risk of being lost when the systems change? How should the physical environment and collaborations between different stakeholders within and outside the control room develop to support operator control? To conclude, the system change will provide many benefits related to electrification and renewable energy sources, but it is important to address the operators’ challenges with increasing complexity. The control tasks will be modified, and additional operator skills are needed to perform efficient and safe operations. Also, the whole human-technology-organization system needs to be considered, including the physical environment, the technical aids and the information systems, the operators’ physical and mental well-being, as well as the social and organizational systems.

Keywords: operator, process control, energy system, sustainability, future control room, skill

Procedia PDF Downloads 52
135 Diagnosis, Treatment, and Prognosis in Cutaneous Anaplastic Lymphoma Kinase-Positive Anaplastic Large Cell Lymphoma: A Narrative Review Apropos of a Case

Authors: Laura Gleason, Sahithi Talasila, Lauren Banner, Ladan Afifi, Neda Nikbakht

Abstract:

Primary cutaneous anaplastic large cell lymphoma (pcALCL) accounts for 9% of all cutaneous T-cell lymphomas. pcALCL is classically characterized as a solitary papulonodule that often enlarges, ulcerates, and can be locally destructive, but it generally exhibits an indolent course, with 5-year survival estimated to be 90%. Distinguishing pcALCL from systemic ALCL (sALCL) is essential, as sALCL confers a poorer prognosis, with average 5-year survival of 40-50%. Although extremely rare, there have been several cases of ALK-positive ALCL diagnosed on skin biopsy without evidence of systemic involvement, which poses several challenges in the classification, prognostication, treatment, and follow-up of these patients. Objectives: We present a case of cutaneous ALK-positive ALCL without evidence of systemic involvement, together with a narrative review of the literature, to further characterize ALK-positive ALCL limited to the skin as a distinct variant with a unique presentation, history, and prognosis. A 30-year-old woman presented for evaluation of an erythematous-violaceous papule present on her right chest for two months. With the development of multifocal disease and persistent lymphadenopathy, a bone marrow biopsy and lymph node excisional biopsy were performed to assess for systemic disease. Both biopsies were unrevealing. The patient was counseled on pursuing systemic therapy consisting of Brentuximab, Cyclophosphamide, Doxorubicin, and Prednisone, given the concern for sALCL. Apropos of the patient, we searched the English literature for clinically evident cutaneous ALK-positive ALCL cases with and without systemic involvement. Risk factors, such as tumor location, number, size, ALK localization, ALK translocations, and recurrence, were evaluated in cases of cutaneous ALK-positive ALCL. The majority of patients with cutaneous ALK-positive ALCL did not progress to systemic disease. 
The majority of cases that progressed to systemic disease in adults had recurring skin lesions and cytoplasmic localization of ALK. ALK translocations did not influence disease progression. Mean time to disease progression was 16.7 months, and significant mortality (50%) was observed in the cases that progressed to systemic disease. Pediatric cases did not exhibit a trend similar to adult cases. In both the adult and pediatric cases, a subset of cutaneous-limited ALK-positive ALCL cases was treated with chemotherapy. None of the cases treated with chemotherapy progressed to systemic disease. Apropos of an ALK-positive ALCL patient with clinically cutaneous-limited disease in the histologic presence of systemic markers, we discussed the literature data, highlighting the crucial issues related to developing a clinical strategy for approaching this rare subtype of ALCL. Physicians need to be aware of the overall spectrum of ALCL, including cutaneous-limited disease, systemic disease, disease with NPM-ALK translocation, disease with ALK and EMA positivity, and disease with skin recurrence.

Keywords: anaplastic large cell lymphoma, systemic, cutaneous, anaplastic lymphoma kinase, ALK, ALCL, sALCL, pcALCL, cALCL

Procedia PDF Downloads 58
134 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada

Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone

Abstract:

Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now accounts for both greater effort and greater retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and as having low predictive power, as it is often limited to sampling fragments of fluid, temporally dynamic fisheries. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. The study involved continuous monitoring at high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, as well as the proportion of the fishery management area represented by the cameras. The study then demonstrated how camera monitoring can substantially enhance the temporal resolution of a fishery through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also highlighted considerable off-season fishing activity, currently unaccounted for in the existing monitoring framework. 
These results demonstrate several distinct applications of autonomous cameras for providing enhanced detail unavailable in the current monitoring framework, each of which has important implications for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.
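The dataset-thinning step described above can be illustrated with a small sketch. The study itself used a deep learning vessel detector and a custom annotation tool; the frame-differencing pre-filter below is only a hypothetical stand-in (all names and thresholds are assumptions), showing how a season of over a million images might be cheaply thinned before running a far costlier detector.

```python
import numpy as np

def thin_by_motion(frames, threshold=5.0):
    """Keep only frames whose mean absolute pixel difference from the last
    kept frame exceeds `threshold` — a cheap pre-filter applied before a
    (much costlier) deep vessel detector. Illustrative, not the study's tool."""
    kept = [0]  # always keep the first frame as the reference
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float)
                              - frames[kept[-1]].astype(float)))
        if diff > threshold:
            kept.append(i)
    return kept

# Tiny synthetic sequence: ten identical frames, then one with a vessel-like change.
frames = [np.zeros((8, 8), dtype=np.uint8)] * 10 + [np.full((8, 8), 50, dtype=np.uint8)]
kept = thin_by_motion(frames)  # only the first and the changed frame survive
```

In practice the surviving frames, not the full archive, would be passed to the detection and annotation stages.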

Keywords: cameras, monitoring, recreational fishing, stock assessment

Procedia PDF Downloads 88
133 Linear Evolution of Compressible Görtler Vortices Subject to Free-Stream Vortical Disturbances

Authors: Samuele Viaro, Pierre Ricco

Abstract:

Görtler instabilities arise in boundary layers from an unbalance between pressure and centrifugal forces caused by concave surfaces. Their spatial streamwise evolution influences transition to turbulence. It is therefore important to understand even the early stages, where perturbations, still small, grow linearly and could be controlled more easily. This work presents a rigorous theoretical framework for compressible flows using the linearized unsteady boundary region equations, in which only the streamwise pressure gradient and streamwise diffusion terms are neglected from the full governing equations of fluid motion. Boundary and initial conditions are imposed through an asymptotic analysis in order to account for the interaction of the boundary layer with free-stream turbulence. The resulting parabolic system is discretized with a second-order finite difference scheme. Realistic flow parameters are chosen from wind tunnel studies performed at supersonic and subsonic conditions. The Mach number ranges from 0.5 to 8, with two different radii of curvature, 5 m and 10 m, frequencies up to 2000 Hz, and vortex spanwise wavelengths from 5 mm to 20 mm. The evolution of the perturbation flow is shown through velocity, temperature, and pressure profiles relatively close to the leading edge, where non-linear effects can still be neglected, and through the growth rate. Results show that a global stabilizing effect exists with increasing Mach number, frequency, spanwise wavenumber and radius of curvature. In particular, at high Mach numbers, curvature effects are less pronounced and thermal streaks become stronger than velocity streaks. This increase of temperature perturbations saturates at approximately Mach 4 and is limited to the early stage of growth, near the leading edge. In general, Görtler vortices evolve closer to the surface than in a flat plate scenario, but their location shifts toward the edge of the boundary layer as the Mach number increases. 
In fact, a jet-like behavior appears for steady vortices having small spanwise wavelengths (less than 10 mm) at Mach 8, creating a region of unperturbed flow close to the wall. A similar response is also found at the highest frequency considered for a Mach 3 flow. Larger vortices are found to have a higher growth rate but are less influenced by the Mach number. An eigenvalue approach is also employed to study the amplification of the perturbations sufficiently far downstream from the leading edge. These eigenvalue results are compared with the ones obtained through the initial-value approach with inhomogeneous free-stream boundary conditions. All of the parameters studied here have a significant influence on the evolution of the instabilities for the Görtler problem, which is indeed highly dependent on initial conditions.
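Because the boundary region equations are parabolic in the streamwise direction, they can be marched downstream from a leading-edge initial condition. The sketch below is only a minimal analogue of that numerical strategy, not the compressible equations of the paper: it marches the model equation u_x = u_yy with an implicit step, using the same kind of second-order central difference in the wall-normal direction, and checks the result against the analytic decay of a single mode.

```python
import numpy as np

# Model parabolic problem u_x = u_yy on y in [0, 1], u = 0 at both walls,
# marched in x by backward Euler — a toy stand-in for downstream marching
# of the linearized boundary region equations.
Ny, Nx, dx = 51, 100, 1e-3
dy = 1.0 / (Ny - 1)
y = np.linspace(0.0, 1.0, Ny)
u = np.sin(np.pi * y)  # leading-edge profile (a single eigenmode)

# Second-order central difference matrix for u_yy on the interior points.
n = Ny - 2
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dy**2
A = np.eye(n) - dx * D2  # implicit (backward Euler) system matrix

for _ in range(Nx):  # march downstream, solving a linear system per station
    u[1:-1] = np.linalg.solve(A, u[1:-1])

# Analytic amplitude at x = Nx*dx is exp(-pi^2 x); compare at mid-channel.
x = Nx * dx
err = abs(u[Ny // 2] - np.exp(-np.pi**2 * x))  # small discretization error
```

A production solver would use a banded or tridiagonal solve per station rather than a dense one, but the marching structure is the same.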

Keywords: compressible boundary layers, Görtler instabilities, receptivity, turbulence transition

Procedia PDF Downloads 231
132 Evaluation of the Incidence of Mycobacterium Tuberculosis Complex Associated with Soil, Hayfeed and Water in Three Agricultural Facilities in Amathole District Municipality in the Eastern Cape Province

Authors: Athini Ntloko

Abstract:

Mycobacterium bovis and other species of the Mycobacterium tuberculosis complex (MTBC) can result in a zoonotic infection known as bovine tuberculosis (bTB). The MTBC has members that may infect an extensive range of hosts, including wildlife. Diverse wild species are known to cause disease in domestic livestock and are acknowledged as TB reservoirs. bTB risk factors have consequently been a major subject of study worldwide, with some studies focusing on particular categories of risk factors such as wildlife and herd management. The aim of this study was to determine the incidence of Mycobacterium tuberculosis complex associated with soil, hayfeed and water. Questionnaires were administered to thirty (30) smallholding farm owners in two villages (kwaMasele and Qungqwala) and three (3) commercial farms (Fort Hare dairy farm, Middledrift dairy farm and Seven Star dairy farm). Detection of M. tuberculosis complex was achieved by polymerase chain reaction (PCR) using primers for IS6110, whereas genotypic drug resistance mutations were detected using Genotype MTBDRplus assays. Nine percent (9%) of respondents had more than 40 cows in their herd, while 60% reported between 10 and 20 cows. The relationship between farm size and vaccination for TB varied from a high of forty-one percent (41%) to a low of five percent (5%). Ninety-one percent (91%) of respondents knew about the relationship between TB cases and cattle location. Approximately fifty-one percent (51%) of respondents had knowledge of wildlife access to the farms. The relationship between cattle imports and farm size ranged from nine percent (9%) to thirty-five percent (35%). Cattle sickness in relation to farm size varied from a high of forty-three percent (43%) to a low of three percent (3%), while thirty-three percent (33%) of respondents had knowledge of health management. 
Forty-eight percent (48%) of respondents had knowledge of the occurrence of TB infections on farms. The frequency of DNA isolation from samples ranged from a high of forty-five percent (45%) in water to a low of twenty-two percent (22%) in soil. Fort Hare dairy farm had the highest number of positive samples, forty-four percent (44%) from water, whereas Middledrift dairy farm had the lowest number of positives from water, seventeen percent (17%). Twelve (22%) out of 55 isolates showed resistance to both INH and RIF, that is, multi-drug resistance (MDR), and nine percent (9%) were sensitive to either INH or RIF. Mutation frequencies in the rpoB gene ranged from a high of 58% to a low of 23%. Fifty-seven percent (57%) of samples showed a S315T1 mutation in the katG gene, while only 14% possessed a S531L mutation. The highest frequency of inhA mutations was detected in T8A (80%) and the lowest in A16G (17%). The results of this study reveal that risk factors for bTB in cattle and dairy farm workers abound in the Eastern Cape of South Africa, with the possibility of widespread dissemination of multidrug-resistant determinants in MTBC from the environment.

Keywords: hayfeed, isoniazid, multi-drug resistance, mycobacterium tuberculosis complex, polymerase chain reaction, rifampicin, soil, water

Procedia PDF Downloads 310
131 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to take effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, they often struggle to detect stealthy and camouflaged insurgents. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most similar systems offer only two types of intrusion detection capability, viz., human or vehicle. In our work, we could categorize further and identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging and vehicular movements. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a distance of 15 meters each. The signals received from these geophones are then processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, Logistic Regression, Recursive Feature Elimination, Chi-2 and Pearson Ratio, were used to identify the best features for training the machine learning models. The trained models were developed using algorithms such as the supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events, and results were evaluated with 831 test events. 
It was observed that using the weighted ensemble voting increased the efficiency of predictions. In this study we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low costs, hidden deployment and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors for creating better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study the virtual fence achieved an intruder detection efficiency of over 97%.
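The weighted ensemble voting described above can be sketched with scikit-learn. The data here is synthetic (the study's geophone features are not public), and weighting each member by its validation accuracy is one plausible rule, an assumption rather than the authors' stated scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the selected seismic features; the study used
# 1940 training and 831 test events across several intrusion categories.
X, y = make_classification(n_samples=1200, n_features=12, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

members = [
    ("svm", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("logreg", LogisticRegression(max_iter=2000)),
    ("nb", GaussianNB()),
]

# Assumed weighting rule: each member's accuracy on a held-out validation
# split (kept separate from the test set to avoid leakage).
X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=0)
weights = [clf.fit(X_tr, y_tr).score(X_val, y_val) for _, clf in members]

ensemble = VotingClassifier(members, voting="soft", weights=weights)
ensemble.fit(X_train, y_train)
accuracy = ensemble.score(X_test, y_test)
```

Soft voting averages the members' class probabilities with these weights, so a stronger member pulls the combined prediction toward its own output.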

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 43
130 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data

Authors: Andrea Ghermandi

Abstract:

Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determining their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits of natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. These boundaries are used as input for the Application Program Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users to each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized Major Axis (SMA) regression is found to outperform Ordinary Least Squares regression and count data models in terms of predictive power insofar as standard verification statistics, such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE), are concerned. 
The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information regarding the home locations of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. This information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and provision of cultural ecosystem services through public use in a multi-functional approach, compatibly with the need to protect public health.
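Standardized major axis regression, unlike ordinary least squares, treats the two variables symmetrically: the slope is the ratio of their standard deviations, signed by the correlation, and the line passes through the centroid. A minimal sketch (the numbers below are illustrative, not the study's data, which may also be fit on transformed counts):

```python
import numpy as np

def sma_fit(x, y):
    """Standardized (reduced) major axis fit: slope = sign(r) * sd(y)/sd(x),
    intercept chosen so the line passes through the centroid (mean x, mean y)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Illustrative calibration: predict annual visitors from photo-user-days
# (synthetic numbers, not the 62 surveyed sites).
photo_user_days = np.array([12.0, 40.0, 75.0, 130.0, 210.0])
observed_visitors = np.array([900.0, 3100.0, 5600.0, 10400.0, 16100.0])
m, b = sma_fit(photo_user_days, observed_visitors)
predicted = m * photo_user_days + b
```

Because SMA minimizes a symmetric residual rather than vertical error only, it is the conventional choice when both variables are measured with error, as is the case for both social media proxies and visitor surveys.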

Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds

Procedia PDF Downloads 157
129 Osteosuture in Fixation of Displaced Lateral Third Clavicle Fractures: A Case Report

Authors: Patrícia Pires, Renata Vaz, Bárbara Teles, Marco Pato, Pedro Beckert

Abstract:

Introduction: The management of lateral third clavicle fractures can be challenging due to the difficulty of distinguishing subtle variations in the fracture pattern, which may be suggestive of potential fracture instability. They occur most often in men between 30 and 50 years of age; in individuals over 70 years of age, the distribution is equal between men and women. These fractures account for 10%–30% of all clavicle fractures and roughly 30%–45% of all clavicle nonunions. Lateral third clavicle fractures may be treated conservatively or surgically, and there is no gold standard, although the risk of nonunion or pseudoarthrosis favours the recommendation of surgical treatment when these fractures are unstable. There are many strategies for surgical treatment, including locking plates, hook plate fixation, coracoclavicular fixation using suture anchors, devices or screws, tension band fixation with suture or wire, transacromial Kirschner wire fixation and arthroscopically assisted techniques. When taking the hardware into consideration, we must not disregard that obtaining adequate lateral fixation of small fragments is a difficult task, and plates are more often associated with local irritation. The aim of appropriate treatment is to ensure fracture healing and a rapid return to preinjury activities of daily living but, as explained, definitive treatment strategies have not been established, and the variety of techniques available adds to the discussion of this topic. Methods and Results: We present a clinical case of a 43-year-old man with the diagnosis of a lateral third clavicle fracture (Neer IIC) sustained in a fall on his right shoulder from a bicycle. He was operated on three days after the injury; through temporary K-wire fixation and indirect reduction using a ZipTight, he underwent osteosynthesis with an interfragmentary figure-of-eight tension band with polydioxanone suture (PDS). 
Two weeks later, there was good alignment. He kept the sling until 6 weeks post-op, avoiding effort. At 7 weeks post-op, there was still good alignment, and physiotherapy exercises were started. After 10 months, he had no limitation in mobility or pain and returned to work with complete recovery of strength. Conclusion: Some distal clavicle fractures may be treated conservatively, but it is widely accepted that unstable fractures require surgical treatment to obtain superior clinical outcomes. In the clinical case presented, the authors chose an osteosuture technique due to the fracture pattern and its location. Since there isn't a consensus on the preferred fixation method, it is important for surgeons to be skilled in various techniques and to decide with their patient which approach is most appropriate, weighing the risk-benefit of each method. For instance, with the suture technique used, there is no wire migration or breakage, and it doesn't require reoperation for hardware removal; there is also less tissue exposure, since it requires a smaller approach in comparison to plate fixation, and it avoids the cuff tears associated with hook plates. The good clinical outcome in this case report serves the purpose of expanding the consideration of this method as a therapeutic option.

Keywords: lateral third, clavicle, suture, fixation

Procedia PDF Downloads 43
128 Hydrocarbons and Diamondiferous Structures Formation in Different Depths of the Earth Crust

Authors: A. V. Harutyunyan

Abstract:

Investigations of rocks at high pressures and temperatures have revealed the intervals over which seismic wave velocities and density change, as well as some processes taking place in the rocks. In serpentinized rocks, abrupt changes in seismic wave velocities and density have been recorded as a consequence of dehydration. Hydrogen-bearing components are released, which combine with carbon-bearing components; as a result, hydrocarbons are formed. The investigated samples are melted. Geofluids and hydrocarbons then migrate into the upper horizons of the Earth's crust along deep faults, where their differentiation and accumulation take place in the jointed rocks of the faults and in layers with collecting properties. Under the majority of hydrocarbon deposits, magmatic centers and deep faults are recorded at a certain depth. The investigation results on serpentinized rocks, together with numerous geological-geophysical factual data, suggest that hydrocarbons are mainly formed both in the offshore parts of the oceans and at different depths of the continental crust. Experiments have also shown that the dehydration of serpentinized rocks is accompanied by an explosion, with an instantaneous increase in pressure and temperature and melting of the studied rocks. According to numerous publications, hydrocarbons and diamonds are formed in the upper part of the mantle, at depths of 200-400 km, and as a consequence of geodynamic processes, they rise to the upper horizons of the Earth's crust through narrow channels. However, the genesis of metamorphogenic diamonds and of the diamonds found in lava streams formed within the Earth's crust remains unclear. Since super-high pressures and temperatures arise at dehydration, it is assumed that diamond crystals are formed from carbon-containing components present in the dehydration zone. It can be assumed that, besides the explosion at dehydration, secondary explosions of the released hydrogen take place. 
The process is naturally accompanied by seismic phenomena, causing earthquakes of different magnitudes on the surface. As for the diamondiferous kimberlites, it is well known that the majority of them are located within ancient shields and platforms, not necessarily connected with deep faults. The kimberlites are formed where dehydrated masses are located at shallow depth in the Earth's crust. Kimberlites are younger than their ancient host rocks, which contain serpentinized basites and ultrabasites, relicts of the paleo-oceanic crust. Sometimes, diamonds containing water and hydrocarbons are found, indicating their simultaneous genesis. So, according to the new concept put forward, geofluids, hydrocarbons and diamonds are formed simultaneously from serpentinized rocks as a consequence of their dehydration at different depths of the Earth's crust. Based on the concept we propose, we suggest discussing the following: the genesis of gigantic hydrocarbon deposits located in the offshore areas of the oceans (North American, Gulf of Mexico, Cuanza-Cameroonian, East Brazilian, etc.) as well as in the continental parts of different continents (Canadian-Arctic, Caspian, East Siberian, etc.); and the genesis of metamorphogenic diamonds and diamonds in lava streams (Guinea-Liberian, Kokchetav, Canadian, Kamchatka-Tolbachik, etc.).

Keywords: dehydration, diamonds, hydrocarbons, serpentinites

Procedia PDF Downloads 315
127 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens

Authors: R. Tamborrino, F. Rinaudo

Abstract:

Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge able to provide a guide for the interpretation of historical phenomena. Digital conversion and management of those documents make it possible to add other sources in a unique and coherent model that permits the intersection of different data, able to open new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g. cadastres, censuses, etc.). The geographic localisation of that information inside cartographic supports allows for the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. For all these kinds of analysis, GIS platforms today represent the most appropriate answer. The design of the related databases is the key to realising the ad-hoc instrument to facilitate the analysis and the intersection of data of different origins. Moreover, GIS has become the digital platform where it is possible to add other kinds of data visualisation. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories realised just prior to WWI provides the opportunity to test the potentialities of GIS platforms for the analysis of urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. GIS is shaped in a creative way, linking different sources and digital systems, aiming to create a new type of platform conceived as an interface integrating different kinds of data visualisation. 
The data processing allows linking this information to the urban space and also visualising the growth of the city at that time. The sources related to urban landscape development in that period are of different natures. The emerging necessity to build, enlarge, modify and join different buildings to boost the industrial activities, in line with their fast development, is recorded in the official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, which are reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data offer the possibility, first, to re-build the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections and elevations) that show the previous and the planned situations. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different kinds of research and targets, such as economic, biographical, architectural, or demographic studies. By superimposing a layer of the present city, the past meets the present as industrial heritage, and people meet urban history.

Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities

Procedia PDF Downloads 171
126 Culvert Blockage Evaluation Using Australian Rainfall And Runoff 2019

Authors: Rob Leslie, Taher Karimian

Abstract:

The blockage of cross drainage structures is a risk that needs to be understood and managed or mitigated through design. A blockage is a random event, influenced by site-specific factors, which needs to be quantified for design. Under- and overestimation of blockage can have major impacts on flood risk and on the cost associated with drainage structures. The importance of this matter is heightened for projects located within sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located within floodplains, where blockage factors can influence flooding upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modeling has been the subject of extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying the method to a transport corridor brownfield upgrade case study in New South Wales. The results of applying the method are also validated against asset data and maintenance records. ARR 2019 – Book 6, Chapter 6 includes advice and an approach for estimating the blockage of bridges and culverts. This paper concentrates specifically on the blockage of cross drainage structures. The method has been developed to estimate the blockage level for culverts affected by sediment or debris due to flooding. The objective of the approach is to evaluate a numerical blockage factor that can be utilized in a hydraulic assessment of cross drainage structures. The project included an assessment of over 200 cross drainage structures. In order to estimate a blockage factor for use in the hydraulic model, a process was developed that considers the qualitative factors (e.g., debris type, debris availability) and site-specific hydraulic factors that influence blockage. 
A site rating associated with the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic model results (i.e., flow velocity, flow depth) and the qualitative factors at each crossing were fed into a spreadsheet in which the design blockage level for each cross drainage structure was determined from the condition relating the Inlet Clear Width, L10 (the average length of the longest 10% of the debris reaching the site), and the Adjusted Debris Potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The results of this assessment demonstrate that the blockage factors estimated at each crossing location using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated here against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
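The spreadsheet logic described above (design blockage level from the Inlet Clear Width relative to L10, conditioned on the Adjusted Debris Potential class) reduces to a banded lookup. A minimal Python sketch follows; the band thresholds and blockage percentages are illustrative placeholders only, not the tabulated ARR 2019 values, which should be taken directly from Book 6, Chapter 6.

```python
def design_blockage_level(inlet_clear_width, l10, debris_potential):
    """Illustrative blockage-level lookup in the style of ARR 2019 Book 6, Ch. 6.

    inlet_clear_width and l10 are in metres; debris_potential is the
    Adjusted Debris Potential class ("high", "medium" or "low").
    The numeric levels below are placeholders, not the ARR 2019 table values.
    """
    # Band the structure by inlet clear width relative to L10
    if inlet_clear_width < l10:
        band = "narrow"        # inlet narrower than the longest debris
    elif inlet_clear_width < 3 * l10:
        band = "intermediate"
    else:
        band = "wide"
    # Placeholder design blockage levels (fraction of waterway area blocked)
    table = {
        ("narrow", "high"): 1.00, ("narrow", "medium"): 0.50, ("narrow", "low"): 0.25,
        ("intermediate", "high"): 0.50, ("intermediate", "medium"): 0.25, ("intermediate", "low"): 0.10,
        ("wide", "high"): 0.25, ("wide", "medium"): 0.10, ("wide", "low"): 0.00,
    }
    return table[(band, debris_potential.lower())]
```

Applied across the 200-plus structures in the study, a function of this shape reproduces the spreadsheet behavior and makes the assessment easy to rerun whenever the hydraulic inputs change.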

Keywords: ARR 2019, blockage, culverts, methodology

Procedia PDF Downloads 308
125 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, uncertainty about future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable because of the large number of variables and the nonlinear dependencies involved. Here we develop a complex-systems approach to optimizing logistics networks based on dimensional reduction methods and apply it to a case study of a manufacturing company. To characterize the complexity of customer behavior, we define a "customer space" in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: the direct and the indirect strategy. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer by train and then "last-mile" delivered by truck when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them; in general, specific company policies determine where that boundary lies. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and of temporary storage in a set of specified external warehouses.
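Once the transportation and storage components have been estimated from such a model, the per-customer strategy choice reduces to a cost comparison. A minimal sketch under that assumption; the function and cost-parameter names are hypothetical, and the paper's actual cost model is considerably more detailed:

```python
def choose_strategy(direct_truck_cost, rail_cost, last_mile_cost, storage_cost):
    """Pick the cheaper delivery strategy for one customer.

    direct_truck_cost: estimated cost of shipping straight from the plant.
    rail_cost + last_mile_cost + storage_cost: components of the indirect
    route via an external warehouse (all hypothetical inputs).
    """
    indirect_total = rail_cost + last_mile_cost + storage_cost
    return "direct" if direct_truck_cost <= indirect_total else "indirect"
```

Evaluating this comparison over a grid of (distance, demand frequency) values is what traces out the boundary between the two strategy regions in the customer space.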
The customer space gives an aggregate view of customer behaviors and characteristics, allowing policymakers to compare customers and to develop strategies based on the aggregate behavior of the system as a whole. In addition to optimizing over existing facilities, we use the customer logistics data and the k-means algorithm to propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment (box truck, bulk truck, and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
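The warehouse-proposal step can be sketched as a plain Lloyd's-iteration k-means over the two customer-space coordinates, with each resulting centroid read as a candidate warehouse location. The names and structure below are assumptions for illustration, not the authors' code:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's-iteration k-means on 2-D customer-space points.

    Each point is (distance_to_facility, demand_frequency); the returned
    centroids are candidate locations for additional warehouses.
    """
    rng = random.Random(seed)
    centroids = [tuple(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        # Assign each customer to its nearest centroid (squared Euclidean)
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                            + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned customers;
        # an empty cluster keeps its previous centroid
        new_centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids
```

In practice the two coordinates would be normalized to comparable scales before clustering, and k would be swept over a range (three new warehouses in the case study) while re-costing the network for each candidate set.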

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 111