Search results for: David D. Oliveira
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 945

195 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among market prices, or between prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in the statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. To train the network, we developed a fast algorithm to generate a valid and suitably large sample from the appropriate process. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios.
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
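The sample-generation step the abstract relies on can be sketched compactly. The following is a minimal, illustrative Python sketch, not the authors' fast algorithm: it builds an approximate fractional Brownian motion with a crude Riemann-Liouville moving average and then applies an Euler-Maruyama step for the fOU dynamics dX = theta*(mu - X)dt + sigma*dB^H. All parameter values are placeholders.

```python
import math
import random

def fou_path(hurst=0.7, theta=2.0, mu=0.0, sigma=0.5, x0=0.0,
             n=500, dt=0.01, seed=42):
    """Approximate fractional Ornstein-Uhlenbeck sample path.

    The fractional Brownian motion is approximated with a crude
    Riemann-Liouville moving average of i.i.d. Gaussians; a fast exact
    generator (e.g. circulant embedding) would be used in practice.
    """
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # kernel weights (k*dt)^(H - 1/2) for the moving-average approximation
    w = [((k + 1) * dt) ** (hurst - 0.5) for k in range(n)]
    # approximate fBm values B[k] = sqrt(dt) * sum_{j<k} w[k-1-j] * z[j]
    b = [0.0]
    for k in range(1, n + 1):
        b.append(math.sqrt(dt) * sum(w[k - 1 - j] * z[j] for j in range(k)))
    # Euler-Maruyama for dX = theta*(mu - X) dt + sigma * dB^H
    x = [x0]
    for k in range(1, n + 1):
        x.append(x[-1] + theta * (mu - x[-1]) * dt + sigma * (b[k] - b[k - 1]))
    return x

path = fou_path()
```

In a training pipeline, many such paths with randomized (hurst, theta, sigma) would be generated and fed to a network that regresses the parameters from the path.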

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 92
194 Evaluation and Proposal for Improvement of the Flow Measurement Equipment in the Bellavista Drinking Water System of the City of Azogues

Authors: David Quevedo, Diana Coronel

Abstract:

The present article carries out an evaluation of the drinking water system in the Bellavista sector of the city of Azogues, with the purpose of determining the appropriate equipment to record the actual consumption flows of the inhabitants in said sector. Taking into account that the study area is located in a rural and economically disadvantaged area, there is an urgent need to establish a control system for the consumption of drinking water in order to conserve and manage the vital resource in the best possible way, considering that the water source supplying this sector is approximately 9 km away. The research began with the collection of cartographic, demographic, and statistical data of the sector, determining the coverage area, population projection, and a provision that guarantees the supply of drinking water to meet the water needs of the sector's inhabitants. By using hydraulic modeling in EPANET 2.0, the United States Environmental Protection Agency's application for modeling drinking water distribution systems, theoretical hydraulic data were obtained, which were used to design and justify the most suitable measuring equipment for the Bellavista drinking water system. Taking into account a minimum service life of the drinking water system of 30 years, future flow rates were calculated for the design of the macro-measuring device. After analyzing the network, it was evident that the Bellavista sector has an average consumption of 102.87 liters per person per day, but considering that Ecuadorian regulations recommend a provision of 180 liters per person per day for the geographical conditions of the sector, this value was used for the analysis. With all the collected and calculated information, the conclusion was reached that the Bellavista drinking water system needs a 125 mm electromagnetic macro-measuring device for the first three quinquennia of its service life and a 150 mm device for the following three quinquennia.
The importance of having equipment that provides real and reliable data will allow for the control of water consumption by the population of the sector, measured through micro-measuring devices installed at the entrance of each household, which should match the readings of the macro-measuring device placed after the water storage tank outlet, in order to control losses that may occur due to leaks in the drinking water system or illegal connections.
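The sizing logic described in the abstract (projected population, regulatory endowment, commercial meter diameters) can be illustrated with back-of-envelope arithmetic. In this sketch the base population, growth rate, peak factor, and design velocity are invented for illustration; only the 180 L/person/day endowment and the list of commercial diameters echo the text.

```python
import math

# Illustrative macro-meter sizing; all inputs below except the endowment
# are assumed values, not figures from the study.
population_now = 3000          # assumed served population
growth_rate = 0.01             # assumed annual geometric growth
years = 15                     # first three quinquennia
endowment_lpd = 180            # L/person/day, per Ecuadorian regulation
peak_factor = 2.0              # assumed peak-demand coefficient
v_max = 1.5                    # m/s, assumed design velocity through the meter

population_future = population_now * (1 + growth_rate) ** years
q_avg_ls = population_future * endowment_lpd / 86400      # average flow, L/s
q_design_m3s = peak_factor * q_avg_ls / 1000              # design flow, m^3/s

# smallest commercial diameter keeping velocity at or below v_max
area_needed = q_design_m3s / v_max                        # m^2
d_needed_mm = math.sqrt(4 * area_needed / math.pi) * 1000
commercial_mm = [80, 100, 125, 150, 200]
d_selected = next(d for d in commercial_mm if d >= d_needed_mm)
```

With these assumed inputs the required diameter lands just above 110 mm, so the next commercial size, 125 mm, is selected, consistent with the order of magnitude reported in the abstract.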

Keywords: macrometer, hydraulics, endowment, water

Procedia PDF Downloads 58
193 Delineation of Green Infrastructure Buffer Areas with a Simulated Annealing: Consideration of Ecosystem Services Trade-Offs in the Objective Function

Authors: Andres Manuel Garcia Lamparte, Rocio Losada Iglesias, Marcos Boullón Magan, David Miranda Barros

Abstract:

The biodiversity strategy of the European Union for 2030 mentions climate change as one of the key factors in biodiversity loss and considers green infrastructure as one of the solutions to this problem. In this line, the European Commission has developed a green infrastructure strategy which commits member states to consider green infrastructure in their territorial planning. This green infrastructure is aimed at guaranteeing the provision of a wide number of ecosystem services to support biodiversity and human well-being by countering the effects of climate change. Yet few tools are available to delimit green infrastructure. The available ones consider the potential of the territory to provide ecosystem services. However, these methods usually aggregate several maps of ecosystem services potential without considering possible trade-offs. This can lead to excluding areas with a high potential for providing ecosystem services which have many trade-offs with other ecosystem services. In order to tackle this problem, a methodology is proposed to consider ecosystem services trade-offs in the objective function of a simulated annealing algorithm aimed at delimiting green infrastructure multifunctional buffer areas. To this end, the provision potential maps of the regulating ecosystem services considered to delimit the multifunctional buffer areas are clustered into groups, so that ecosystem services that create trade-offs are excluded from each group. The normalized provision potential maps of the ecosystem services in each group are added to obtain a potential map per group, which is normalized again. The potential maps for each group are then combined into a raster map that shows the highest provision potential value in each cell. The combined map is then used in the objective function of the simulated annealing algorithm. The algorithm is run both using the proposed methodology and considering the ecosystem services individually.
The results are analyzed with spatial statistics and landscape metrics to check the number of ecosystem services that the delimited areas produce, as well as their regularity and compactness. It has been observed that the proposed methodology increases the number of ecosystem services produced by delimited areas, improving their multifunctionality and increasing their effectiveness in preventing climate change impacts.
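A simulated annealing delineation over a combined-potential raster can be sketched as follows; the grid, the compactness weight, and the cooling schedule are illustrative choices, not the study's configuration.

```python
import math
import random

def anneal_buffer(potential, k=12, steps=3000, t0=1.0, cooling=0.999, seed=7):
    """Select k cells maximising summed potential plus a small compactness bonus.

    `potential` plays the role of the combined map (per-cell maximum over
    ecosystem-service groups); weights and schedule are illustrative.
    """
    rng = random.Random(seed)
    rows, cols = len(potential), len(potential[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]

    def objective(sel):
        score = sum(potential[r][c] for r, c in sel)
        s = set(sel)
        # compactness: count 4-neighbour pairs inside the selection
        adj = sum(1 for r, c in sel if (r + 1, c) in s)
        adj += sum(1 for r, c in sel if (r, c + 1) in s)
        return score + 0.1 * adj

    sel = rng.sample(cells, k)
    f = objective(sel)
    best, best_f = list(sel), f
    t = t0
    for _ in range(steps):
        cand = list(sel)
        cand[rng.randrange(k)] = rng.choice([c for c in cells if c not in cand])
        fc = objective(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc >= f or rng.random() < math.exp((fc - f) / t):
            sel, f = cand, fc
            if f > best_f:
                best, best_f = list(sel), f
        t *= cooling
    return best, best_f

random.seed(1)
grid = [[random.random() for _ in range(10)] for _ in range(10)]
selected, score = anneal_buffer(grid)
```

Running the same routine once per individual ecosystem-service map, instead of on the combined map, reproduces the comparison described in the abstract.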

Keywords: ecosystem services trade-offs, green infrastructure delineation, multifunctional buffer areas, climate change

Procedia PDF Downloads 151
192 Rheolaser: Light Scattering Characterization of Viscoelastic Properties of Hair Cosmetics That Are Related to Performance and Stability of the Respective Colloidal Soft Materials

Authors: Heitor Oliveira, Gabriele De-Waal, Juergen Schmenger, Lynsey Godfrey, Tibor Kovacs

Abstract:

Rheolaser MASTER™ makes use of multiple scattering of light, caused by scattering objects in a continuous medium (such as droplets and particles in colloids), to characterize the viscoelasticity of soft materials. It offers an alternative to conventional rheometers for characterizing the viscoelasticity of products such as hair cosmetics. Up to six measurements at controlled temperature can be carried out simultaneously (10-15 min), and the method requires only minor sample preparation work. In contrast to conventional rheometer-based methods, no mechanical stress is applied to the material during the measurements. Therefore, the properties of the exact same sample can be monitored over time, as in aging and stability studies. We determined the elastic index (EI) of water/emulsion mixtures (1 ≤ fat alcohols (FA) ≤ 5 wt%) and emulsion/gel-network mixtures (8 ≤ FA ≤ 17 wt%) and compared it with the elastic/storage modulus (G’) of the respective samples measured on a TA conventional rheometer with flat-plate geometry. As expected, it was found that log(EI) vs log(G’) presents a linear behavior. Moreover, log(EI) increased in a linear fashion with solids level over the entire range of compositions (1 ≤ FA ≤ 17 wt%), while rheometer measurements were limited to samples down to a 4 wt% solids level; a concentric cylinder geometry would be required for more diluted samples (FA < 4 wt%), and rheometer results from different sample holder geometries are not comparable. A plot of the Rheolaser output parameter solid-liquid balance (SLB) vs EI was suitable for monitoring product aging processes. These data could quantitatively describe observations such as the formation of lumps over aging time. Moreover, this method allowed us to identify that different specifications of a key raw material (RM < 0.4 wt%) in the respective gel-network (GN) product have a minor impact on product viscoelastic properties that is not consumer-perceivable after a short aging time.
Broadening of an RM spec range typically has a positive impact on cost savings. Furthermore, the photon path length (λ*), which according to Mie theory is proportional to droplet size and inversely proportional to the volume fraction of scattering objects, together with the EI was suitable for characterizing product destabilization processes (e.g., coalescence and creaming) and for predicting product stability about eight times faster than our standard methods. Using these parameters, we could successfully identify formulation and process parameters that resulted in unstable products. In conclusion, Rheolaser allows quick and reliable characterization of viscoelastic properties of hair cosmetics that are related to their performance and stability. It operates over a broad range of product compositions and has applications spanning from the formulation of our hair cosmetics to fast release criteria in our production sites. Last but not least, this powerful tool shortens R&D development time, enabling faster delivery of new products to the market and consequent cost savings.
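The reported linear relation between log(EI) and log(G’) can be checked with an ordinary least-squares fit on log-transformed data; the data points below are synthetic power-law values, not measurements from the study.

```python
import math

def loglog_fit(g_prime, ei):
    """Least-squares slope and intercept of log(EI) versus log(G')."""
    xs = [math.log(g) for g in g_prime]
    ys = [math.log(e) for e in ei]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# synthetic power-law data EI = 0.05 * G'^1.3 (illustrative, not measured)
g = [10, 50, 100, 500, 1000, 5000]
e = [0.05 * gi ** 1.3 for gi in g]
slope, intercept = loglog_fit(g, e)
```

With real paired Rheolaser/rheometer data, the fitted slope and intercept would provide the calibration between the two instruments that the abstract describes.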

Keywords: colloids, hair cosmetics, light scattering, performance and stability, soft materials, viscoelastic properties

Procedia PDF Downloads 155
191 Media, Myth and Hero: Sacred Political Narrative in Semiotic and Anthropological Analysis

Authors: Guilherme Oliveira

Abstract:

The assimilation of images and their potential symbolism into lived experiences is inherent. It is through this exercise of recognition via imagistic records that the questioning of the origins of a constant narrative stimulated by the media arises. The construction of the "Man" archetype and the reflections of active masculine imagery in the 21st century, when conveyed through media channels, could potentially have detrimental effects. Addressing this systematic behavioral chronology of virile cisgender, permeated imagistically through these means, involves exploring potential resolutions. Thus, an investigation process is initiated into the potential representation of the 'hero' in this media emulation through idols contextualized in the political sphere, with the purpose of elucidating the processes of simulation and emulation of narratives based on mythical, historical, and sacred accounts. In this process of sharing, the narratives contained in the imagistic structuring offered by information dissemination channels seek validation through a process of public acceptance. To achieve this consensus, a visual set adorned with mythological and sacred symbolisms adapted to the intended environment is promoted, thus utilizing sociocultural characteristics in favor of political marketing. Visual recognition, therefore, becomes a direct reflection of a cultural heritage acquired through lived human experience, stimulated by continuous representations throughout history. Echoes of imagery and narratives undergo a constant process of resignification of their concepts, sharpened by their premises, and adapted to the environment in which they seek to establish themselves. Political figures analyzed in this article employ the practice of taking possession of symbolisms, mythological stories, and heroisms and adapt their visual construction through a continuous praxis of emulation. Thus, they utilize iconic mythological narratives to gain credibility through belief. 
Utilizing iconic mythological narratives for credibility through belief, the idol becomes the very act of release of trauma, offering believers liberation from preconceived concepts and allowing for the attribution of new meanings. To address this issue and highlight the subjectivities within the intention of the image, a linguistic, semiotic, and anthropological methodology is developed. Linguistics uses expressions like 'Blaming the Image' to create a mechanism of expressive action, questioning why a construction or visual composition is blamed and thus seeking answers in the first act. Semiotics and anthropology develop an imagistic atlas of graphic analysis, seeking to make connections, comparisons, and relations between modern and sacred/mystical narratives, emphasizing the different subjective layers of embedded symbolism. Thus, it constitutes a performative act of disarming the image. It creates a disenchantment of the superficial gaze under the constant reproduction of visual content stimulated by virtual networks, enabling a discussion about the acceptance of caricatures characterized by past fables.

Keywords: image, heroic narrative, media heroism, virile politics, political, myth, sacred performance, visual mythmaking, characterization dynamics

Procedia PDF Downloads 35
190 Controlled Digital Lending, Equitable Access to Knowledge and Future Library Services

Authors: Xuan Pang, Alvin L. Lee, Peggy Glatthaar

Abstract:

Libraries across the world have been an innovation engine of creativity and opportunity for many decades. The ongoing global epidemiological and health crisis illuminates potential reforms, rethinking beyond traditional library operations and services. Controlled Digital Lending (CDL) is one of the emerging technologies libraries have used to deliver information digitally in support of online learning and teaching and to make educational materials more affordable and more accessible. CDL became a popular term in the United States of America (USA) as a result of a white paper authored by Kyle K. Courtney (Harvard University) and David Hansen (Duke University). The paper gave the legal groundwork to explore CDL: Fair Use, the First Sale Doctrine, and Supreme Court rulings. Library professionals implemented this new technology to fulfill their users’ needs. Three libraries in the state of Florida (University of Florida, Florida Gulf Coast University, and Florida A&M University) started a conversation about how to develop strategies to make CDL work at each institution. This paper shares the stories of piloting and initiating a CDL program to ensure students have reliable, affordable access to the course materials they need to be successful. Additionally, this paper offers an overview of the emerging trends of Controlled Digital Lending in the USA and demonstrates the development of the CDL platforms, policies, and implementation plans. The paper further discusses challenges and lessons learned and how each institution plans to sustain the program in future library services. The fundamental mission of the library is to provide users unrestricted access to library resources regardless of their physical location, disability, health status, or other circumstances.
The professional due diligence of librarians, as information professionals, is to make educational resources more affordable and accessible. CDL opens a new frontier of library services as a mechanism for library practice to enhance users’ experience of library services. Libraries should consider exploring this tool to distribute library resources in an effective and equitable way. This new methodology has potential benefits for libraries and end users.

Keywords: controlled digital lending, emerging technologies, equitable access, collaborations

Procedia PDF Downloads 119
189 Microscale Observations of Gas Cell Wall Rupture in Bread Dough during Baking and Comparison with 2D/3D Finite Element Simulations of Stress Concentration

Authors: Kossigan Bernard Dedey, David Grenier, Tiphaine Lucas

Abstract:

Bread dough is often described as a dispersion of gas cells in a continuous gluten/starch matrix. The final bread crumb structure is strongly related to the rupture of gas cell walls (GCWs) during baking. At the end of proofing and during baking, part of the thinnest GCWs between expanding gas cells is reduced to a gluten film about the size of a starch granule. When such a size is reached, gluten and starch granules must be considered as interacting phases in order to account for heterogeneities and appropriately describe GCW rupture. Among the experimental investigations carried out to assess GCW rupture, none observed the rupture under baking conditions at the GCW scale. In addition, attempts to numerically understand GCW rupture are usually not performed at the GCW scale and often treat GCWs as continuous. The most relevant paper that accounted for heterogeneities dealt with gluten/starch interactions and their impact on the mechanical behavior of dough films; however, stress concentration in the GCW was not discussed. In this study, both experimental and numerical approaches were used to better understand GCW rupture in bread dough during baking. Experimentally, a macroscope placed in front of a two-chamber device was used to observe the rupture of a real GCW 200 micrometers in thickness. Special attention was paid to mimicking baking conditions as far as possible (temperature, gas pressure, and moisture). Various pressure differences between the two sides of the GCW were applied, and different modes of fracture initiation and propagation in GCWs were observed. Numerically, the impact of gluten/starch interactions (cohesion or non-cohesion) and of the rheological moduli ratio on the mechanical behavior of a GCW under unidirectional extension was assessed in 2D/3D. A non-linear viscoelastic and hyperelastic formulation was used to capture the finite strains involved in GCWs during baking. Stress concentration within the GCW was identified.
Simulated stress concentrations were discussed in light of the GCW failures observed in the device. The gluten/starch granule interactions and the rheological modulus ratio were found to have a strong effect on the stress levels reached in the GCW.

Keywords: dough, experimental, numerical, rupture

Procedia PDF Downloads 108
188 Saving the Decolonized Subject from Neglected Tropical Diseases: Public Health Campaign and Household-Centred Sanitation in Colonial West Africa, 1900-1960

Authors: Adebisi David Alade

Abstract:

In pre-colonial West Africa, the deadliness of the climate vis-à-vis malaria and other tropical diseases to Europeans turned the region into the “white man’s grave.” Thus, immediately after the partition of Africa in 1885, the mission civilisatrice and mise en valeur not only became a pretext for the establishment of colonial rule; from a medical point of view, the control and possible eradication of disease on the continent emerged as one of the first concerns of the European colonizers. Though geared toward making Africa exploitable, historical evidence suggests that some colonial Water, Sanitation and Hygiene (WASH) policies and projects reduced certain tropical diseases in some West African communities. Exploring some of these disease control interventions by way of historical revisionism, this paper challenges the orthodox interpretation of colonial sanitation and public health measures in West Africa. It critiques the deployment of race and class as analytical tools for the study of colonial WASH projects, an exercise which often reduces the complexity and ambiguity of colonialism to the binary of colonizer and colonized. Since West Africa presently ranks high among regions with Neglected Tropical Diseases (NTDs), it is imperative to decentre colonial racism and economic exploitation in African history in order to give room for Africans to see themselves in other ways. Far from resolving the problem of NTDs by fiat in the region, this study seeks to highlight important blind spots in African colonial history in an attempt to prevent post-colonial African leaders from throwing the baby out with the bathwater.
As scholars researching colonial sanitation and public health in the continent rarely examine its complex meaning and content, this paper submits that the outright demonization of colonial rule across space and time continues to build ideological wall between the present and the past which not only inhibit fruitful borrowing from colonial administration of West Africa, but also prevents a wide understanding of the challenges of WASH policies and projects in most West African states.

Keywords: colonial rule, disease control, neglected tropical diseases, WASH

Procedia PDF Downloads 166
187 Re-Examining the Distinction between Odour Nuisance and Health Impact: A Community’s Campaign against Landfill Gas Exposure in Shongweni, South Africa

Authors: Colin David La Grange, Lisa Frost Ramsay

Abstract:

Hydrogen sulphide (H2S) is a minor component of landfill gas, but significant in its distinct odorous quality and its association with landfill-related community complaints. The World Health Organisation (WHO) provides two guidelines for H2S: a health guideline of 150 µg/m3 as a 24-hour average and a nuisance guideline of 7 µg/m3 as a 30-minute average. Although this is a practical distinction for impact assessment, this paper highlights the danger of the apparent dualism between nuisance and health impact, particularly when it is used to dismiss community concerns about perceived health impacts at low concentrations of H2S, as in the case of a community’s battle against the impacts of a landfill in Shongweni, KwaZulu-Natal, South Africa. Here, community members used a community-developed mobile phone application to report a range of health symptoms that coincided with, or occurred subsequent to, odour events and localised H2S peaks. Local doctors also documented increased visits for symptoms of respiratory distress, eye and skin irritation, and stress after such odour events. Objectively measured H2S and other pollutant concentrations during these events, however, remained below WHO health guidelines. This case study highlights the importance of the physiological link between the experience of environmental nuisance and overall health and wellbeing, showing these to be less distinct than the WHO guidelines would suggest. The potential mechanisms of impact of an odorous plume, with key constituents at concentrations below traditional health thresholds, on psychologically and/or physiologically sensitised individuals are described. In the case of psychological sensitisation, previously documented mechanisms such as aversive conditioning and odour-triggered panic are relevant.
Physiological sensitisation to environmental pollutants, evident as a seemingly disproportionate physical (allergy-type) response to low concentrations or short-duration exposures of a toxin or toxins, has been extensively examined but remains poorly understood. The links between a heightened sensitivity to toxic compounds, the accumulation of some compounds in the body, and a pre-existing or associated immunological stress disorder are presented as a possible explanation.
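The gap between the two WHO guidelines that the case study turns on, an odour (30-minute) exceedance without a health (24-hour) exceedance, can be made concrete with a toy averaging exercise; the concentration series below is synthetic, not Shongweni monitoring data.

```python
# Illustrative check of a minute-resolution H2S series against the two WHO
# guidelines cited in the text; the series itself is invented.
NUISANCE_30MIN = 7.0    # ug/m3, 30-minute average guideline
HEALTH_24H = 150.0      # ug/m3, 24-hour average guideline

# 24 h of minute data: low background with a one-hour odour event
series = [2.0] * 600 + [40.0] * 60 + [2.0] * 780   # 1440 minutes

def max_rolling_mean(values, window):
    """Largest mean over any contiguous window of the given length."""
    return max(sum(values[i:i + window]) / window
               for i in range(len(values) - window + 1))

nuisance_exceeded = max_rolling_mean(series, 30) > NUISANCE_30MIN
health_exceeded = sum(series) / len(series) > HEALTH_24H
```

Here the odour event drives a 30-minute average of 40 µg/m3, well above the nuisance guideline, while the 24-hour average stays near 3.6 µg/m3, far below the health guideline, mirroring the pattern the community reported.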

Keywords: immunological stress disorder, landfill odour, odour nuisance, odour sensitisation, toxin accumulation

Procedia PDF Downloads 108
186 The Routine Use of a Negative Pressure Incision Management System in Vascular Surgery: A Case Series

Authors: Hansraj Bookun, Angela Tan, Rachel Xuan, Linheng Zhao, Kejia Wang, Animesh Singla, David Kim, Christopher Loupos

Abstract:

Introduction: Incisional wound complications in vascular surgery patients represent a significant clinical and economic burden of morbidity and mortality. The objective of this study was to trial the feasibility of applying the Prevena negative pressure incision management system as a routine dressing in patients who had undergone arterial surgery. Conventionally, Prevena has been applied to groin incisions, but this study features applications on multiple wound sites such as the thigh or major amputation stumps. Method: This was a cross-sectional, observational, single-centre case series of 12 patients who had undergone major vascular surgery. Their wounds were managed with the Prevena system, applied either intra-operatively or on the first post-operative day. Demographic and operative details were collated, as well as length of stay and complication rates. Results: There were 9 males (75%) with a mean age of 66 years, and the comorbid burden was as follows: ischaemic heart disease (92%), diabetes (42%), hypertension (100%), stage 4 or greater kidney impairment (17%), and current or ex-smoking (83%). The main indications were acute ischaemia (33%), claudication (25%), and gangrene (17%). There were single instances of an occluded popliteal artery aneurysm, diabetic foot infection, and rest pain. Half of the patients (50%) had hybrid operations with iliofemoral endarterectomies, patch arterioplasties, and further peripheral endovascular treatment. There were 4 complex arterial bypass operations and 2 major amputations. The mean length of stay was 17 ± 10 days, with a range of 4 to 35 days. A single complication, in the form of a lymphocoele, was encountered in the context of an iliofemoral endarterectomy and patch arterioplasty. This was managed conservatively. There were no deaths.
Discussion: This series suggests that, in conjunction with safe vascular surgery, the Prevena wound management system is associated with low absolute wound complication rates and is a valuable adjunct in the treatment of vascular patients.

Keywords: wound care, negative pressure, vascular surgery, closed incision

Procedia PDF Downloads 113
185 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints in extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, to better view SAR data in an image domain comparable to what a human would view and to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched-filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene.
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
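The per-point summation described above parallelizes naturally because each voxel's sum is independent of every other voxel. The following toy sketch (ideal point response, magnitude only, no phase handling, invented geometry) shows the core backprojection loop on a small grid; on a GPU, the inner sum would simply become one thread per voxel.

```python
import math

# Toy magnitude-only backprojection: one point scatterer and ideal
# range-compressed pulses; geometry and bin spacing are illustrative.
R0, DR, NBINS = 400.0, 0.5, 1000           # start range (m), bin size (m), bins
platforms = [(-100.0 + 10.0 * k, 0.0, 500.0) for k in range(21)]
scatterer = (10.0, 20.0, 0.0)

def rng_bin(p, q):
    """Range bin of point q as seen from platform position p."""
    return round((math.dist(p, q) - R0) / DR)

# simulate range-compressed data: unit return in the scatterer's bin per pulse
data = []
for p in platforms:
    row = [0.0] * NBINS
    row[rng_bin(p, scatterer)] = 1.0
    data.append(row)

# backproject onto candidate ground voxels: each voxel independently sums
# the data sample at its own predicted range bin for every pulse
voxels = [(x, y, 0.0) for x in (0.0, 5.0, 10.0, 15.0, 20.0)
          for y in (10.0, 20.0, 30.0, 40.0, 50.0)]
image = [sum(data[i][rng_bin(p, v)] for i, p in enumerate(platforms))
         for v in voxels]
```

The voxel coinciding with the scatterer accumulates a contribution from every pulse and stands out as the image peak, which is the mechanism that lets a DEM or arbitrary point cloud replace the ground plane.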

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 56
184 Developing Social Responsibility Values in Nascent Entrepreneurs through Role-Play: An Explorative Study of University Students in the United Kingdom

Authors: David W. Taylor, Fernando Lourenço, Carolyn Branston, Paul Tucker

Abstract:

There are an increasing number of students at universities in the United Kingdom engaging in entrepreneurship role-play to explore business start-up as a career alternative to employment. These role-play activities have been shown to have a positive influence on students’ entrepreneurial intentions. Universities also play a role in developing graduates’ awareness of social responsibility. However, social responsibility is often missing from these entrepreneurship role-plays. It is important that these role-play activities include the development of values that support social responsibility, in line with those running hybrid, humane, and sustainable enterprises, and not simply focus on profit. The Young Enterprise (YE) Start-Up programme is an example of a role-play activity that is gaining in popularity amongst United Kingdom universities seeking ways to give students insight into a business start-up. A post-92 university in the North-West of England has adapted the traditional YE directorships (e.g., Marketing Director, Sales Director) by including a Corporate Social Responsibility (CSR) Director in all of the team-based YE Start-Up businesses. The aim of introducing this directorship was to observe whether such a role would help create a more socially responsible value system within each company and in turn shape business decisions. This paper investigates role-play as a tool to help enterprise educators develop socially responsible attitudes and values in nascent entrepreneurs. A mixed qualitative methodology, including interviews, role-play, and reflection, has been used to help students develop positive value characteristics through the exploration of unethical and selfish behaviours.
The initial findings indicate that role-play helped CSR Directors learn and gain insights into the importance of corporate social responsibility, influenced the values and actions of their YE Start-Ups, and increased the likelihood that if the participants were to launch a business post-graduation, that the intent would be for the business to be socially responsible. These findings help inform educators on how to develop socially responsible nascent entrepreneurs within a traditionally profit orientated business model.

Keywords: student entrepreneurship, young enterprise, social responsibility, role-play, values

Procedia PDF Downloads 131
183 Pakistan’s Counterinsurgency Operations: A Case Study of Swat

Authors: Arshad Ali

Abstract:

The Taliban insurgency in Swat, which started apparently as a social movement in 2004, transformed into an anti-Pakistan Islamist insurgency by joining hands with the Tehrik-e-Taliban Pakistan (TTP) upon its formation in 2007. It quickly spread beyond Swat by 2009, making Swat the second stronghold of the TTP after FATA. It prompted the Pakistan military to launch a full-scale counterinsurgency military operation, code-named Rah-i-Rast, to regain control of Swat. Operation Rah-i-Rast was successful not only in restoring the writ of the state but, more importantly, in creating a consensus against the spread of the Taliban insurgency in Pakistan at the political, social and military levels. This operation became a test case for the civilian government and military in seeking a sustainable solution to the TTP insurgency in the north-west of Pakistan. This study analyzes why the counterinsurgency operation Rah-i-Rast was successful while previous ones failed. The study also explores factors which created consensus against the Taliban insurgency at the political and social levels, as well as reasons which hindered such a consensual approach in the past. The study argues that the previous initiatives failed due to various factors, including the Pakistan army’s lack of a comprehensive counterinsurgency model, weak political will and public support, and state negligence. Also, the initial counterinsurgency policies were ad hoc in nature, fluctuating between military operations and peace deals. After continuous failure, the military revisited its approach to counterinsurgency in operation Rah-i-Rast. The security forces learnt from their past experiences and developed a pragmatic counterinsurgency model: ‘clear, hold, build, and transfer.’ The military also adopted a population-centric approach to provide security to the local people. This case study of Swat evaluates the strengths and weaknesses of Pakistan's counterinsurgency operations as well as peace agreements.
It analyzes operation Rah-i-Rast in the light of David Galula’s model of counterinsurgency. Unlike the existing literature, the study underscores the bottom-up approach adopted by Pakistan’s military and government in engaging the local population to sustain post-operation stability in Swat. More specifically, the study emphasizes the hybrid counterinsurgency model ‘clear, hold, build, and transfer’ in Swat.

Keywords: insurgency, counterinsurgency, clear, hold, build, transfer

Procedia PDF Downloads 338
182 Explosion Mechanics of Aluminum Plates Subjected to the Combined Effect of Blast Wave and Fragment Impact Loading: A Multicase Computational Modeling Study

Authors: Atoui Oussama, Maazoun Azer, Belkassem Bachir, Pyl Lincy, Lecompte David

Abstract:

For many decades, researchers have focused on understanding the dynamic behavior of different structures and materials subjected to fragment impact or blast loads separately. Explosion mechanics and impact physics studies dealing with the numerical modeling of the response of protective structures under the synergistic effect of a blast wave and the impact of fragments are quite limited in the literature. This article numerically evaluates the nonlinear dynamic behavior and damage mechanisms of aluminum plates (EN AW-1050A-H24) under different combined loading scenarios, varied by the sequence of the applied loads, using the commercial software LS-DYNA. On the one hand, with respect to the terminal ballistics field investigations, a Lagrangian (LAG) formulation is used to evaluate the different failure modes of the target material in the case of a fragment impact. On the other hand, with respect to the blast field analysis, an Arbitrary Lagrangian-Eulerian (ALE) formulation is considered to study the fluid-structure interaction (FSI) of the shock wave and the plate in the case of a blast loading. Four different loading scenarios are considered: (1) blast loading only, (2) fragment impact only, (3) blast loading followed by a fragment impact, and (4) a fragment impact followed by blast loading. From the numerical results, it was observed that when the impact load is applied to the plate prior to the blast load, the plate suffers more severe damage due to the hole enlargement phenomenon and the effects of crack propagation on the circumference of the damaged zone. Moreover, it was found that the hole from the fragment impact loading was enlarged to about three times the diameter of the projectile. The validation of the proposed computational model is based in part on previous experimental data obtained by the authors and in part on experimental data obtained from the literature. 
A good correspondence between the numerical and experimental results is found.

Keywords: computational analysis, combined loading, explosion mechanics, hole enlargement phenomenon, impact physics, synergistic effect, terminal ballistic

Procedia PDF Downloads 163
181 Biophysical Assessment of the Ecological Condition of Wetlands in the Parkland and Grassland Natural Regions of Alberta, Canada

Authors: Marie-Claude Roy, David Locky, Ermias Azeria, Jim Schieck

Abstract:

It is estimated that up to 70% of the wetlands in the Parkland and Grassland natural regions of Alberta have been lost due to various land-use activities. These losses include the ecosystem functions and services they once provided. Those wetlands remaining are often embedded in a matrix of human-modified habitats, and despite efforts taken to protect them, the effects of land use on wetland condition and function remain largely unknown. We used biophysical field data and remotely-sensed human footprint data collected at 322 open-water wetlands by the Alberta Biodiversity Monitoring Institute (ABMI) to evaluate the impact of surrounding land use on the physico-chemical characteristics and plant functional traits of wetlands. Eight physico-chemical parameters were assessed: wetland water depth, water temperature, pH, salinity, dissolved oxygen, total phosphorus, total nitrogen, and dissolved organic carbon. Three plant functional traits were evaluated: 1) origin (native and non-native), 2) life history (annual, biennial, and perennial), and 3) habitat requirements (obligate-wetland and obligate-upland). Land-use intensity was quantified within a 250-meter buffer around each wetland. Ninety-nine percent of wetlands in the Grassland and Parkland regions of Alberta have land-use activities in their surroundings, with most being agriculture-related. Total phosphorus in wetlands increased with the cover of surrounding agriculture, while salinity, total nitrogen, and dissolved organic carbon were positively associated with the degree of soft-linear (e.g., pipelines, trails) land uses. The abundance of non-native and annual/biennial plants increased with the amount of agriculture, while urban-industrial land use lowered the abundance of native, perennial, and obligate-wetland plants. Our study suggests that the land-use types surrounding wetlands affect the physico-chemical and biological conditions of wetlands. 
This research suggests that reducing human disturbances through reclamation of wetland buffers may enhance the condition and function of wetlands in agricultural landscapes.

Keywords: wetlands, biophysical assessment, land use, grassland and parkland natural regions

Procedia PDF Downloads 315
180 Occupational Challenges and Adjustment Strategies of Internally Displaced Persons in Abuja, Nigeria

Authors: David Obafemi Adebayo

Abstract:

Occupational challenges have been identified as factors that can cripple the set goals and life ambitions of an Internally Displaced Person (IDP). The main thrust of this study is, therefore, to explore the use of a life support/adjustment strategy with a view to repositioning internally displaced persons in Nigeria to revamp their goals and achieve their life-long ambitions. The study investigates whether there is any significant difference in the occupational challenges and adjustment strategies of IDPs on the basis of gender, religion, years of working experience, and educational qualification. The study, a descriptive survey, adopted a multi-stage sampling technique to select a minimum of 400 internally displaced persons from IDP camps in Yimitu Village, Waru District, in the Federal Capital Territory (FCT), Abuja. The research instrument used for the study was a researcher-designed questionnaire entitled “Questionnaire on Occupational Challenges and Adjustment Strategy of Internally Displaced Persons (QOCASIDPs)”. Eight null hypotheses were tested at the 0.05 alpha level of significance. Frequency counts and percentages, means and rank order, t-test, Analysis of Variance (ANOVA), and Duncan Multiple Range Test (DMRT) (where applicable) were employed to analyze the data. The study confirmed that the occupational challenges of internally displaced persons included loss of employment, vocational discrimination, marginalization by employers of labour, isolation due to joblessness, and lack of occupational freedom. The results were discussed in line with the findings. The study established notable adjustment strategies adopted by internally displaced persons, such as engaging in petty trading, sourcing soft loans from NGOs, setting up small-scale businesses in groups, acquiring new skills, and engaging in further education, among others. 
The study established that there was no significant difference in the occupational challenges of IDPs on the basis of years of working experience and highest educational qualifications, though there was significant difference on the basis of gender as well as religion. Based on the findings of the study, recommendations were made.

Keywords: internally displaced persons, occupational challenges, adjustment strategies, Abuja-Nigeria

Procedia PDF Downloads 341
179 MAOD Is Estimated by Sum of Contributions

Authors: David W. Hill, Linda W. Glass, Jakob L. Vingren

Abstract:

Maximal accumulated oxygen deficit (MAOD), the gold standard measure of anaerobic capacity, is the difference between the oxygen cost of exhaustive severe-intensity exercise and the accumulated oxygen consumption (O2; mL·kg–1). In theory, MAOD can be estimated as the sum of independent estimates of the phosphocreatine and glycolysis contributions, which we refer to as PCr+glycolysis. Purpose: The purpose was to test the hypothesis that PCr+glycolysis provides a valid measure of anaerobic capacity in cycling and running. Methods: The participants were 27 women (mean ± SD, age 22 ± 1 y, height 165 ± 7 cm, weight 63.4 ± 9.7 kg) and 25 men (age 22 ± 1 y, height 179 ± 6 cm, weight 80.8 ± 14.8 kg). They performed two exhaustive cycling and running tests, at speeds and work rates that were tolerable for ~5 min. The rate of oxygen consumption (VO2; mL·kg–1·min–1) was measured in warmups, in the tests, and during 7 min of recovery. Fingerprick blood samples obtained after exercise were analysed to determine peak blood lactate concentration (PeakLac). The VO2 response in exercise was fitted to a model, with a fast ‘primary’ phase followed by a delayed ‘slow’ component, from which the accumulated O2 and the excess O2 attributable to the slow component were calculated. The VO2 response in recovery was fitted to a model with a fast phase and slow component, sharing a common time delay. Oxygen demand (in mL·kg–1·min–1) was determined by extrapolation from steady-state VO2 in warmups; the total oxygen cost (in mL·kg–1) was determined by multiplying this demand by time to exhaustion and adding the excess O2; then, MAOD was calculated as the total oxygen cost minus the accumulated O2. The phosphocreatine contribution (area under the fast phase of the post-exercise VO2) and the glycolytic contribution (converted from PeakLac) were summed to give PCr+glycolysis. 
There was not an interaction effect involving sex, so values for anaerobic capacity were examined using a two-way ANOVA, with repeated measures across method (PCr+glycolysis vs MAOD) and mode (cycling vs running). Results: There was a significant effect only for exercise mode. There was no difference between MAOD and PCr+glycolysis: values were 59 ± 6 mL·kg–1 and 61 ± 8 mL·kg–1 in cycling and 78 ± 7 mL·kg–1 and 75 ± 8 mL·kg–1 in running. Discussion: PCr+glycolysis is a valid measure of anaerobic capacity in cycling and running, and it is as valid for women as for men.
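The arithmetic behind the two estimates can be sketched in a few lines. The following is an illustrative sketch only, not the authors' code: all numeric values are hypothetical, and the lactate-to-oxygen conversion factor (3.0 mL·kg–1 per mmol·L–1) is a common assumption from the exercise physiology literature, not a value taken from this abstract.

```python
def maod(o2_demand, time_to_exhaustion, excess_o2, accumulated_o2):
    """MAOD = total oxygen cost - accumulated oxygen consumption.

    o2_demand          : mL.kg-1.min-1, extrapolated from warm-up steady states
    time_to_exhaustion : min
    excess_o2          : mL.kg-1, attributable to the VO2 slow component
    accumulated_o2     : mL.kg-1, measured during exercise
    """
    total_cost = o2_demand * time_to_exhaustion + excess_o2
    return total_cost - accumulated_o2

def pcr_plus_glycolysis(pcr_contribution, peak_lactate, lactate_o2_equiv=3.0):
    """Sum of independent phosphocreatine and glycolytic estimates.

    pcr_contribution : mL.kg-1, area under the fast phase of post-exercise VO2
    peak_lactate     : mmol.L-1, converted to an oxygen equivalent via an
                       assumed factor (hypothetical, not from the abstract)
    """
    return pcr_contribution + peak_lactate * lactate_o2_equiv

# Hypothetical ~5-min cycling test:
est_maod = maod(o2_demand=55.0, time_to_exhaustion=5.0,
                excess_o2=8.0, accumulated_o2=220.0)
est_sum = pcr_plus_glycolysis(pcr_contribution=30.0, peak_lactate=11.0)
print(est_maod, est_sum)  # 63.0 63.0
```

With these made-up inputs the two estimates agree exactly; in the study, agreement was established statistically across participants and modes.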

Keywords: alactic, anaerobic, cycling, ergometer, glycolysis, lactic, lactate, oxygen deficit, phosphocreatine, running, treadmill

Procedia PDF Downloads 116
178 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation

Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira

Abstract:

The comprehension of features of the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs) - mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts) - is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications are expected in their original energy spectra, anisotropy, and chemical compositions due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for the magnetic fields in the universe is very large because the field strengths and, especially, their orientations have large uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulation of charged particles traveling through a simulated magnetized universe is the straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: an accurate numerical modeling of charged-particle diffusion in magnetic fields, and an accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to impose that the particle-tracking method conserve the particle’s total energy and that energy changes result only from interactions with background photons. Hence, special attention should be paid to computational effects. 
Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle-tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Moreover, each cell has its field orientation randomly chosen, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities. The smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses have been treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. For a particle with the typical energy of interest, we have shown the integration method's performance in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy (with energy losses). Additionally, we plotted the distance amplification from rectilinear propagation as a function of the traveled distance, of the particle's magnetic rigidity (without energy losses), and of the particle's energy (with energy losses), to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
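The cell-based field model described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: it assigns one fixed, randomly oriented 1 nG field per 1 Mpc cell (seeded deterministically per cell so repeated lookups agree) and omits the border-region smoothing between neighboring cells that the abstract describes.

```python
import math
import random

CELL_SIZE_MPC = 1.0   # coherence cell size, Mpc
B_NG = 1.0            # uniform field strength, nanogauss

def cell_field(x, y, z):
    """Return (Bx, By, Bz) in nG for the cell containing (x, y, z) in Mpc."""
    cell = (math.floor(x / CELL_SIZE_MPC),
            math.floor(y / CELL_SIZE_MPC),
            math.floor(z / CELL_SIZE_MPC))
    rng = random.Random(hash(cell))      # one fixed orientation per cell
    # Draw a direction uniformly distributed on the sphere.
    u = rng.uniform(-1.0, 1.0)           # cos(theta)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - u * u)
    return (B_NG * s * math.cos(phi), B_NG * s * math.sin(phi), B_NG * u)

# Two points inside the same cell see the identical field vector.
b1 = cell_field(0.2, 0.2, 0.2)
b2 = cell_field(0.8, 0.8, 0.8)
assert b1 == b2
```

A full implementation would additionally blend the orientations of the two nearest cells within the outer 5% of each cell radius, weighted by distance, to remove the discontinuity at cell borders.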

Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy

Procedia PDF Downloads 112
177 Rhizoremediation of Contaminated Soils in Sub-Saharan Africa: Experimental Insights of Microbe Growth and Effects of Paspalum Spp. for Degrading Hydrocarbons in Soils

Authors: David Adade-Boateng, Benard Fei Baffoe, Colin A. Booth, Michael A. Fullen

Abstract:

The remediation of diesel fuel, oil and grease in contaminated soils obtained from a mine site in Ghana is explored using rhizoremediation technology with different levels of nutrient amendment (N (nitrogen) in compost (0.2, 0.5 and 0.8%), urea (0.2, 0.5 and 0.8%) and topsoil (0.2, 0.5 and 0.8%)) for a native species. A Ghanaian native grass species, Paspalum spp. from the Poaceae family, found across Sub-Saharan Africa, was selected following the development of essential and desirable growth criteria. Vegetative parts of the species were subjected to ten treatments in a Randomized Complete Block Design (RCBD) with three replicates. The plant-associated microbial community of Paspalum spp. was examined. An assessment of the influence of Paspalum spp. on the abundance and activity of micro-organisms in the rhizosphere revealed a build-up of microbial communities over a three-month period. This was assessed using the MPN method, which showed that the rhizospheric samples from the treatments were significantly different (P < 0.05). Multiple comparisons showed how microbial populations built up in the rhizosphere for the different treatments. Treatments G (0.2% compost), H (0.5% compost) and I (0.8% compost) performed significantly better than the other treatments, while treatments D (0.2% topsoil) and F (0.8% topsoil) did not differ significantly. Treatments A (0.2% urea), B (0.5% urea), C (0.8% urea) and E (0.5% topsoil) also performed similarly. Residual diesel and oil concentrations (as total petroleum hydrocarbons, TPH, and oil and grease) were measured using infra-red spectroscopy and gravimetric methods, respectively. The presence of a single species successfully enhanced the removal of hydrocarbons from soil. Paspalum spp. subjected to compost levels (0.5% and 0.8%) and topsoil levels (0.5% and 0.8%) showed significantly lower residual hydrocarbon concentrations compared to those treated with urea. 
A strong relationship (P < 0.001) between the abundance of hydrocarbon-degrading micro-organisms in the rhizosphere and hydrocarbon biodegradation was demonstrated for rhizospheric samples with treatments G (0.2% compost), H (0.5% compost) and I (0.8% compost). Amendment with 0.8% compost (N-level) can improve the application effectiveness. These findings have wide-reaching implications for the environmental management of soils contaminated by hydrocarbons in Sub-Saharan Africa. However, it is necessary to further investigate the in situ rhizoremediation potential of Paspalum spp. at the field scale.

Keywords: rhizoremediation, microbial population, rhizospheric sample, treatments

Procedia PDF Downloads 292
176 Micelles Made of Pseudo-Proteins for Solubilization of Hydrophobic Biologicals

Authors: Sophio Kobauri, David Tugushi, Vladimir P. Torchilin, Ramaz Katsarava

Abstract:

Hydrophobically/hydrophilically modified functional polymers are of high interest in modern biomedicine due to their ability to solubilize water-insoluble or poorly soluble (hydrophobic) drugs. Among the many approaches being developed in this direction, one of the most effective is the use of polymeric micelles (PMs), i.e., micelles formed by amphiphilic block-copolymers, for the solubilization of hydrophobic biologicals. For therapeutic purposes, PMs are required to be stable and biodegradable, although quite a few amphiphilic block-copolymers have been described as capable of forming stable micelles with good solubilization properties. For obtaining micelle-forming block-copolymers, polyethylene glycol (PEG) derivatives are desirable as the hydrophilic shell, because PEG is the most popular biocompatible hydrophilic block and various hydrophobic blocks (polymers) can be attached to it. The construction of the hydrophobic core, however, remains the main problem for nanobioengineers, owing to the complex requirements on micelle structure. Considering the above, our research goal was to obtain biodegradable micelles for the solubilization of hydrophobic drugs and biologicals. For this purpose, we used biodegradable polymers - pseudo-proteins (PPs) (synthesized from naturally occurring amino acids and other non-toxic building blocks, such as fatty diols and dicarboxylic acids) - as the hydrophobic core, since these polymers show reasonable biodegradation rates and excellent biocompatibility. In the present study, we used the hydrophobic amino acid L-phenylalanine (MW 4000-8000 Da) instead of L-leucine. Amino-PEG (MW 2000 Da) was used as the hydrophilic fragment for constructing the suitable micelles. The molecular weight of the PP (the hydrophobic core of the micelle) was regulated by varying the ratios of the monomers used. Micelles were obtained by dissolving the synthesized amphiphilic polymer in water. 
The micelle-forming property was tested using dynamic light scattering (Malvern Zetasizer Nano ZS ZEN3600). The study showed that the obtained amphiphilic block-copolymers form stable neutral micelles 100 ± 7 nm in size at a 10 mg/mL concentration, which is considered an optimal range for pharmaceutical micelles. These preliminary data allow us to conclude that the obtained micelles are suitable for the delivery of poorly water-soluble drugs and biologicals.

Keywords: amino acid – L-phenylalanine, pseudo-proteins, amphiphilic block-copolymers, biodegradable micelles

Procedia PDF Downloads 124
175 Improving Rural Access to Specialist Emergency Mental Health Care: Using a Time and Motion Study in the Evaluation of a Telepsychiatry Program

Authors: Emily Saurman, David Lyle

Abstract:

In Australia, a well-serviced rural town might have a psychiatrist visit once a month, with more frequent visits from a psychiatric nurse, but many towns have no resident access to mental health specialists. Access to specialist care would not only reduce patient distress and benefit outcomes but also facilitate the effective use of limited resources. The Mental Health Emergency Care-Rural Access Program (MHEC-RAP) was developed to improve access to specialist emergency mental health care in rural and remote communities using telehealth technologies. However, there has been no benchmark to gauge program efficiency or capacity, i.e., to determine whether the program activity is justifiably sufficient. The evaluation of MHEC-RAP used multiple methods and applied a modified theory of access to assess the program and its aim of improved access to emergency mental health care. This was the first evaluation of a telepsychiatry service to include a time and motion study design examining program time expenditure, efficiency, and capacity. The time and motion study analysis was combined with an observational study of the program structure and function to assess the balance between program responsiveness and efficiency. Previous program studies have demonstrated that MHEC-RAP has improved access and is used and effective. The findings from the time and motion study suggest that MHEC-RAP has the capacity to manage increased activity within the current model structure without loss of responsiveness or efficiency in the provision of care. Enhancing program responsiveness and efficiency will also support a claim of the program’s value for money. MHEC-RAP is a practical telehealth solution for improving access to specialist emergency mental health care. 
The findings from this evaluation have already attracted the attention of other regions in Australia interested in implementing emergency telepsychiatry programs and are now informing the progressive establishment of mental health resource centres in rural New South Wales. Like MHEC-RAP, these centres will provide rapid, safe, and contextually relevant assessments and advice to support local health professionals to manage mental health emergencies in the smaller rural emergency departments. Sharing the application of this methodology and research activity may help to improve access to and future evaluations of telehealth and telepsychiatry services for others around the globe.

Keywords: access, emergency, mental health, rural, time and motion

Procedia PDF Downloads 216
174 Roboweeder: A Robotic Weeds Killer Using Electromagnetic Waves

Authors: Yahoel Van Essen, Gordon Ho, Brett Russell, Hans-Georg Worms, Xiao Lin Long, Edward David Cooper, Avner Bachar

Abstract:

Weeds reduce farm and forest productivity, invade crops, smother pastures, and some can harm livestock. Farmers spend a significant amount of money to control weeds by means of biological, chemical, cultural, and physical methods. To address the global agricultural labor shortage and remove poisonous chemicals, a fully autonomous, eco-friendly, and sustainable weeding technology is developed. This takes the form of a weeding robot, ‘Roboweeder’. Roboweeder includes a four-wheel-drive self-driving vehicle, a 4-DOF robotic arm mounted on top of the vehicle, an electromagnetic wave generator (magnetron) mounted on the “wrist” of the robotic arm, 48V battery packs, and a control/communication system. Cameras are mounted on the front and two sides of the vehicle. Using image processing and recognition, distinct types of weeds are detected before being eliminated. Electromagnetic wave technology is applied to heat individual weeds and clusters dielectrically, causing them to wilt and die. The 4-DOF robotic arm was modeled mathematically based on its structure/mechanics, each joint’s load, the characteristics of the brushless DC motors and worm gears, forward kinematics, and inverse kinematics. A Proportional-Integral-Derivative (PID) control algorithm is used to control the robotic arm’s motion to ensure the waveguide aperture points at the detected weeds. GPS and machine vision are used to traverse the farm and avoid obstacles without the need for supervision. A Roboweeder prototype has been built. Multiple test trials show that Roboweeder is able to detect, point at, and kill the pre-defined weeds successfully, although further improvements are needed, such as reducing the “weed killing” time and developing a new waveguide with a smaller aperture to avoid killing surrounding crops. 
This technology changes the tedious, time-consuming, and expensive weeding process, and allows farmers to grow more, go organic, and eliminate operational headaches. A patent on this technology is pending.
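The joint-level control loop described above can be sketched with a textbook discrete PID controller. This is an illustrative sketch only, not the authors' implementation: the gains, time step, and the simplistic first-order joint model are all hypothetical.

```python
class PID:
    """Discrete Proportional-Integral-Derivative controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive one joint angle toward a 30-degree setpoint using a toy
# first-order joint response (hypothetical plant, for illustration).
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(2000):
    command = pid.update(30.0, angle)
    angle += command * 0.01   # toy joint dynamics
print(round(angle, 1))        # settles near the 30-degree setpoint
```

In the real arm, one such loop per joint would track the angles produced by the inverse-kinematics solution so the waveguide aperture stays pointed at the detected weed.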

Keywords: autonomous navigation, machine vision, precision heating, sustainable and eco-friendly

Procedia PDF Downloads 219
173 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data

Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira

Abstract:

Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, the coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. This service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, integrating data from the Copernicus satellite and drone/unmanned aerial vehicles, validated by existing online in-situ data. Since WORSICA is operational using the European Open Science Cloud (EOSC) computational infrastructures, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, but some usage costs may be applied, depending on the type of computational resources needed by each application/user. 
The service has three main sub-services: i) coastline detection; ii) inland water detection; and iii) water leak detection in irrigation networks. In the present study, an application of the service to the Óbidos lagoon in Portugal is shown, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service has several distinct methodologies implemented, based on the computation of water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with tidal data obtained from the FES model, the system can estimate a coastline at the corresponding level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful in several intervention areas: i) emergency, by providing fast access to inundated areas to support emergency rescue operations; ii) support of management decisions on hydraulic infrastructure operation to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.
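Of the water indexes mentioned above, the simplest is the McFeeters NDWI, computed per pixel from the green and near-infrared reflectances: NDWI = (Green − NIR) / (Green + NIR). Water pixels come out positive because water reflects green light but strongly absorbs near-infrared. The sketch below uses hypothetical reflectance values and is not WORSICA code:

```python
def ndwi(green, nir):
    """McFeeters Normalized Difference Water Index for one pixel."""
    return (green - nir) / (green + nir)

water_pixel = ndwi(green=0.30, nir=0.05)  # open water: green dominates
land_pixel = ndwi(green=0.10, nir=0.40)   # vegetation/soil: NIR dominates
print(round(water_pixel, 2), round(land_pixel, 2))  # 0.71 -0.6
```

In practice the same formula is applied band-wise to whole satellite rasters, and a threshold (often near zero) separates water from land before the coastline is extracted.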

Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC

Procedia PDF Downloads 108
172 Using The Flight Heritage From >150 Electric Propulsion Systems To Design The Next Generation Field Emission Electric Propulsion Thrusters

Authors: David Krejci, Tony Schönherr, Quirin Koch, Valentin Hugonnaud, Lou Grimaud, Alexander Reissner, Bernhard Seifert

Abstract:

In 2018 the NANO thruster became the first Field Emission Electric Propulsion (FEEP) system ever to be verified in space in an In-Orbit Demonstration mission conducted together with Fotec. Since then, 160 additional ENPULSION NANO propulsion systems have been deployed in orbit on 73 different spacecraft across multiple customers and missions. These missions included a variety of different satellite bus sizes ranging from 3U Cubesats to >100kg buses, and different orbits in Low Earth Orbit and Geostationary Earth orbit, providing an abundance of on orbit data for statistical analysis. This large-scale industrialization and flight heritage allows for a holistic way of gathering data from testing, integration and operational phases, deriving lessons learnt over a variety of different mission types, operator approaches, use cases and environments. Based on these lessons learnt a new generation of propulsion systems is developed, addressing key findings from the large NANO heritage and adding new capabilities, including increased resilience, thrust vector steering and increased power and thrust level. Some of these successor products have already been validated in orbit, including the MICRO R3 and the NANO AR3. While the MICRO R3 features increased power and thrust level, the NANO AR3 is a successor of the heritage NANO thruster with added thrust vectoring capability. 5 NANO AR3 have been launched to date on two different spacecraft. This work presents flight telemetry data of ENPULSION NANO systems and onorbit statistical data of the ENPULSION NANO as well as lessons learnt during onorbit operations, customer assembly, integration and testing support and ground test campaigns conducted at different facilities. We discuss how transfer of lessons learnt and operational improvement across independent missions across customers has been accomplished. 
Building on these lessons learnt and this extensive heritage, we present the design of a new generation of propulsion systems that increases the power and thrust level of FEEP systems to address larger spacecraft buses.

Keywords: FEEP, field emission electric propulsion, electric propulsion, flight heritage

Procedia PDF Downloads 67
171 Experimental Quantification of the Intra-Tow Resin Storage Evolution during RTM Injection

Authors: Mathieu Imbert, Sebastien Comas-Cardona, Emmanuelle Abisset-Chavanne, David Prono

Abstract:

Short-cycle-time Resin Transfer Molding (RTM) applications are of great interest for the mass production of automotive or aeronautical lightweight structural parts. During the RTM process, the two components of a resin are mixed on-line and injected into the cavity of a mold where a fibrous preform has been placed. Injection and polymerization occur simultaneously in the preform, inducing evolutions of temperature, degree of cure, and viscosity that in turn affect flow and curing. In order to adjust the processing conditions to reduce the cycle time, it is therefore essential to understand and quantify the physical mechanisms occurring in the part during injection. In a previous study, a dual-scale simulation tool was developed to help determine the optimum injection parameters. This tool allows finely tracking the repartition of the resin and the evolution of its properties during reactive injections with on-line mixing. Tows and channels of the fibrous material are considered separately to deal with the consequences of the dual-scale morphology of continuous-fiber textiles. The simulation tool reproduces the unsaturated area at the flow front, generated by the tow/channel difference of permeability. Resin “storage” in the tows after saturation is also taken into account, as it may significantly affect the repartition and evolution of the temperature, degree of cure, and viscosity in the part during reactive injections. The aim of the current study is, through experiments, to understand and quantify the “storage” evolution in the tows in order to adjust and validate the numerical tool. The presented study is based on four experimental repeats conducted on three different types of textiles: a unidirectional Non-Crimp Fabric (NCF), a triaxial NCF, and a satin weave. Model fluids, dyes, and image analysis are used to study, quantitatively, the resin flow in the saturated area of the samples.
Textile characteristics affecting the resin “storage” evolution in the tows are also analyzed. Finally, fully coupled on-line mixing reactive injections are conducted to validate the numerical model.
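As a rough illustration of why the tow/channel permeability contrast produces an unsaturated zone at the flow front, a 1-D Darcy model under constant injection pressure can be sketched as follows. All numerical values are hypothetical, chosen only to reflect a typical two-orders-of-magnitude permeability contrast; this is not the dual-scale simulation tool described in the abstract:

```python
import math

def front_position(t, K, delta_p, phi, mu):
    """1-D Darcy flow front position under constant injection pressure:
    L(t) = sqrt(2 * K * dP * t / (phi * mu))."""
    return math.sqrt(2.0 * K * delta_p * t / (phi * mu))

# Hypothetical values: channel vs. intra-tow permeability differ by ~2 orders
K_channel, K_tow = 1e-10, 1e-12   # m^2
delta_p = 2e5                     # Pa, injection pressure
phi, mu = 0.5, 0.1                # porosity (-), viscosity (Pa.s)

t = 60.0  # s
L_ch = front_position(t, K_channel, delta_p, phi, mu)
L_tow = front_position(t, K_tow, delta_p, phi, mu)
print(f"channel front: {L_ch:.3f} m, tow front: {L_tow:.3f} m")
```

Since the front position scales with the square root of permeability, a 100x contrast leaves the tow front lagging the channel front by a factor of 10, which is the origin of the unsaturated region behind the macroscopic flow front.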

Keywords: experimental, on-line mixing, high-speed RTM process, dual-scale flow

Procedia PDF Downloads 154
170 Optimization of Maintenance of PV Module Arrays Based on Asset Management Strategies: Case of Study

Authors: L. Alejandro Cárdenas, Fernando Herrera, David Nova, Juan Ballesteros

Abstract:

This paper presents a methodology to optimize the maintenance of grid-connected photovoltaic systems, considering the cleaning and module replacement periods based on an asset management strategy. The methodology is based on the analysis of the energy production of the PV plant, the energy feed-in tariff, and the cost of cleaning and replacing the PV modules, with the overall revenue received as the optimization variable. The methodology is evaluated in a case study of a 5.6 kWp solar PV plant located on the Bogotá campus of the Universidad Nacional de Colombia. The asset management strategy implemented consists of assessing the PV modules through visual inspection, energy performance analysis, soiling, and degradation. During the visual inspection of the plant, the general condition of the modules and the structure is assessed, identifying dust deposition, visible fractures, and water accumulation on the bottom. The energy performance analysis is performed with the energy production reported by the monitoring systems, compared with the values estimated in simulation. The soiling analysis uses the soiling rate due to dust accumulation, which can be modelled as a black box with an exponential function dependent on historical soiling values. The soiling rate is calculated from two years of energy-generation data from the photovoltaic plant on the campus of the Universidad Nacional de Colombia. Additionally, the temperature degradation of the PV modules is assessed by estimating the cell temperature from parameters such as ambient temperature and wind speed. The medium-term energy decrease of the PV modules is assessed within the asset management strategy by calculating a health index to determine the replacement period of the modules due to degradation. This study proposes a tool for decision-making related to the maintenance of photovoltaic systems.
This is particularly relevant given the projected increase in solar photovoltaic installations in power systems, associated with the commitments made in the Paris Agreement to reduce CO2 emissions. In the Colombian context, it is estimated that by 2030, 12% of the installed power capacity will be solar PV.
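As a minimal sketch of the kind of trade-off such a methodology resolves, the snippet below grid-searches the cleaning interval that maximizes average daily net revenue, assuming output decays exponentially between cleanings. The soiling model and every parameter value here are hypothetical illustrations, not figures from the case study:

```python
import math

def cycle_net_revenue_rate(T, E_daily, tariff, clean_cost, r):
    """Average daily net revenue for a cleaning period of T days, assuming
    soiling reduces output as exp(-r * t) between cleanings (hypothetical model)."""
    # integral of E_daily * exp(-r t) dt over [0, T] = E_daily * (1 - exp(-r T)) / r
    energy = E_daily * (1.0 - math.exp(-r * T)) / r
    return (energy * tariff - clean_cost) / T

# Hypothetical parameters for a small rooftop plant
E_daily, tariff = 22.0, 0.12    # kWh/day when clean, USD/kWh
clean_cost, r = 15.0, 0.004     # USD per cleaning, soiling rate in 1/day

best_T = max(range(5, 365),
             key=lambda T: cycle_net_revenue_rate(T, E_daily, tariff, clean_cost, r))
print(f"optimal cleaning interval: {best_T} days")
```

A higher soiling rate or a cheaper cleaning shortens the optimal interval, which is the qualitative behavior the revenue-based optimization in the paper captures.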

Keywords: asset management, PV module, optimization, maintenance

Procedia PDF Downloads 21
169 Structural Fluxionality of Luminescent Coordination Compounds with Lanthanide Ions

Authors: Juliana A. B. Silva, Caio H. T. L. Albuquerque, Leonardo L. dos Santos, Cristiane K. Oliveira, Ivani Malvestiti, Fernando Hallwass, Ricardo L. Longo

Abstract:

Complexes with lanthanide ions have been extensively studied due to their applications as luminescent, magnetic, and catalytic materials, as molecular or extended crystals, thin films, glasses, polymeric matrices, ionic liquids, and in solution. NMR chemical shift data in solution have been reported and suggest fluxional structures in a wide range of coordination compounds with rare earth ions. However, the fluxional mechanisms for these compounds are still not established. This structural fluxionality may affect the photophysical, catalytic, and magnetic properties in solution. Thus, understanding the structural interconversion mechanisms may aid the design of coordination compounds with, for instance, improved (electro)luminescence, catalytic, and magnetic behaviors. The [Eu(btfa)₃bipy] complex, where btfa = 4,4,4-trifluoro-1-phenyl-1,3-butanedionate and bipy = 2,2’-bipyridyl, has a well-defined X-ray crystallographic structure, and preliminary 1H NMR data suggested structural fluxionality. Thus, we have investigated a series of coordination compounds with lanthanide ions, [Ln(btfa)₃L], where Ln = La, Eu, Gd, or Yb and L = bipy or phen (phen = 1,10-phenanthroline), using a combined theoretical-experimental approach. These complexes were synthesized and fully characterized, and detailed NMR measurements were obtained. They were also studied by quantum chemical computational methods (DFT-PBE0). The aim was to determine the factors in the structure of these compounds that favor or disfavor fluxional behavior.
Measurements of the 1H NMR signals at variable temperature in CD₂Cl₂ of the [Eu(btfa)₃L] complexes suggest that these compounds have fluxional structures: the crystal structure has non-equivalent btfa ligands that should lead to non-equivalent hydrogen atoms, and thus to more signals in the NMR spectra than those obtained at room temperature, where all hydrogen atoms of the btfa ligands are equivalent and the phen ligand has an effective vertical symmetry plane. For the [Eu(btfa)₃bipy] complex, the broadening of the signals at –70 °C provides a lower bound for the coalescence temperature, which indicates that the energy barriers involved in the structural interconversion mechanisms are quite small. These barriers, and consequently the coalescence temperature, depend on the radius of the lanthanide ion as well as on its paramagnetic effects. The PBE0-calculated structures are in very good agreement with the crystallographic data and, for the [Eu(btfa)₃bipy] complex, this method provided several distinct structures with almost the same energy. However, the energy barriers for structural interconversion via dissociative pathways were found to be quite high and could not explain the experimental observations. In contrast, the pseudo-rotation pathways involving the btfa and bipy ligands have very small activation barriers, in excellent agreement with the NMR data. The results also showed an increase in the activation barrier along the lanthanide series, due to the decrease of the ionic radii and the consequent increase of steric effects. TD-DFT calculations showed a dependence of the ligand donor state energy on the structure of the [Eu(btfa)₃phen] complex, which can affect the energy transfer rates and the luminescence. The energy required to promote the structural fluxionality may also enhance luminescence quenching in solution. These results can aid in the design of more luminescent compounds and more efficient devices.
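The link between a coalescence temperature and an interconversion barrier can be sketched with the standard two-site exchange and Eyring relations. This is a textbook estimate, not the analysis performed in this work, and the shift difference and coalescence temperature used below are hypothetical:

```python
import math

R = 8.314462618       # gas constant, J/(mol K)
k_B = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s

def coalescence_rate(delta_nu):
    """Exchange rate at coalescence for equally populated two-site exchange:
    k_c = pi * delta_nu / sqrt(2), with delta_nu the shift difference in Hz."""
    return math.pi * delta_nu / math.sqrt(2.0)

def eyring_barrier(T_c, k_c):
    """Free-energy barrier (J/mol) from the Eyring equation:
    Delta_G = R * T_c * ln(k_B * T_c / (h * k_c))."""
    return R * T_c * math.log(k_B * T_c / (h * k_c))

# Hypothetical inputs: coalescence near -70 C with a 200 Hz shift difference
T_c = 203.15                       # K
k_c = coalescence_rate(200.0)      # s^-1
dG = eyring_barrier(T_c, k_c)
print(f"k_c = {k_c:.1f} s^-1, Delta G = {dG / 1000:.1f} kJ/mol")
```

With these inputs the barrier comes out below ~40 kJ/mol, consistent with the abstract's observation that a coalescence temperature at or below –70 °C implies quite small interconversion barriers.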

Keywords: computational chemistry, lanthanide-based compounds, NMR, structural fluxionality

Procedia PDF Downloads 182
168 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG-16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into the VGG-16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of the pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively.
This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
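A minimal sketch of the stage-2 geometric features follows, assuming pixel coordinates for the pupil center and one eye landmark. The function name and the coordinate values are illustrative, not the authors' implementation:

```python
import math

def misalignment_features(pupil, landmark):
    """Distance between the pupil center and an eye landmark, plus the angles
    (degrees) that the connecting vector makes with the horizontal and
    vertical image axes."""
    dx = pupil[0] - landmark[0]
    dy = pupil[1] - landmark[1]
    dist = math.hypot(dx, dy)
    angle_h = math.degrees(math.atan2(dy, dx))   # vs. horizontal axis
    angle_v = 90.0 - angle_h                     # vs. vertical axis
    return dist, angle_h, angle_v

# Hypothetical pixel coordinates: pupil center and inner eye corner
dist, ah, av = misalignment_features((120.0, 85.0), (100.0, 80.0))
print(f"distance = {dist:.2f} px, horizontal angle = {ah:.1f} deg, "
      f"vertical angle = {av:.1f} deg")
```

Comparing such distances and angles between the two eyes is what lets the method characterize both the degree and the direction (horizontal vs. vertical) of the misalignment.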

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 63
167 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed

Authors: Marion G. Ben-Jacob, David Wang

Abstract:

There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering, and Mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote the success of our students. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two pedagogical methods and report the results of a study on this comparison. Math is the foundation for science, technology, and engineering. In STEM, it is generally used to find patterns in data. These patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned about, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. When we analyze data, we are using math to find the statistical correlation between a cause and an effect. Professionals who use math in this way include data scientists, biologists, and geologists. Without math, most technology would not be possible. Math is the basis of binary representation, and without programming, there is only hardware. Addition, subtraction, multiplication, and division are also used in almost every program written. Mathematical algorithms are inherent in software as well.
Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment. They also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab; advanced computer software aids their research and production processes by modeling theoretical synthesis techniques and the properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines. Pedagogical research on formative strategies and the necessary topics to be covered is essential.

Keywords: emporium model, mathematics, pedagogy, STEM

Procedia PDF Downloads 54
166 FEM and Experimental Modal Analysis of Computer Mount

Authors: Vishwajit Ghatge, David Looper

Abstract:

Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger/heavier engines, larger cooling systems, and emissions after-treatment systems, among other changes. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibrating frequency of the engine matches the system frequency, high resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. It includes an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, the two brackets can impact each other under off-road conditions, causing a high shock input to the computer parts. This added failure mode requires validating the existing mount design for the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was conducted, and the actual frequency responses were observed and recorded. The results clearly revealed that at the resonance frequency, the brackets were colliding and potentially damaging computer parts. To solve this issue, spring mounts of different stiffness were modeled in ANSYS software, and the resonant frequency was determined.
Increasing the stiffness of the system shifted the resonant frequency away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and then experimentally validated.
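The design rationale, shifting the resonance upward by stiffening the mount, follows from the single-degree-of-freedom natural frequency formula. A sketch with hypothetical mass and stiffness values (the actual mount properties are not given in the abstract):

```python
import math

def natural_frequency_hz(k, m):
    """Undamped single-DOF natural frequency: f_n = sqrt(k / m) / (2 * pi),
    with stiffness k in N/m and mass m in kg."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# Hypothetical values: a 15 kg computer enclosure on candidate spring mounts
m = 15.0                       # kg
for k in (5e4, 1e5, 2e5):      # mount stiffness, N/m
    print(f"k = {k:.0e} N/m -> f_n = {natural_frequency_hz(k, m):.1f} Hz")
```

Because f_n scales with the square root of stiffness, quadrupling k only doubles the natural frequency, which is why multiple stiffness iterations were needed to move the resonance clear of the engine's excitation window.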

Keywords: experimental modal analysis, FEM Modal Analysis, frequency, modal analysis, resonance, vibration

Procedia PDF Downloads 305