Search results for: empirical cumulant-generating function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7361


851 The Gezi Park Protests in the Columns

Authors: Süleyman Hakan Yilmaz, Yasemin Gülsen Yilmaz

Abstract:

The Gezi Park protests of 2013 significantly changed the Turkish agenda, and their effects have been felt ever since. The protests, which rapidly spread throughout the country, were triggered by a proposal to rebuild the Ottoman Army Barracks as a shopping mall on Gezi Park, located in Istanbul’s Taksim neighbourhood, despite the opposition of several NGOs, and by the cutting of trees in the park for this purpose. Once news that construction vehicles had entered the park on May 27 spread on social media, activists moved into the park to stop the demolition, and the police used disproportionate force against them. With this police intervention and then prime minister Tayyip Erdoğan's insistent statements about the construction plans, the protests turned into anti-government demonstrations, which then spread to the rest of the country, mainly to big cities such as Ankara and Izmir. According to the Ministry of Internal Affairs’ reports of June 23rd, 2.5 million people joined the demonstrations in 79 provinces, that is, all of them except Bayburt and Bingöl, while even more people shared their opinions via social networks. As a result of these events, 8 civilians and 2 security personnel lost their lives, namely police chief Mustafa Sarı, police officer Ahmet Küçükdağ, and citizens Mehmet Ayvalıtaş, Abdullah Cömert, Ethem Sarısülük, Ali İsmail Korkmaz, Ahmet Atakan, Berkin Elvan, Burak Can Karamanoğlu, Mehmet İstif, and Elif Çermik, and 8,163 more people were injured. Besides being a turning point in Turkish history, the Gezi Park protests also had broad repercussions in both Turkish and global media, which focused on Turkey throughout the events. Our study conducts a content analysis of three Turkish newspapers with varying ideological standpoints, Hürriyet, Cumhuriyet, and Yeni Şafak, in order to reveal their basic approach to column writing in the context of the Gezi Park protests. Column content relating to the Gezi protests was collected and analysed for this purpose. The aim of this study is to understand the social effects of the Gezi Park protests through media samples with varying political attitudes towards news coverage.

Keywords: Gezi Park, media, news casting, columns

Procedia PDF Downloads 433
850 Simulation of Colombian Exchange Rate to Cover the Exchange Risk Using Financial Options Like Hedge Strategy

Authors: Natalia M. Acevedo, Luis M. Jimenez, Erick Lambis

Abstract:

Imperfections in the capital market are used to argue for the relevance of the corporate risk management function. With a corporate hedge, the value of the company is increased by reducing the volatility of expected cash flows, making it possible to face lower bankruptcy costs and financial difficulties without sacrificing the tax advantages of debt financing. To protect the cash flows of Colombian exporting firms from exchange rate risk, this study uses financial options on the exchange rate between the peso and the dollar as a financial hedge. A hedging strategy is designed for an exporting company in Colombia with the objective of limiting the impact of fluctuations: if the exchange rate falls, the number of Colombian pesos the company obtains from its exports is less than agreed. The exchange rate of Colombia is measured by the TRM (Representative Market Rate), the number of Colombian pesos per American dollar. First, the TRM is modelled as a Geometric Brownian Motion; with this, the price path is simulated using Monte Carlo simulations, and the mean TRM is found for three, six, and twelve months. For financial hedging, currency options were used. The 6-month projection was covered with European-style currency options with a strike price of $2,780.47 for each month; this value corresponds to the last value of the historical TRM. In the settlement of the options in each month, the price paid for the premium, calculated with the Black-Scholes method for currency options, was taken into account. Finally, with the price modelling and the Monte Carlo simulation, the effect of exchange hedging with options on the exporting company was determined by estimating the unit price at which dollars were exchanged in the scenario without coverage and in the scenario with coverage. After analysing the scenarios, it is determined that the TRM will follow a bullish trend and the exporting firm will be affected positively, because it will receive more pesos for each dollar. The results show that the financial options manage to reduce the exchange risk. The expected value with coverage is close to the expected value without coverage, but the 5% percentile with coverage is greater than without coverage. This indicates that in the worst scenarios the exporting companies will obtain better prices for the sale of their currency if they hedge.
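The simulation pipeline described in the abstract (GBM paths for the TRM, a European put as the hedge) can be sketched as follows. The drift and volatility values are illustrative placeholders, not the study's calibrated parameters, and the option premium is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)

# Parameters: S0 is the abstract's last historical TRM; drift and volatility
# are illustrative assumptions, not the study's calibrated values.
S0 = 2780.47       # COP per USD
mu, sigma = 0.03, 0.12
T, n_steps, n_paths = 0.5, 126, 10_000   # six-month horizon, daily steps

dt = T / n_steps
# GBM step: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

# The exporter buys a put struck at K: the right to sell dollars at K pesos
K = S0
payoff = np.maximum(K - S[:, -1], 0.0)

# Pesos received per dollar with the hedge (premium ignored for brevity):
# S_T + max(K - S_T, 0) = max(S_T, K), so the rate never falls below K
effective_rate = S[:, -1] + payoff
```

The floor at the strike is exactly the property the abstract reports: the hedged 5% percentile sits above the unhedged one, because the put truncates the downside of the TRM distribution.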

Keywords: currency hedging, futures, geometric Brownian motion, options

Procedia PDF Downloads 131
849 Measuring Digital Literacy in the Chilean Workforce

Authors: Carolina Busco, Daniela Osses

Abstract:

The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. No methodological instruments are available in Chile to measure the workforce’s digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To develop a digital literacy model of indicators and a research instrument for this purpose, along with an exploratory factor analysis of the data, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also carried out with six experts, whose observations were incorporated into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a four-point Likert scale ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 Likert-scale items. The KMO statistic showed a value of 0.626, indicating a medium level of correlation, whereas Bartlett’s test yielded a significance value of less than 0.05, and Cronbach’s Alpha was 0.618. Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students; middle-class university students; Ph.D. professors; low-income working women; elderly individuals; and a group of rural workers. The digital divide and its social and economic correlates are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately acts as the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills differs radically when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
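The reliability figure reported above (Cronbach's Alpha of 0.618) is computed from the item-score matrix; a minimal sketch of the statistic with hypothetical Likert data (not the study's dataset):

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency estimate for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical four-point Likert responses, 6 respondents x 4 items
scores = np.array([[1, 2, 1, 2],
                   [2, 2, 2, 3],
                   [3, 3, 3, 3],
                   [4, 4, 3, 4],
                   [2, 1, 2, 2],
                   [3, 4, 4, 3]], dtype=float)
alpha = cronbach_alpha(scores)
```

Values near 1 indicate that the items move together; the study's 0.618 is on the low side of conventional acceptability, consistent with the exploratory framing of the instrument.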

Keywords: digital literacy, digital society, workforce digitalization, digital skills

Procedia PDF Downloads 67
848 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand

Authors: Wen Liu, Errol Haarhoff, Lee Beattie

Abstract:

In 2010, New Zealand’s central government re-organised local government arrangements in Auckland by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years, as expressed in the first-ever spatial plan for the region, the Auckland Plan (2012). The Auckland Plan supports a compact city model by concentrating the larger part of future urban growth and development in and around existing and proposed transit centres, with the intention of making Auckland a globally competitive city and ‘the most liveable city in the world’. Turning that vision into reality is operationalized through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad literature on urban growth management, one significant issue stands out regarding intensification: the ‘gap’ between strategic planning and what is actually achieved, evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound importance, yet has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up strategies to deliver intensification across diverse arenas. Nonetheless, planning policy by itself does not necessarily achieve the envisaged objectives; delivering a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Although the Auckland Plan provides a wide-ranging strategic context, its actual delivery depends on the Unitary Plan, and questions have been asked about whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals, including delivering a ‘quality compact city’ and residential intensification, can be achieved in practice. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher-density development. It explores the process of plan development and the plan making and implementation frameworks of the first-ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and to consider whether these will facilitate decision-making processes that realize the anticipated intensive urban development.

Keywords: urban intensification, sustainable development, plan making, governance and implementation

Procedia PDF Downloads 557
847 Development of a Framework for Assessment of Market Penetration of Oil Sands Energy Technologies in Mining Sector

Authors: Saeidreza Radpour, Md. Ahiduzzaman, Amit Kumar

Abstract:

Alberta’s mining sector consumed 871.3 PJ in 2012, which is 67.1% of the energy consumed in the industry sector and about 40% of all the energy consumed in the province of Alberta. Natural gas, petroleum products, and electricity supplied 55.9%, 20.8%, and 7.7%, respectively, of the total energy use in this sector. Oil sands mining and upgrading to crude oil make up most of the mining energy sector activities in Alberta. Crude oil is produced from the oil sands either by in situ methods or by the mining and extraction of bitumen from oil sands ore. In this research, the factors affecting oil sands production have been assessed, and a framework has been developed for the market penetration of new efficient technologies in this sector. Oil sands production volume is a complex function of many different factors, broadly categorized into technical, economic, political, and global clusters. The statistical analysis developed and implemented in this research ranks the key factors affecting oil sands production in Alberta as follows: global energy consumption (94% consistency), global crude oil price (86% consistency), and crude oil exports (80% consistency). A framework for modeling oil sands energy technologies’ market penetration (OSETMP) has been developed to cover the related technical, economic, and environmental factors in this sector. It has been assumed that the impact of political and social constraints is reflected in the model by changes in the global oil price or the crude oil price in Canada. The market shares of novel in situ mining technologies with low energy and water use are assessed and calculated in the market penetration framework; these include: 1) partial upgrading, 2) liquid addition to steam to enhance recovery (LASER), 3) solvent-assisted process (SAP), also called solvent-cyclic steam-assisted gravity drainage (SC-SAGD), 4) cyclic solvent, 5) heated solvent, 6) wedge well, 7) enhanced modified steam and gas push (EMSAGP), 8) electro-thermal dynamic stripping process (ET-DSP), 9) Harris electro-magnetic heating applications (EMHA), and 10) paraffin froth separation. The results of the study will show the penetration profile of these technologies over a long-term planning horizon.
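The keywords point to diffusion models for market penetration; a common choice for sketching a technology's penetration profile over a planning horizon is the Bass model. The innovation and imitation coefficients below are assumptions for illustration, not values from the OSETMP framework:

```python
import numpy as np

def bass_adoption(p, q, t):
    """Cumulative adoption fraction F(t) under the Bass diffusion model:
    F(t) = (1 - exp(-(p + q) t)) / (1 + (q / p) exp(-(p + q) t))."""
    e = np.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

t = np.arange(0, 31)                      # 30-year planning horizon (years)
F = bass_adoption(p=0.01, q=0.35, t=t)    # assumed innovation/imitation rates
```

The resulting S-curve (slow uptake, rapid imitation-driven growth, saturation) is the typical shape of the penetration profiles such frameworks report for competing technologies.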

Keywords: diffusion models, market penetration, oil sands, mining sector

Procedia PDF Downloads 330
846 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood, and it is imperative to explore their unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) for predicting the parameters of a miniaturized SRM during its conceptual design phase. First, the design variables and objectives are constrained in a lumped parameter model (LPM) of the SRM, which leads MOEAs into local optima. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) in MOEAs is usually large, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model because of its simplifying assumptions, so CFD simulations or experiments are required to compare and verify the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is therefore a lengthy process, and its results are not precise enough. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model trained on 3D numerical simulations. In the design loop, the original LPM is replaced by the surrogate model. Each case uses the same MOEAs; the calculation times of the two models are compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction of miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model provides faster and more accurate parameter prediction for an initial design scheme. Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model (the LPM). This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
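The surrogate idea can be illustrated with a minimal sketch: an expensive simulator is replaced, after a one-off training phase, by a small neural network that the optimizer then queries cheaply. The response surface and network size here are hypothetical stand-ins, not the paper's CFD model or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive 3D simulation (hypothetical smooth response)
def expensive_sim(X):
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# One-off batch of "full simulations" used as training data
X = rng.uniform(-1, 1, size=(400, 2))
y = expensive_sim(X)[:, None]

# Minimal one-hidden-layer ANN trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(4000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2                # surrogate output
    err = pred - y
    # Backpropagated gradients of the mean-squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    gH = err @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH / len(X); gb1 = gH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The MOEA now evaluates candidates via the cheap surrogate
def surrogate(X):
    return np.tanh(X @ W1 + b1) @ W2 + b2

mae = np.abs(surrogate(X) - y).mean()
```

Each surrogate call is a couple of matrix multiplications, so the many function evaluations a population-based MOEA needs become affordable; the training cost is paid once up front.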

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 90
845 Material Supply Mechanisms for Contemporary Assembly Systems

Authors: Rajiv Kumar Srivastava

Abstract:

Manufacturing complex products such as automobiles and computers requires a very large number of parts and sub-assemblies. The design of mechanisms for delivering these materials to the point of assembly is an important manufacturing system and supply chain challenge. Different approaches to this problem have evolved for assembly lines designed to make large volumes of standardized products. However, contemporary assembly systems are required to concurrently produce a variety of products using approaches such as mixed model production, and at times even mass customization. In this paper, we examine material supply approaches for variety production in moderate to large volumes. The conventional approach for material delivery to high-volume assembly lines is to supply and stock materials line-side. However, for certain materials, especially when the same or similar items are used along the line, it is more convenient to supply materials in kits. Kitting becomes preferable when lines concurrently produce multiple products in mixed model mode, since space requirements increase with product/part variety. At times such kits may travel along with the product, while in other situations it may be better to have delivery- and station-specific kits rather than product-based kits. Further, in some mass customization situations it may even be better to have a single delivery and assembly station, to which an entire kit is delivered for fitment, rather than a conventional assembly line. Finally, in low-to-moderate volume assembly, such as engineered machinery, it may be logistically more economical to gather materials in an order-specific kit prior to launching final assembly. We have studied material supply mechanisms supporting assembly systems through case studies of firms with different combinations of volume and variety/customization. We find that the appropriate approach tends to be a hybrid between direct line supply and different kitting modes, with the best mix being a function of the manufacturing and supply chain environment, as well as space and handling considerations. In our continuing work, we are studying these scenarios further through descriptive models, progressing towards prescriptive models that help achieve the optimal approach by capturing the trade-offs between inventory, material handling, space, and efficient line supply.
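The line-supply versus kitting trade-off described above can be captured in a toy cost model. All coefficients here are illustrative assumptions, not data from the case studies, but they show the crossover as part variety grows:

```python
# Toy per-station cost comparison: line-side stocking vs kitting.
# All parameters are illustrative assumptions, not data from the study.

def line_side_cost(n_variants, space_cost_per_sku=2.0,
                   handling_per_unit=0.5, units=100):
    # Line-side space grows with the number of part variants stocked
    return n_variants * space_cost_per_sku + units * handling_per_unit

def kitting_cost(n_variants, kit_prep_per_unit=0.9,
                 space_cost_fixed=5.0, units=100):
    # Kit preparation adds labour per unit, but line-side space stays flat
    return space_cost_fixed + units * kit_prep_per_unit

# As variety grows, kitting eventually undercuts line-side supply
for n in (5, 20, 80):
    better = "kitting" if kitting_cost(n) < line_side_cost(n) else "line-side"
```

With these assumed coefficients, line-side supply wins at low variety and kitting wins at high variety, which is exactly the kind of crossover a prescriptive model would locate for a given plant.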

Keywords: assembly systems, kitting, material supply, variety production

Procedia PDF Downloads 226
844 Effectiveness of Centromedullary Fixation by Metaizeau Technique in Challenging Pediatric Fractures

Authors: Mohammad Arshad Ikram

Abstract:

We report three cases of challenging fractures in children treated by intramedullary fixation using the Metaizeau method, achieving anatomical reduction with excellent clinical results. Jean-Paul Metaizeau described centromedullary fixation for the radial neck in 1980 using K-wires. Radial neck fractures are uncommon in children, and treatment of severely displaced fractures is always challenging. Closed reduction techniques are more popular than open reduction due to the lower risk of complications. The Metaizeau technique of closed reduction with centromedullary pinning is a commonly preferred method of treatment. We present two cases with severely displaced radial neck fractures treated by this method that achieved sound union; the anatomical position of the radial head and full function were observed two months after surgery. Proximal humerus fractures are another uncommon injury in children, accounting for less than 5% of all pediatric fractures. Most of these injuries occur through the growth plate because of its relative weakness. Salter-Harris type I is commonly seen in the younger age group, whereas types II and III occur in older children and adolescents. In contrast to adults, traumatic glenohumeral dislocation is infrequently observed among children. A combination of proximal humerus fracture and glenohumeral dislocation is extremely rare, occurring in less than 2% of the pediatric population, and its management is always challenging. Reported treatments range from closed reduction with or without internal fixation to open reduction with internal fixation. Children who had closed reduction with centromedullary fixation by the Metaizeau method showed excellent results, with the return of full shoulder movement in a short time and without complications. We present the case of a child with anterior dislocation of the shoulder associated with a completely displaced proximal humerus metaphyseal fracture. The fracture was managed by closed reduction and then fixation with two centromedullary K-wires using the Metaizeau method, achieving anatomical reduction of both the fracture and the dislocation. This method of treatment enabled us to achieve excellent radiological and clinical results in a short time.

Keywords: glenohumeral, Metaizeau method, pediatric fractures, radial neck

Procedia PDF Downloads 105
843 Analysis of the Operating Load of Gas Bearings in the Gas Generator of the Turbine Engine during a Deceleration to Dash Maneuver

Authors: Zbigniew Czyz, Pawel Magryta, Mateusz Paszko

Abstract:

The paper discusses the loads acting on the drive unit of an unmanned helicopter during a deceleration to dash maneuver. Special attention was given to the loads on the bearings of the gas generator in the turbine engine with which the helicopter will be equipped. The analysis was based on speed changes as a function of time for a manned flight of the PZL W3-Falcon helicopter. The dependence of speed changes during the flight was approximated by the least squares method, and the corresponding accelerations were then determined. This enabled us to specify the forces acting on the bearings of the gas generator under static and dynamic conditions. The deceleration to dash maneuver starts in steady flight at a speed of 222 km/h with horizontal braking. When the speed reaches 92 km/h, the helicopter dynamically changes its inclination to reach maximum acceleration and near-maximum power, and holds them until the initial speed is regained. This type of maneuver is used because shots are ineffective at significant cruising speeds; it is therefore important to reduce speed to the optimum as soon as possible and, after taking the shot, to return to the initial (cruising) speed. In the deceleration to dash maneuver, we have to deal with the force of gravity of the rotor assembly, aerodynamic gas forces, and the forces caused by axial acceleration during the maneuver. While we can assume that the working components of the gas generator are designed so that the axial gas forces they create balance the aerodynamic effects, the remaining forces take values that result from the motion profile of the aircraft. Based on the analysis, we can compile the results. For this maneuver, the force of gravity (from the static calculations) equals 5.638 N for bearing A and 1.631 N for bearing B. As the overload coefficient k in this direction is 1, this force results solely from the weight of the rotor assembly. For this maneuver, the acceleration in the longitudinal direction reached a_max = 4.36 m/s2, so the overload coefficient k is 0.44. When we multiply the overload coefficient k by the weight of all gas generator components acting on the axial bearing, the force caused by axial acceleration during the deceleration to dash maneuver equals only 3.15 N. The results of the calculations are compared with other maneuvers, such as acceleration and deceleration, and jump up and jump down. This work was financed by the Polish Ministry of Science and Higher Education.
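The overload arithmetic in the abstract can be checked directly. Here g is assumed to be 9.81 m/s², and the component weight is back-calculated from the reported 3.15 N force rather than taken from the paper:

```python
# Checking the abstract's axial-load figures for the deceleration to dash maneuver.
g = 9.81          # m/s^2, standard gravity (assumed value)
a_max = 4.36      # m/s^2, longitudinal acceleration from the abstract
k = a_max / g     # overload coefficient, ~0.44 as reported

axial_force = 3.15        # N, reported force from axial acceleration
weight = axial_force / k  # implied weight of gas generator components
                          # acting on the axial bearing (back-calculated)
```

The implied weight on the axial bearing comes out to roughly 7.1 N, i.e. the axial inertial load during this maneuver is under half the static weight of the components involved.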

Keywords: gas bearings, helicopters, helicopter maneuvers, turbine engines

Procedia PDF Downloads 340
842 Development of the Squamate Egg Tooth on the Basis of Grass Snake Natrix natrix Studies

Authors: Mateusz Hermyt, Pawel Kaczmarek, Weronika Rupik

Abstract:

The egg tooth is a crucial structure during the hatching of lizards and snakes. In contrast to birds, turtles, crocodiles, and monotremes, the egg tooth of squamate reptiles is a true tooth, sharing common features of structure and development with all other vertebrate teeth. Due to its function, however, the egg tooth exhibits structural differences relative to regular teeth. External morphology seems to be important in the context of phylogenetic relationships within Squamata, but to date there is scarce information concerning the structure and development of the egg tooth at the submicroscopic level. In the presented studies, a detailed analysis of egg tooth development in the grass snake was performed using light (including fluorescence), transmission, and scanning electron microscopy. Heads of grass snake embryos were used in our studies. The grass snake is a common snake species occurring in most of Europe, including Poland. The grass snake is characterized by a single unpaired egg tooth (as in most squamates), in contrast to geckos and dibamids, which possess paired egg teeth. The studies show changes occurring at the level of external morphology and at the tissue and cellular levels of the differentiating egg tooth. During its development, the egg tooth changes its curvature: initially it faces directly downward, and in the course of differentiation it gradually shifts to a rostro-ventral orientation. Additionally, it forms conical dentinal protrusions on its sides. Histological analysis showed that egg tooth development proceeds in steps similar to those of regular teeth, passing through the initiation, bud, cap, and bell morphological stages. The analyses focused on describing morphological changes in the hard tissues (mainly dentin and predentin) of the egg tooth and in the cells of the enamel organ and associated tissues: the outer enamel epithelium, stratum intermedium, inner enamel epithelium, odontoblasts, and cells of the dental pulp.
All specimens used in the study were captured according to the Polish regulations concerning the protection of wild species. Permission was granted by the Local Ethics Commission in Katowice (41/2010; 87/2015) and the Regional Directorate for Environmental Protection in Katowice (WPN.6401.257.2015.DC).

Keywords: hatching, organogenesis, reptile, Squamata

Procedia PDF Downloads 180
841 Developing a Systemic Monoclonal Antibody Therapy for the Treatment of Large Burn Injuries

Authors: Alireza Hassanshahi, Xanthe Strudwick, Zlatko Kopecki, Allison J Cowin

Abstract:

Studies have shown that Flightless (Flii) is elevated in human wounds, including burns, and reducing the level of Flii is a promising approach for improving wound repair and reducing scar formation. The most effective approach has been to neutralise Flii activity using localized, intradermal application of function-blocking monoclonal antibodies. However, large surface area burns are difficult to treat by intradermal injection of therapeutics, so the aim of this study was to investigate whether a systemic injection of a monoclonal antibody against Flii could improve healing in mice following burn injury. Flii neutralizing antibodies (FnAbs) were labelled with Alexa Fluor 680 for biodistribution studies, and the healing effects of systemically administered FnAbs were assessed in mice with burn injuries. A partial thickness, 7% (70 mm2) total body surface area scald burn injury was created on the dorsal surface of mice (n=10/group), and 100 µL of Alexa Fluor 680-labelled FnAbs was injected into the intraperitoneal (IP) cavity at the time of injury. The burns were imaged on days 0, 1, 2, 3, 4, and 7 using the IVIS Lumina S5 Imaging System, and healing was assessed macroscopically, histologically, and using immunohistochemistry. Fluorescence radiant efficiency measurements showed that IP-injected Alexa Fluor 680-FnAbs localized at the site of burn injury from day 1 and remained there for the whole 7-day study. Burns treated with FnAbs showed a reduction in macroscopic wound area and an increased rate of epithelialization compared to controls. Immunohistochemistry for NIMP-R14 showed a reduction in the inflammatory infiltrate, while CD31/VEGF staining showed improved angiogenesis after systemic FnAb treatment. These results suggest that systemically administered FnAbs are active within the burn site and can improve healing outcomes.
The clinical application of systemically injected Flii monoclonal antibodies could therefore be a potential approach for promoting the healing of large surface area burns immediately after injury.

Keywords: biodistribution, burn, flightless, systemic, FnAbs

Procedia PDF Downloads 173
840 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz-resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC-Industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. 
Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m × 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
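The resonator tuning step described above can be sketched outside the Grasshopper/COMSOL toolchain using the classic lumped-parameter Helmholtz model; the neck dimensions, the 1.7r end correction, and the bisection search below are illustrative assumptions, not the paper's actual workflow.

```python
import math

C_AIR = 343.0  # speed of sound in air, m/s (assumed at ~20 C)

def helmholtz_resonance(neck_radius, neck_length, cavity_volume):
    """Resonance frequency (Hz) of a Helmholtz resonator.

    Uses the classic lumped model f = c/(2*pi) * sqrt(A / (V * L_eff)),
    with an assumed end correction of ~1.7*r for a flanged neck.
    """
    area = math.pi * neck_radius ** 2
    l_eff = neck_length + 1.7 * neck_radius
    return C_AIR / (2 * math.pi) * math.sqrt(area / (cavity_volume * l_eff))

def tune_cavity_volume(target_hz, neck_radius, neck_length,
                       v_lo=1e-6, v_hi=1e-2, tol=1e-12):
    """Bisection: find the cavity volume (m^3) whose resonance hits target_hz.

    Resonance frequency decreases monotonically with cavity volume, so a
    simple bisection over [v_lo, v_hi] converges.
    """
    while v_hi - v_lo > tol:
        mid = 0.5 * (v_lo + v_hi)
        if helmholtz_resonance(neck_radius, neck_length, mid) > target_hz:
            v_lo = mid  # resonance still too high -> need a larger cavity
        else:
            v_hi = mid
    return 0.5 * (v_lo + v_hi)
```

In the paper's workflow this role is played by a numerical optimizer driving a full COMSOL pressure-field simulation; the one-dimensional bisection above only illustrates the tune-geometry-to-frequency idea.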

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 159
839 Optimization of Metal Pile Foundations for Solar Power Stations Using Cone Penetration Test Data

Authors: Adrian Priceputu, Elena Mihaela Stan

Abstract:

Our research addresses a critical challenge in renewable energy: improving efficiency and reducing the costs associated with the installation of ground-mounted photovoltaic (PV) panels. The most commonly used foundation solution is metal piles - with various sections adapted to soil conditions and the structural model of the panels. However, direct foundation systems are also sometimes used, especially in brownfield sites. Although metal micropiles are generally the first design option, understanding and predicting their bearing capacity, particularly under varied soil conditions, remains an open research topic. CPT Method and Current Challenges: Metal piles are favored for PV panel foundations due to their adaptability, but existing design methods rely heavily on costly and time-consuming in situ tests. The Cone Penetration Test (CPT) offers a more efficient alternative by providing valuable data on soil strength, stratification, and other key characteristics with reduced resources. During the test, a cone-shaped probe is pushed into the ground at a constant rate. Sensors within the probe measure the resistance of the soil to penetration, divided into cone penetration resistance and shaft friction resistance. Despite some existing CPT-based design approaches for metal piles, these methods are often cumbersome and difficult to apply. They vary significantly due to soil type and foundation method, and traditional approaches like the LCPC method involve complex calculations and extensive empirical data. The method was developed by testing 197 piles on a wide range of ground conditions, but the tested piles were very different from the ones used for PV pile foundations, making the method less accurate and practical for steel micropiles. Project Objectives and Methodology: Our research aims to develop a calculation method for metal micropile foundations using CPT data, simplifying the complex relationships involved. 
The goal is to estimate the pullout bearing capacity of piles without additional laboratory tests, streamlining the design process. To achieve this, a case study site was selected which will serve for the development of an 80 ha solar power station. Four testing locations, spread throughout the site, were chosen. At each location, two types of steel profiles (H160 and C100) were embedded into the ground at various depths (1.5 m and 2.0 m). The piles were tested for pullout capacity under natural and inundated soil conditions. CPT tests conducted nearby served as calibration points. The results supported the development of a preliminary equation for estimating pullout capacity. Future Work: The next phase involves validating and refining the proposed equation on additional sites by comparing CPT-based forecasts with in situ pullout tests. This validation will enhance the accuracy and reliability of the method, potentially transforming the foundation design process for PV panels.
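As a rough illustration of how CPT data could feed such an estimate (the authors' preliminary equation is not given in the abstract, so this is a generic shaft-friction integration, not their method; the perimeter and friction values are invented):

```python
def pullout_capacity(fs_profile, perimeter, dz=0.1):
    """Estimate pile pullout capacity (kN) from a CPT sleeve-friction profile.

    fs_profile: sleeve friction readings f_s (kPa), one per depth increment
                dz (m), from the surface down to the pile tip.
    perimeter:  pile cross-section perimeter (m).

    Capacity is taken as the summed shaft friction over the embedded length:
        Q = sum(f_s * perimeter * dz)
    """
    return sum(fs * perimeter * dz for fs in fs_profile)
```

For example, a 2.0 m embedment (20 readings at 0.1 m spacing) with a uniform f_s of 50 kPa and an assumed 0.9 m profile perimeter gives a 90 kN shaft-friction estimate; a real design equation would also calibrate for profile shape, soil type, and inundation, as the study sets out to do.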

Keywords: cone penetration test, foundation optimization, solar power stations, steel pile foundations

Procedia PDF Downloads 54
838 Escaping Domestic Violence in Time of Conflict: The Ways Female Refugees Decide to Flee

Authors: Zofia Wlodarczyk

Abstract:

I study the experiences of domestic violence survivors who flee their countries of origin in times of political conflict using insight and evidence from forty-five biographical interviews with female Chechen refugees and twelve refugee resettlement professionals in Poland. Both refugees and women are often described as having less agency—that is, they lack the power to decide to migrate – refugees less than economic migrants and women less than men. In this paper, I focus on how female refugees who have been victims of domestic violence make decisions about leaving their countries of origin during times of political conflict. I use several existing migration theories to trace how the migration experience of these women is shaped by dynamics at different levels of society: the macro level, the meso level and the micro level. At the macro level of analysis, I find that political conflict can be both a source of and an escape from domestic violence. Ongoing conflict can strengthen the patriarchal cultural norms, increase violence and constrain women’s choices when it comes to marriage. However, political conflict can also destabilize families and make pathways for women to escape. At the meso level I demonstrate that other political migrants and institutions that emerge due to politically triggered migration can guide those fleeing domestic violence. Finally, at the micro level, I show that family dynamics often force domestic abuse survivors to make their decision to escape alone or with the support of only the most trusted female relatives. Taken together, my analyses show that we cannot look solely at one level of society when describing decision-making processes of women fleeing domestic violence. Conflict-related micro, meso and macro forces interact with and influence each other: on the one hand, strengthening an abusive trap, and on the other hand, opening a door to escape. This study builds upon several theoretical and empirical debates. 
First, it expands theories of migration by incorporating both refugee and gender perspectives. Few social scientists have used the migration theory framework to discuss the unique circumstances of refugee flows. Those who have mainly focus on “political” migrants, a designation that frequently fails to account for gender and does not incorporate individuals fleeing gender-based violence, including domestic violence victims. The study also enriches migration scholarship, typically focused on the US and Western European context, with research from Eastern Europe and the Caucasus. Moreover, it contributes to the literature on the changing roles of gender in the context of migration. I argue that understanding how gender roles and hierarchies influence the pre-migration stage of female refugees is crucial, as it may have implications for policy-making efforts in host countries that recognize the asylum claims of those fleeing domestic violence. This study also engages in debates about asylum and refugee law. Domestic violence is normatively and often legally considered an individual-level problem, whereas political persecution is recognized as a structural or societal-level issue. My study challenges these notions by showing how migration triggered by domestic violence is closely intertwined with politically motivated refuge.

Keywords: agency, domestic violence, female refugees, political refuge, social networks

Procedia PDF Downloads 169
837 Owning (up to) the 'Art of the Insane': Re-Claiming Personhood through Copyright Law

Authors: Mathilde Pavis

Abstract:

From Schumann to Van Gogh, Frida Kahlo, and Ray Charles, the stories narrating the careers of artists with physical or mental disabilities are becoming increasingly popular. From the emergence of ‘pathography’ at the end of the 18th century to cinematographic portrayals, the work and lives of differently-abled creative individuals continue to fascinate readers, spectators and researchers. The achievements of those artists form the tip of an iceberg composed of complex politico-cultural movements which continue to advocate for wider recognition of disabled artists’ contribution to western culture. This paper envisages copyright law as a potential tool to such an end. It investigates the array of rights available to artists with intellectual disabilities to assert their position as authors of their artwork in the twenty-first century, looking at international and national copyright laws (UK and US). Put simply, this paper questions whether an artist’s intellectual disability could be a barrier to asserting their intellectual property rights over their creation. From a legal perspective, basic principles of non-discrimination would contradict the representation of artists’ disability as an obstacle to authorship as granted by intellectual property laws. Yet empirical studies reveal that artists with intellectual disabilities are often denied the opportunity to exercise their intellectual property rights or any form of agency over their work. In practice, it appears that, unlike other non-disabled artists, the prospect for differently-abled creators to make use of their rights is contingent on the context in which the creative process takes place. The management of such rights will often rest with the institution, art therapist or mediator involved in the artists’ work, as the latter will have necessitated greater support than their non-disabled peers for a variety of reasons, either medical or practical.
Moreover, the financial setbacks suffered by medical institutions and private therapy practices have renewed administrators’ and physicians’ interest in monetising the artworks produced under their supervision. Adding to those economic incentives, the rise of criminal and civil litigation in psychiatric cases has also encouraged the retention of patients’ work by therapists who feel compelled to keep comprehensive medical records to shield themselves from liability in the event of a lawsuit. Unspoken transactions, contracts, implied agreements and consent forms have thus progressively made their way into the relationship between those artists and their therapists or assistants, disregarding any notions of copyright. The question of artists’ authorship finds itself caught in an unusually multi-faceted web of issues formed by tightening purse strings, ethical concerns and the fear of civil or criminal liability. Whilst those issues are playing out behind closed doors, the popularity of what was once called the ‘Art of the Insane’ continues to grow and open new commercial avenues. This socio-economic context exacerbates the need to devise a legal framework able to help practitioners, artists and their advocates navigate through those issues in such a way that neither this minority nor our cultural heritage suffers from the fragmentation of the legal protection available to them.

Keywords: authorship, copyright law, intellectual disabilities, art therapy and mediation

Procedia PDF Downloads 150
836 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller amount of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other, faster signals that travel "elsewhere" and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers are currently looking at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the Where am I (orientation), Where is It (localization), and What is It (identification) pathways. Now, among others, there is a How am I (animation) and a Who am I (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGC) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities.
These signals arise from the ipRGC cells, which were discovered only 20+ years ago, and current practice does not yet address the campana retinal interneurons, which were discovered only 2 years ago. As eyecare providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 85
835 Enhancing Academic and Social Skills of Elementary School Students with Autism Spectrum Disorder by an Intensive and Comprehensive Teaching Program

Authors: Piyawan Srisuruk, Janya Boonmeeprasert, Romwarin Gamlunglert, Benjamaporn Choikhruea, Ornjira Jaraepram, Jarin Boonsuchat, Sakdadech Singkibud, Kusalaporn Chaiudomsom, Chanatiporn Chonprai, Pornchanaka Tana, Suchat Paholpak

Abstract:

Objective: To develop an intensive and comprehensive program (ICP) for the inclusive class teacher (ICPICT) to teach elementary students (ES) with ASD in order to enhance the students’ academic and social skills (ASS), and to study the effect of the teaching program. Methods: The purposive sample included 15 Khon Kaen inclusive class teachers and their 15 elementary students. All the students were diagnosed by a child and adolescent psychiatrist with DSM-5 level 1 ASD. The study tools included: 1) an ICP to teach teachers about ASD, a teaching method to enhance the academic and social skills of ES with ASD, and an assessment tool to assess the teachers’ knowledge before and after the ICP; 2) an ICPICT to teach ES with ASD in order to enhance their ASS. The program consisted of 10 sessions of 3 hours each and had its own teaching structure. Teaching media included pictures, storytelling, songs, and plays. The authors taught and demonstrated the ICPICT to the participating teachers until they could apply the teaching method correctly; the teachers then taught the ICPICT at school by themselves; 3) an assessment tool to assess the students’ ASS before and after the completion of the study. The ICP for the teachers, the ICPICT, and the relevant assessment tools were developed by the authors and adjusted until three experts in curricula for teaching children with ASD agreed by consensus that they were appropriate for the research. The data were analyzed with descriptive and analytic statistics in SPSS version 26. Results: After the training, the teachers’ mean score of knowledge about ASD and about teaching ASS to ES with ASD increased, though not with statistical significance (p = 0.13).
Teaching ES with ASD with the ICPICT increased the mean scores of the students’ skills in learning and expressing social emotions, relationships with friends, transitioning, and academic function by 3.33, 2.27, 2.94, and 3.00 points, respectively (out of full scores of 18, 12, 15, and 12; paired t-test p = 0.007, 0.013, 0.028, and 0.003, respectively). Conclusion: A program that teaches academic and social skills simultaneously in an intensive and comprehensive structure can enhance both the academic and social skills of elementary students with ASD.

Keywords: academic and social skills, students with autism, intensive and comprehensive, teaching program

Procedia PDF Downloads 64
834 Rare DCDC2 Mutation Causing Renal-Hepatic Ciliopathy

Authors: Atitallah Sofien, Bouyahia Olfa, Attar Souleima, Missaoui Nada, Ben Rabeh Rania, Yahyaoui Salem, Mazigh Sonia, Boukthir Samir

Abstract:

Introduction: Ciliopathies are a spectrum of diseases that have in common a defect in the synthesis of ciliary proteins. They are a rare cause of neonatal cholestasis. Clinical presentation varies extremely, and the main affected organs are the kidneys, liver, and pancreas. Methodology: This is a descriptive case report of a newborn who was admitted for exploration of neonatal cholestasis in the Paediatric Department C at the Children’s Hospital of Tunis, where the investigations concluded with a rare genetic mutation. Results: This is the case of a newborn male with no family history of hepatic and renal diseases, born to consanguineous parents and from a well-monitored, uneventful pregnancy. He developed jaundice on the second day of life, for which he received conventional phototherapy in the neonatal intensive care unit. He was admitted at 15 days of age for mild bronchiolitis. On clinical examination, intense jaundice was noted with normal stool and urine colour. Initial blood work showed an elevation in conjugated bilirubin and a high gamma-glutamyl transferase level. Transaminases and prothrombin time were normal. Abdominal sonography revealed hepatomegaly, splenomegaly, and an undifferentiated renal cortex with bilateral medullary micro-cysts. Kidney function tests were normal. The infant received ursodeoxycholic acid and vitamin therapy. Genetic testing showed a homozygous mutation in the DCDC2 gene that had not previously been documented, confirming the diagnosis of renal-hepatic ciliopathy. The patient has regular follow-ups, and his conjugated bilirubin and gamma-glutamyl transferase levels have been decreasing. Conclusion: Genetic testing has revolutionized the approach to etiological diagnosis in pediatric cholestasis. It enables personalised treatment strategies to better enhance the quality of life of patients and prevent potential complications through adequate long-term monitoring.

Keywords: cholestasis, newborn, ciliopathy, DCDC2, genetic

Procedia PDF Downloads 63
833 Application Reliability Method for the Analysis of the Stability Limit States of Large Concrete Dams

Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche

Abstract:

Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; since the transition from the stable state to the failed state is gradual rather than clear-cut, the stability failure process is treated as a probabilistic event. Controlling the risk of failure is of capital importance: it rests on a cross-analysis of the severity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models provide a better understanding of the reliability and structural failure of such works, in particular when assessing the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through reliability analysis methods used in engineering, in our case level II methods applied to the limit states. Hence, the probability of failure is estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) type. By way of comparison, a level III method was also used, which performs a full analysis of the problem by integrating the joint probability density function of the random variables over the failure domain using Monte Carlo simulation.
Taking into account the stress changes under the normal, exceptional, and extreme load combinations acting on the dam, the calculations yielded acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially for combinations involving extreme loads. Shear forces then induce sliding that threatens the reliability of the structure with intolerable failure probabilities, especially when uplift increases under a hypothetical failure of the drainage system.
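A minimal sketch of the level III (Monte Carlo) approach, assuming a simple sliding limit state and Gaussian random variables; the distributions, loads, and geometry below are illustrative placeholders, not the studied dam's data:

```python
import math
import random

def sliding_safety_margin(friction_angle_deg, cohesion_kpa, weight, uplift,
                          horizontal_load, contact_area):
    """Limit-state function g for sliding: g > 0 means the dam is stable.

    Resistance = c*A + (W - U)*tan(phi); demand = horizontal thrust H.
    Forces in kN, cohesion in kPa, contact area in m^2.
    """
    resistance = (cohesion_kpa * contact_area
                  + (weight - uplift) * math.tan(math.radians(friction_angle_deg)))
    return resistance - horizontal_load

def monte_carlo_pf(n=20_000, seed=42):
    """Estimate the failure probability P(g < 0) by direct sampling."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        phi = rng.gauss(35.0, 3.0)        # friction angle (deg), assumed
        c = rng.gauss(200.0, 40.0)        # cohesion (kPa), assumed
        u = rng.gauss(30_000.0, 6_000.0)  # uplift (kN); grows if drains fail
        g = sliding_safety_margin(phi, c, weight=120_000.0, uplift=u,
                                  horizontal_load=60_000.0, contact_area=50.0)
        failures += g < 0
    return failures / n
```

Raising the mean uplift (the hypothetical drainage failure above) visibly increases the estimated failure probability, which is the trend the study reports; FORM/SORM would instead approximate the same integral analytically around the design point.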

Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor

Procedia PDF Downloads 318
832 Braille Code Matrix

Authors: Mohammed E. A. Brixi Nigassa, Nassima Labdelli, Ahmed Slami, Arnaud Pothier, Sofiane Soulimane

Abstract:

According to the World Health Organization (WHO), there are almost 285 million people with visual disability, 39 million of whom are blind. Nevertheless, there is a code for these people that makes their life easier and allows them to access information: the Braille code. There are several commercial devices allowing Braille reading; unfortunately, most of these devices are not ergonomic and are too expensive. Moreover, we know that 90% of blind people in the world live in low-income countries. Our aim is to design an original microactuator for Braille reading that is ergonomic, inexpensive, and has the lowest possible energy consumption. Nowadays, piezoelectric devices provide the best actuation at low actuation voltages. In this study, we focus on piezoelectric (PZT) material, which can bring together all these conditions. Here, we propose to use one matrix composed of six actuators to form the 63 basic combinations of the Braille code that contain letters, numbers, and special characters in compliance with the standards of the Braille code. In this work, we use a finite element model in the COMSOL Multiphysics software for designing and modeling this type of miniature actuator in order to integrate it into a test device. To define the geometry and the design of our actuator, we used the physiological limits of human tactile perception. Our results demonstrate that the piezoelectric actuator can produce a large out-of-plane deflection. We also show that the microactuators can exhibit non-uniform compression; this deformation depends on the thin-film thickness and the design of the membrane arms. The actuator composed of four arms gives the highest deflection and always produces a domed deformation at the center of the device, as required for the Braille system. The maximal deflection can be estimated at around ten microns per volt (~10 µm/V).
We observed that deflection is a linear function of voltage and depends not only on the voltage but also on the thickness of the film and the design of the anchoring arms. We then simulated the behavior of the entire matrix, displaying different characters in Braille code. These simulation results guided the construction of our demonstrator, which is composed of a layer of PDMS on which the piezoelectric material is deposited, covered by another PDMS layer that isolates the actuator. In this contribution, we compare our results to optimize the final demonstrator.
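Assuming the reported linear ~10 µm/V response, the drive logic for one six-dot cell can be sketched as follows; the character-to-dot mapping shown is only a tiny illustrative subset of the 63 combinations, and the 500 µm target dot height is an assumed value, not a figure from the study.

```python
# Illustrative subset of 6-dot patterns (standard Braille dot numbering 1-6):
BRAILLE = {"a": {1}, "b": {1, 2}, "c": {1, 4}}

DEFLECTION_PER_VOLT_UM = 10.0  # ~10 um/V linear regime reported above

def drive_voltages(char, dot_height_um=500.0):
    """Per-actuator voltages (V) to raise the dots of one Braille cell.

    Assumes the linear deflection/voltage relation; actuators for
    inactive dots stay at 0 V. dot_height_um is an assumed target.
    """
    raised = BRAILLE[char]
    return [dot_height_um / DEFLECTION_PER_VOLT_UM if dot in raised else 0.0
            for dot in range(1, 7)]
```

Under these assumptions, raising a dot to 500 µm would require 50 V per active actuator, which is why the film thickness and arm design (which set the µm/V slope) matter for keeping drive voltages practical.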

Keywords: Braille code, COMSOL software, microactuators, piezoelectric

Procedia PDF Downloads 355
831 Urban Logistics Dynamics: A User-Centric Approach to Traffic Modelling and Kinetic Parameter Analysis

Authors: Emilienne Lardy, Eric Ballot, Mariam Lafkihi

Abstract:

Efficient urban logistics requires a comprehensive understanding of traffic dynamics, particularly as it pertains to the kinetic parameters influencing energy consumption and trip duration estimations. While real-time traffic information is increasingly accessible, current high-precision forecasting services embedded in route planning often function as opaque 'black boxes' for users. These services, typically relying on AI-processed counting data, fall short in accommodating open design parameters essential for management studies, notably within Supply Chain Management. This work revisits the modelling of traffic conditions in the context of city logistics, emphasizing its significance from the user’s point of view, with two focuses. Firstly, the focus is not on the vehicle flow but on the vehicles themselves and the impact of the traffic conditions on their driving behaviour. This means opening the range of studied indicators beyond vehicle speed, to describe extensively the kinetic and dynamic aspects of the driving behaviour. To achieve this, we leverage the Art.Kinema parameters, which are designed to characterize driving cycles. Secondly, this study examines how the driving context (i.e., factors exogenous to the traffic flow) determines the mentioned driving behaviour. Specifically, we explore how accurately the kinetic behaviour of a vehicle can be predicted based on a limited set of exogenous factors, such as time, day, road type, orientation, slope, and weather conditions. To answer this question, statistical analysis was conducted on real-world driving data, which includes high-frequency measurements of vehicle speed. A Factor Analysis and a Generalized Linear Model were established to link the kinetic parameters with independent categorical contextual variables. The results include an assessment of the adjustment quality and the robustness of the models, as well as an overview of the model’s outputs.
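A drastically simplified, dependency-free stand-in for the model described above: one-hot encoding of categorical context factors and an ordinary least-squares fit, i.e. the Gaussian-family special case of a GLM. The variable names and data are invented for illustration; the study's actual model, factors, and kinetic parameters are richer.

```python
def one_hot(row, levels_by_factor):
    """Design vector: intercept + one-hot dummies (first level dropped)."""
    x = [1.0]
    for factor, levels in levels_by_factor.items():
        x.extend(1.0 if row[factor] == lvl else 0.0 for lvl in levels[1:])
    return x

def fit_ols(rows, y_key, levels_by_factor):
    """Least squares via the normal equations (X'X b = X'y), solved by
    Gaussian elimination -- a minimal stand-in for a Gaussian-family GLM
    linking a kinetic parameter to categorical context variables."""
    X = [one_hot(r, levels_by_factor) for r in rows]
    y = [r[y_key] for r in rows]
    p = len(X[0])
    A = [[sum(xi[j] * xi[k] for xi in X) for k in range(p)] for j in range(p)]
    b = [sum(xi[j] * yi for xi, yi in zip(X, y)) for j in range(p)]
    # forward elimination
    for j in range(p):
        piv = A[j][j]
        for k in range(j + 1, p):
            f = A[k][j] / piv
            A[k] = [a - f * c for a, c in zip(A[k], A[j])]
            b[k] -= f * b[j]
    # back substitution
    beta = [0.0] * p
    for j in reversed(range(p)):
        beta[j] = (b[j] - sum(A[j][k] * beta[k] for k in range(j + 1, p))) / A[j][j]
    return beta
```

With a toy dataset where mean speed is 30 on urban roads and 90 on highways, the fit recovers an intercept of 30 (urban baseline) and a +60 effect for the "highway" dummy; a real analysis would add the other factors (time, weather, slope, ...) the same way.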

Keywords: factor analysis, generalised linear model, real world driving data, traffic congestion, urban logistics, vehicle kinematics

Procedia PDF Downloads 67
830 Selenuranes as Cysteine Protease Inhibitors: Theorical Investigation on Model Systems

Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto

Abstract:

In the last four decades, the biological activities of selenium compounds have received great attention, particularly hypervalent selenium(IV) derivatives used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer and parasitic infections. These enzymes are therefore a valuable target for designing new small-molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is a given that inhibition occurs through the reaction between the thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work, we performed a theoretical investigation on model systems to study the possible routes of substitution reactions. Among the nucleophiles present in biological systems, our interest centers on the thiol groups of the cysteine proteases and the hydroxyls of the aqueous environment. We therefore expect this study to clarify the possibility of a two-stage reaction route, first substituting the chlorine atoms with hydroxyl groups and then replacing these hydroxyl groups with thiol groups in the selenuranes. The structures of the selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and the 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that hydroxyl groups from water react preferentially with the selenuranes and are subsequently replaced by thiol groups.
The computed energies were -106.07 kcal/mol for double substitution by hydroxyl groups and 96.63 kcal/mol for thiol groups. Solvation and pH reduction promote this route, increasing the energy of the reaction with hydroxyl groups to -50.76 kcal/mol and decreasing that with thiol groups to 7.92 kcal/mol. Alternative pathways were analyzed for monosubstitution (considering the competition between Cl, OH, and SH groups), and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied.
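For context, reaction energies like those quoted are obtained by differencing DFT total electronic energies (reported in hartree) and converting to kcal/mol. The helper below shows that bookkeeping step with placeholder energies, not the study's actual B3LYP/6-311+G(d) outputs:

```python
HARTREE_TO_KCAL_MOL = 627.509  # standard conversion factor

def reaction_energy_kcal(reactant_hartree, product_hartree):
    """Delta E = sum(E_products) - sum(E_reactants), in kcal/mol.

    Inputs are lists of total electronic energies in hartree, as printed
    by quantum chemistry codes such as Gaussian 09; a negative result
    means the products are lower in energy (favorable substitution).
    """
    return (sum(product_hartree) - sum(reactant_hartree)) * HARTREE_TO_KCAL_MOL
```

For instance, a product lying 0.1 hartree below the reactants corresponds to roughly -62.75 kcal/mol, the same order of magnitude as the double-substitution energies reported above.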

Keywords: chalcogenes, computational study, cysteine proteases, enzyme inhibitors

Procedia PDF Downloads 304
829 Inappropriate Prescribing Defined by START and STOPP Criteria and Its Association with Adverse Drug Events among Older Hospitalized Patients

Authors: Mohd Taufiq bin Azmy, Yahaya Hassan, Shubashini Gnanasan, Loganathan Fahrni

Abstract:

Inappropriate prescribing in older patients has been associated with resource utilization and adverse drug events (ADE) such as hospitalization, morbidity and mortality. Globally, there is a lack of published data on ADE induced by inappropriate prescribing. Our study is specific to an older population and is aimed at identifying risk factors for ADE and developing a model linking ADE to inappropriate prescribing. The study design was prospective: computerized medical records of 302 hospitalized elderly patients aged 65 years and above in 3 public hospitals in Malaysia (Hospital Serdang, Hospital Selayang and Hospital Sungai Buloh) were studied over a 7-month period from September 2013 until March 2014. Potentially inappropriate medications and potential prescribing omissions were determined using the published and validated START-STOPP criteria. Patients who had at least one inappropriate medication were included in Phase II of the study, where ADE were identified by a local expert consensus panel based on the published and validated Naranjo ADR probability scale. The panel also assessed whether ADE were causal or contributory to the current hospitalization. The association between inappropriate prescribing and ADE (hospitalization, mortality and adverse drug reactions) was determined by identifying whether or not the former was causal or contributory to the latter. The rate of ADE avoidability was also determined. Our findings revealed that the prevalence of potentially inappropriate prescribing was 58.6%. ADEs were detected in 31 of 105 patients (29.5%) when STOPP criteria were used to identify potentially inappropriate medications; all 31 ADEs (100%) were considered causal or contributory to admission. Of the 31 ADEs, 28 (90.3%) were considered avoidable or potentially avoidable.
After adjusting for age, sex, comorbidity, dementia, baseline activities of daily living function, and number of medications, the likelihood of a serious avoidable ADE increased significantly when a potentially inappropriate medication was prescribed (odds ratio, 11.18; 95% confidence interval [CI], 5.014-24.93; p < .001). The medications identified by STOPP criteria are significantly associated with avoidable ADEs in older people that cause or contribute to urgent hospitalization, but contributed less towards morbidity and mortality. The findings of the study underscore the importance of preventing inappropriate prescribing.

Keywords: adverse drug events, appropriate prescribing, health services research

Procedia PDF Downloads 399
828 Maintenance of Non-Crop Plants Reduces Insect Pest Population in Tropical Chili Pepper Agroecosystems

Authors: Madelaine Venzon, Dany S. S. L. Amaral, André L. Perez, Natália S. Diaz, Juliana A. Martinez Chiguachi, Maira C. M. Fonseca, James D. Harwood, Angelo Pallini

Abstract:

Integrating strategies of sustainable crop production and promoting the provisioning of ecological services on farms and within rural landscapes is a challenge for today’s agriculture. Habitat management, through increasing vegetational diversity, enhances heterogeneity in agroecosystems and has the potential to improve the recruitment of natural enemies of pests, which promotes biological control services. In tropical agroecosystems, however, there is a paucity of information pertaining to the resources provided by associated plants and their interactions with natural enemies. The maintenance of non-crop plants integrated into and/or surrounding crop fields provides the farmer with a low-investment option to enhance biological control. We carried out field experiments in chili pepper agroecosystems with smallholder farmers located in the Zona da Mata, State of Minas Gerais, Brazil, from 2011 to 2015, where we assessed: (a) whether non-crop plants within and around chili pepper fields affect the diversity and abundance of aphidophagous species; (b) whether there are direct interactions between non-crop plants and aphidophagous arthropods; and (c) the importance of non-crop plant resources for the survival of Coccinellidae and Chrysopidae species. Aphidophagous arthropods were dominated by Coccinellidae, Neuroptera, Syrphidae, Anthocoridae and Araneae. These natural enemies were readily observed preying on aphids, feeding on flowers or extrafloral nectaries, and using plant structures for oviposition and/or protection. Aphid populations were lower on chili pepper fields associated with non-crop plants than on chili pepper monocultures. Survival of larvae and adults of different species of Coccinellidae and Chrysopidae on non-crop resources varied according to the plant species. This research provides evidence that non-crop plants in chili pepper agroecosystems can affect aphid abundance as well as natural enemy abundance and survival. 
It also highlights the need for further research to fully characterize the structure and function of plant resources in these and other tropical agroecosystems. Financial support: CNPq, FAPEMIG and CAPES (Brazil).

Keywords: conservation biological control, Aphididae, Coccinellidae, Chrysopidae, plant diversification

Procedia PDF Downloads 289
827 Influence of Natural Rubber on the Frictional and Mechanical Behavior of the Composite Brake Pad Materials

Authors: H. Yanar, G. Purcek, H. H. Ayar

Abstract:

The ingredients of the composite materials used for the production of composite brake pads play an important role in the braking safety performance of automobiles and trains. Therefore, the ingredients must be selected carefully and used in appropriate ratios in the matrix structure of the brake pad material. In the present study, a non-asbestos organic composite brake pad material containing binder resin, space fillers, solid lubricants, and a friction modifier was developed, and its filler content was optimized by adding natural rubber at different rates into the specified matrix structure in order to achieve the best combination of tribo-performance and mechanical properties. For this purpose, four compositions with different rubber contents (2.5 wt.%, 5.0 wt.%, 7.5 wt.% and 10 wt.%) were prepared, and test samples with a diameter of 20 mm and a length of 15 mm were then produced to evaluate the friction and mechanical behavior of each mixture. The friction and wear tests were performed using a pin-on-disc type test rig designed according to the French standard NF-F-11-292. All test samples were subjected to two different types of friction tests, defined as periodic braking and continuous braking (also known as the fade test). In this way, the coefficient of friction (CoF) of the composite samples with different rubber contents was determined as a function of the number of braking cycles and the disc surface temperature. The results demonstrated that the addition of rubber into the matrix structure of the composite caused a significant change in the CoF. The average CoF of the composite samples increased linearly with increasing rubber content in the matrix. While the average CoF was 0.19 for the rubber-free composite, the composite sample containing 10 wt.% rubber had the maximum CoF of about 0.24. Although the CoF of the composite samples increased, the specific wear rate decreased with increasing rubber content in the matrix. 
On the other hand, it was observed that the CoF decreased with increasing temperature generated between the sample and the disc, and this decrease depended on the rubber content. While the CoF decreased to a minimum value of 0.15 at 400 °C for the rubber-free composite sample, the sample with the maximum rubber content of 10 wt.% exhibited the lowest value of 0.09 at the same temperature. The addition of rubber into the matrix structure decreased the hardness and strength of the samples. It was concluded from the results that the composite with 5 wt.% rubber was the best composition regarding performance parameters such as the required frictional and mechanical behavior. This composition has an average CoF of 0.21, a specific wear rate of 0.024 cm³/MJ and a hardness value of 63 HRX.
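The specific wear rate quoted above expresses volume loss per unit of dissipated friction energy (cm³/MJ). A minimal sketch of that calculation follows; all test values (mass loss, density, load, sliding distance) are invented for illustration and are not from the paper:

```python
def specific_wear_rate(mass_loss_g, density_g_cm3, mu, normal_force_n, sliding_dist_m):
    """Specific wear rate in cm^3/MJ: volume loss per unit of friction energy.

    Friction energy E = mu * F_N * s (in joules), converted to MJ.
    """
    volume_cm3 = mass_loss_g / density_g_cm3
    energy_mj = mu * normal_force_n * sliding_dist_m / 1e6  # J -> MJ
    return volume_cm3 / energy_mj

# Hypothetical pin-on-disc test values, chosen only to illustrate the formula:
w = specific_wear_rate(mass_loss_g=0.12, density_g_cm3=2.5,
                       mu=0.21, normal_force_n=100, sliding_dist_m=95_000)
print(f"specific wear rate = {w:.3f} cm^3/MJ")
```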

Keywords: brake pad composite, friction and wear, rubber, friction materials

Procedia PDF Downloads 137
826 Targeting APP IRE mRNA to Combat Amyloid -β Protein Expression in Alzheimer’s Disease

Authors: Mateen A Khan, Taj Mohammad, Md. Imtaiyaz Hassan

Abstract:

Alzheimer’s disease is characterized by the accumulation of amyloid beta peptides cleaved from the amyloid precursor protein (APP). Iron increases the synthesis of amyloid beta peptides, which is why iron is present in the amyloid plaques of Alzheimer's disease patients. Iron misregulation in the brain is linked to the overexpression of the APP protein, which is directly related to amyloid-β aggregation in Alzheimer’s disease. The APP 5'-UTR region encodes a functional iron-responsive element (IRE) stem-loop that represents a potential target for modulating amyloid production. Targeted regulation of APP gene expression through the modulation of 5’-UTR sequence function represents a novel approach to the potential treatment of AD, because altering APP translation can both improve the protective brain iron balance and provide anti-amyloid efficacy. The molecular docking analysis of APP IRE RNA with eukaryotic translation initiation factors yielded several models exhibiting substantial binding affinity. The findings revealed that the interaction involved a set of functionally active residues within the binding sites of eIF4F. Notably, the APP IRE RNA-eIF4F interaction was stabilized by multiple hydrogen bonds. It was evident that the APP IRE RNA exhibited a structural complementarity that fit tightly within the binding pockets of eIF4F. The simulation studies further revealed the stability of the complexes formed between the RNA and eIF4F, which is crucial for assessing the strength of these interactions and their subsequent roles in the pathophysiology of Alzheimer’s disease. In addition, MD simulations capture conformational changes in the IRE RNA and protein molecules during their interactions, illustrating the mechanism of interaction, conformational changes, and unbinding events, and how these may affect aggregation propensity and subsequent therapeutic implications. 
Our binding studies correlated well with the translation efficiency of APP mRNA. Overall, the outcome of this study suggests that genomic modification and/or inhibition of amyloid protein expression by targeting APP IRE RNA is a viable strategy for identifying potential therapeutic targets for AD, which can subsequently be exploited to develop novel therapeutic approaches.

Keywords: Alzheimer's disease, protein-RNA interaction analysis, molecular docking simulations, conformational dynamics, binding stability, binding kinetics, protein synthesis

Procedia PDF Downloads 64
825 Beyond the “Breakdown” of Karman Vortex Street

Authors: Ajith Kumar S., Sankaran Namboothiri, Sankrish J., SarathKumar S., S. Anil Lal

Abstract:

A numerical analysis of flow over a heated circular cylinder is presented in this paper. The governing equations, namely the Navier-Stokes and energy equations within the Boussinesq approximation along with the continuity equation, are solved using a hybrid FEM-FVM technique. The density gradient created by heating the cylinder induces a buoyancy force opposite to the direction of the acceleration due to gravity, g. In the present work, the flow direction and the direction of the buoyancy force are the same (vertical flow configuration), so that the buoyancy force accelerates the mean flow past the cylinder. The relative dominance of the buoyancy force over the inertia force is characterized by the Richardson number (Ri), which is one of the parameters governing the flow dynamics and heat transfer in this analysis. It is well known that above a certain value of the Reynolds number, Re (the ratio of inertia forces to viscous forces), unsteady von Kármán vortices can be seen shedding behind the cylinder. The shedding wake patterns can be significantly altered by heating or cooling the cylinder. The non-dimensional shedding frequency, called the Strouhal number, is found to increase as Ri increases. The aerodynamic force coefficients CL and CD are also observed to change. In the present vertical flow configuration, as Ri increases, the shedding frequency increases and then suddenly drops to zero at a critical value of the Richardson number. Beyond this critical Richardson number, the unsteady vortices turn into steady standing recirculation bubbles behind the cylinder. This phenomenon is well known in the literature as the "breakdown of the Karman vortex street". It is interesting to see the flow structures on further increase of the Richardson number. On further heating of the cylinder surface, the size of the recirculation bubble decreases without losing its symmetry about the horizontal axis passing through the center of the cylinder. 
The separation angle is found to decrease with Ri. Finally, we observed a second critical Richardson number, after which the flow attaches to the cylinder surface without any wake behind it. The flow structures are then symmetrical not only about the horizontal axis but also about the vertical axis passing through the center of the cylinder. At this stage, a "single plume" emanates from the rear stagnation point of the cylinder. We also observed that the transition of the plume is a strong function of the Richardson number.
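The two non-dimensional groups that organize these results can be computed directly. A minimal sketch follows, using the common definition Ri = Gr/Re²; the air-flow values are assumptions for illustration, not parameters from the paper:

```python
def richardson(g, beta, dT, D, U, nu):
    """Ri = Gr / Re^2: ratio of buoyancy to inertia forces.

    g: gravity, beta: thermal expansion coeff., dT: cylinder-ambient
    temperature difference, D: cylinder diameter, U: free-stream
    velocity, nu: kinematic viscosity.
    """
    gr = g * beta * dT * D**3 / nu**2   # Grashof number
    re = U * D / nu                     # Reynolds number
    return gr / re**2

def strouhal(f_shed, D, U):
    """Non-dimensional shedding frequency St = f D / U."""
    return f_shed * D / U

# Illustrative values for air flow past a small heated cylinder (assumed):
ri = richardson(g=9.81, beta=3.4e-3, dT=30.0, D=0.01, U=0.1, nu=1.5e-5)
st = strouhal(f_shed=2.0, D=0.01, U=0.1)
print(f"Ri = {ri:.2f}, St = {st:.2f}")
```

With these assumed values the cylinder sits in the shedding regime (Re ≈ 67) with buoyancy and inertia of comparable magnitude (Ri ≈ 1).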

Keywords: drag reduction, flow over circular cylinder, flow control, mixed convection flow, vortex shedding, vortex breakdown

Procedia PDF Downloads 404
824 Potential Serological Biomarker for Early Detection of Pregnancy in Cows

Authors: Shveta Bathla, Preeti Rawat, Sudarshan Kumar, Rubina Baithalu, Jogender Singh Rana, Tushar Kumar Mohanty, Ashok Kumar Mohanty

Abstract:

Pregnancy is a complex process which includes a series of events such as fertilization, formation of the blastocyst, implantation of the embryo, placental formation and development of the fetus. The success of these events depends on various interactions that are synchronized by endocrine signalling between a receptive dam and a competent embryo. These interactions lead to changes in the expression of hormones and proteins. To date, however, no protein biomarker is available to detect the successful completion of these events. We employed a quantitative proteomics approach to develop a putative serological biomarker with diagnostic applicability for early detection of pregnancy in cows. For this study, sera were collected from control (non-pregnant, n=6) and pregnant animals on successive days of pregnancy (days 7, 19 and 45; n=6). The sera were depleted of albumin using a Norgen depletion kit. The tryptic peptides were labeled with iTRAQ, then pooled and fractionated using bRPLC over an 80 min gradient. Twelve fractions were then injected into the nLC for identification and quantitation in DDA mode using ESI. A Mascot search identified 2056 proteins, of which 352 were differentially expressed. Twenty proteins were upregulated and twelve down-regulated, with fold changes > 1.5 and < 0.6, respectively (p < 0.05). Gene ontology analysis of the DEPs using the Panther software revealed that the majority of the proteins are actively involved in catalytic, binding and enzyme regulatory activities. DEPs such as NF2, MAPK, GRIP1, UGT1A1, PARP and CD68 were further subjected to pathway analysis using KEGG and the Cytoscape plugin ClueGO, which showed the involvement of these proteins in successful implantation, maintenance of pluripotency, regulation of luteal function, differentiation of endometrial macrophages, protection from oxidative stress, and developmental pathways such as Hippo. 
Further work is ongoing on targeted proteomics and western blotting to validate the potential biomarkers and to develop a diagnostic kit for early pregnancy diagnosis in cows.
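The fold-change thresholds above (> 1.5 for upregulation, < 0.6 for down-regulation) amount to a simple filter over the iTRAQ ratios. A minimal sketch follows; the protein names come from the abstract's DEP list, but the ratio values are invented for illustration:

```python
def classify(ratios, up=1.5, down=0.6):
    """Split pregnant/non-pregnant expression ratios into up- and
    down-regulated sets using the abstract's fold-change thresholds."""
    up_reg = {p: r for p, r in ratios.items() if r > up}
    down_reg = {p: r for p, r in ratios.items() if r < down}
    return up_reg, down_reg

# Invented demo ratios (NOT measured values from the study):
demo = {"NF2": 2.1, "MAPK": 1.8, "GRIP1": 0.45, "UGT1A1": 1.2}
up_reg, down_reg = classify(demo)
print(sorted(up_reg), sorted(down_reg))  # → ['MAPK', 'NF2'] ['GRIP1']
```

Proteins falling between the two thresholds (like UGT1A1 here) are treated as unchanged.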

Keywords: bRPLC, ClueGO, ESI, iTRAQ, KEGG, Panther

Procedia PDF Downloads 461
823 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase-Photovoltaic Power System Controlling and Modeling

Authors: Syed Masood Hussain

Abstract:

An effective control method, including system-level control and pulse width modulation, for a quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system is proposed. The system-level control achieves grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. Photovoltaic power generation has recently seen an upsurge in research interest, since PV systems convert solar radiation directly into electric power without harming the environment. However, the stochastic fluctuation of solar power is inconsistent with the stable power desired for grid injection, owing to variations in solar irradiation and temperature. To fully exploit the solar energy, extracting the PV panels’ maximum power and feeding it into the grid at unity power factor become most important. Important contributions have been made by the cascade multilevel inverter (CMI). Nevertheless, the H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating has to be doubled for a PV voltage range of 1:2, and the different PV panel output voltages result in imbalanced dc-link voltages. When each HBI module is instead built as a two-stage inverter, the many extra dc–dc converters not only increase the complexity of the power circuit, the control, and the system cost, but also decrease the efficiency. Recently, Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI) based PV systems were proposed. They possess the advantages of both traditional CMI and Z-source topologies. To properly operate the ZS/qZS-CMI, power injection, independent control of the dc-link voltages, and pulse width modulation (PWM) are necessary. 
The main contributions of this paper include: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, implemented without additional resources; 2) a grid-connected control for the qZS-CMI based PV system, in which all the PV panel voltage references from their independent MPPTs are used to control the grid-tie current; and 3) a dual-loop dc-link peak voltage control.
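A defining property of the cascade multilevel inverter discussed above is that n series-connected H-bridge modules synthesize 2n+1 distinct output voltage levels. A minimal sketch of that level count follows; it assumes equal, normalized dc-link voltages across modules, which is what the paper's dc-link balancing control works to enforce:

```python
def chb_levels(n_modules, vdc=1.0):
    """Distinct output voltage levels of an n-module cascaded H-bridge.

    Each H-bridge contributes -Vdc, 0, or +Vdc; with equal dc links the
    sum over modules yields 2n+1 distinct levels.
    """
    return sorted({vdc * k for k in range(-n_modules, n_modules + 1)})

print(chb_levels(3))  # 7 levels for a 3-module cascade
```

The finer voltage staircase from more levels is what lets the CMI approximate a sine wave with lower harmonic distortion at a given switching frequency.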

Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter

Procedia PDF Downloads 547
822 Achieving Flow at Work: An Experience Sampling Study to Comprehend How Cognitive Task Characteristics and Work Environments Predict Flow Experiences

Authors: Jonas De Kerf, Rein De Cooman, Sara De Gieter

Abstract:

For many decades, scholars have aimed to understand how work can become more meaningful by maximizing workers’ potential and enhancing their feelings of satisfaction. One of the largest contributions to such positive psychology was the introduction of the concept of ‘flow,’ which refers to a condition in which people feel intense engagement and effortless action. Since then, valuable research on work-related flow has indicated that this state of mind is related to positive outcomes for both organizations (e.g., social, supportive climates) and workers (e.g., job satisfaction). Yet, scholars still do not fully comprehend how such deep involvement at work is attained, given that flow is considered a short-term, complex, and dynamic experience. Most research neglects the notion that people who experience flow ought to be optimally challenged so that intense concentration is required. Because attention is at the core of this enjoyable state of mind, this study aims to comprehend how elements that affect workers’ cognitive functioning impact flow at work. Research on cognitive performance suggests that working on mentally demanding tasks (e.g., information-processing tasks) requires workers to concentrate deeply, thereby leading to flow experiences. Based on social facilitation theory, working on such tasks in an isolated environment eases concentration. Prior research has indicated that working at home (instead of at the office) or in a closed office (rather than in an open-plan office) affects employees’ overall functioning in terms of concentration and productivity. Consequently, we advance such knowledge and propose an interaction by combining cognitive task characteristics and work environments among part-time teleworkers. 
Hence, we not only aim to shed light on the relation between cognitive tasks and flow but also provide empirical evidence that workers performing such tasks achieve the highest states of flow while working either at home or in closed offices. In July 2022, an experience-sampling study will be conducted that uses a semi-random signal schedule to understand how task and environment predictors together impact part-time teleworkers’ flow. More precisely, about 150 knowledge workers will fill in multiple surveys a day for two consecutive workweeks to report their flow experiences, cognitive tasks, and work environments. Preliminary results from a pilot study indicate that, at the between-person level, tasks high in information processing go along with high self-reported fluent productivity (i.e., making progress). As expected, evidence was found for higher fluency in productivity among workers performing information-processing tasks at home or in a closed office, compared to those performing the same tasks at the office or in open-plan offices. This study expands the current knowledge on work-related flow by examining task and environmental predictors that enable workers to attain such a peak state. In doing so, our findings suggest that practitioners should strive for an ideal alignment between tasks and work locations, allowing employees to work with both deep involvement and gratification.
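A semi-random signal schedule of the kind described above is commonly built by splitting the workday into equal blocks and drawing one random prompt time inside each block, so signals are unpredictable yet evenly spread. A minimal sketch follows; the 9-17 workday and four daily signals are assumptions for illustration, not details from the abstract:

```python
import random

def semi_random_schedule(start_h=9, end_h=17, n_signals=4, seed=None):
    """Return n_signals prompt times (decimal hours): the workday is cut
    into equal blocks and one uniform random time is drawn per block."""
    rng = random.Random(seed)
    block = (end_h - start_h) / n_signals
    return [round(start_h + i * block + rng.random() * block, 2)
            for i in range(n_signals)]

print(semi_random_schedule(seed=42))
```

Because each draw stays inside its own block, the schedule is always in chronological order while the exact prompt times remain unpredictable to participants.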

Keywords: cognitive work, office layout, work location, work-related flow

Procedia PDF Downloads 101