Search results for: wide slot
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3138

558 Sweet to Bitter Perception Parageusia: Case of Posterior Inferior Cerebellar Artery Territory Diaschisis

Authors: I. S. Gandhi, D. N. Patel, M. Johnson, A. R. Hirsch

Abstract:

Although distortion of taste perception following a cerebrovascular event may seem to be a frivolous consequence of a classic stroke presentation, altered taste perception places patients at an increased risk for malnutrition, weight loss, and depression, all of which negatively impact quality of life. Impaired taste perception can result from a wide variety of cerebrovascular lesions in various locations, including the pons, the insular cortices, and the ventral posteromedial nucleus of the thalamus. Wallenberg syndrome, also known as lateral medullary syndrome, has been described to impact taste; however, specific sweet-to-bitter dysgeusia from a posterior inferior cerebellar artery territory infarction is an infrequent event; as such, a case is presented. One year prior to presentation, this 64-year-old right-handed woman suffered a right posterior inferior cerebellar artery aneurysm rupture with resultant infarction, culminating in ventriculoperitoneal shunt placement. One and a half months after this event, she noticed the gradual onset of an inability to taste sweet, eventually progressing to all sweet food tasting bitter. Since the onset of her chemosensory problems, the patient has lost 60 pounds. Upon gustatory testing, the patient's taste thresholds showed ageusia to sucrose and hydrochloric acid, with normogeusia to sodium chloride, urea, and phenylthiocarbamide. The gustatory cortex is formed in part by the right insular cortex as well as the right anterior operculum, which are primarily involved in the sensory taste modalities. In this model, sweet is localized in the posterior-most and rostral aspect of the right insular cortex, notably adjacent to the region responsible for bitter taste. The sweet-to-bitter dysgeusia in our patient suggests the presence of a lesion in this localization. Although the primary lesion in this patient was located in the right medulla of the brainstem, neurodegeneration in the rostral and posterior-most aspect of the right insular cortex may have occurred due to diaschisis. Diaschisis has been described as neurophysiological changes that occur in regions remote from a focal brain lesion. Although hydrocephalus and vasospasm due to aneurysmal rupture may explain the distal foci of impairment, the gradual onset of dysgeusia is more indicative of diaschisis. The perception of sweet now tasting bitter suggests that, in the absence of sweet taste reception, the intrinsic bitter taste of food is being perceived rather than sweet. In the evaluation and treatment of taste parageusia secondary to cerebrovascular injury, prophylactic neuroprotective measures may be worthwhile. Further investigation is warranted.

Keywords: diaschisis, dysgeusia, stroke, taste

Procedia PDF Downloads 164
557 Immuno-Protective Role of Mucosal Delivery of Lactococcus lactis Expressing Functionally Active JlpA Protein on Campylobacter jejuni Colonization in Chickens

Authors: Ankita Singh, Chandan Gorain, Amirul I. Mallick

Abstract:

Successful adherence to mucosal epithelial cells is the key early step in Campylobacter jejuni (C. jejuni) pathogenesis. A set of Surface Exposed Colonization Proteins (SECPs) are among the major factors involved in host cell adherence and invasion by C. jejuni. Among them, the constitutively expressed surface-exposed lipoprotein adhesin of C. jejuni, JlpA, interacts with intestinal heat shock protein 90 (hsp90α) and contributes to disease progression by triggering a pro-inflammatory response via activation of the NF-κB and p38 MAP kinase pathways. Together with its surface expression, high sequence conservation, and the predicted predominance of several B-cell epitopes, the JlpA protein has the potential to become an effective vaccine candidate against a wide range of Campylobacter species, including C. jejuni. Given that chickens are the primary source of C. jejuni and that persistent gut colonization remains a major cause of foodborne disease in humans, the present study explicitly used chickens as the model to test the immuno-protective efficacy of the JlpA protein. Taking into account that the gastrointestinal tract is the focal site of C. jejuni colonization, and to exploit the benefit of mucosal (intragastric) delivery of the JlpA protein, a food-grade, nisin-inducible lactic acid bacterium, Lactococcus lactis (L. lactis), was engineered to express recombinant JlpA protein (rJlpA) on the bacterial surface. Following evaluation of optimal surface expression and functionality of the recombinant JlpA protein expressed by recombinant L. lactis (rL. lactis), the immuno-protective role of intragastric administration of live rL. lactis was assessed in commercial broiler chickens. In addition to a significant elevation of antigen-specific mucosal immune responses in the intestine of chickens that received three doses of rL. lactis, marked upregulation of Toll-like receptor 2 (TLR2) gene expression in association with mixed pro-inflammatory responses (both Th1 and Th17 type) was observed. Furthermore, intragastric delivery of rJlpA expressed by rL. lactis, but not the injectable form, resulted in a significant reduction in C. jejuni colonization in chickens, suggesting that mucosal delivery of live rL. lactis expressing JlpA serves as a promising vaccine platform to induce strong immuno-protective responses against C. jejuni in chickens.

Keywords: chickens, lipoprotein adhesin of Campylobacter jejuni, immuno-protection, Lactococcus lactis, mucosal delivery

Procedia PDF Downloads 123
556 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, the tools are rarely used because they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of three years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools before using it for their own product development efforts. A complementary website enhances the physical toolkit by providing more examples of the tools being used, as well as deeper discussions of each topic, allowing teams to adapt the process to their skills, preferences, and product type. Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants.

Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer

Procedia PDF Downloads 97
555 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort

Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri

Abstract:

Developmental stuttering affects about 10% of preschool children; despite the high percentage of natural recovery, a quarter of them will become adults who stutter. An effective early intervention should help those children with a high persistence risk. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication by modeling their own fluency. In this study, we provide a preliminary measure of the long-term effectiveness of the Psicodizione method on stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected 8 children with at least 3 high-persistence-risk factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for selection were: one parent who stutters (1 pt mother; 1.5 pt father), male gender, ≥ 4 years old at onset, and ≥ 12 months from onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3-26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency across a wide range of the children's life situations. The minimum worthwhile outcome was set at "mild evidence" on a 5-point Likert scale (1 = mild evidence, 5 = high-severity evidence). A second group of 6 children, among those treated with the Psicodizione method, was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all the following criteria: female gender, symptoms for less than 12 months before treatment, age of onset < 4 years old, and neither parent with persistent stuttering. At the time of this follow-up, the children in the high persistence risk group were aged 6-9 years, with a mean of 15 months post-treatment; 2 of them (25%) no longer stuttered, and 3 (37.5%) stuttered mildly, based on parental reports. In the low persistence risk group, the children were aged 4-6 years, with a mean of 14 months post-treatment, and 5 (84%) no longer stuttered (for the past 16 months on average). Overall, 62.5% of children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents reported better fluency than before the treatment. The low persistence risk group appeared to be representative of spontaneous recovery. This study's design could help to better evaluate the success of proposed interventions for stuttering preschool children and provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.

Keywords: early treatment, fluency, preschool children, stuttering

Procedia PDF Downloads 194
554 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today's businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. The network trust architecture has evolved from trust-untrust to Zero Trust, in which essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communication traffic between them, regardless of location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains prone to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for network communication, suffers from inherent design vulnerabilities; its communication and session management protocols, routing protocols, and security protocols are the cause of many major attacks. With the explosion of cyber security threats such as viruses, worms, rootkits, malware, and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become both crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol, routing protocols, and security protocols. It thereby forms the basis for the detection of attack classes and applies signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. The unsupervised learning algorithm applied to network audit data trails results in the detection of unknown intrusions. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
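
As an illustration of the rule-mining step described above, the following is a minimal Python sketch (not the authors' implementation) of Apriori-style association rule mining over network audit records; the event attributes, records, and thresholds are hypothetical.

```python
from itertools import combinations

# Hypothetical audit-trail records: each connection is a set of discrete attributes.
audit_trail = [
    {"proto=tcp", "service=http", "flag=SF", "label=normal"},
    {"proto=tcp", "service=http", "flag=S0", "label=dos"},
    {"proto=tcp", "service=private", "flag=S0", "label=dos"},
    {"proto=udp", "service=dns", "flag=SF", "label=normal"},
    {"proto=tcp", "service=private", "flag=S0", "label=dos"},
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.8

def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# 1. Frequent itemsets up to size 3 (brute force is acceptable for a sketch).
items = sorted(set().union(*audit_trail))
frequent = []
for size in range(1, 4):
    for combo in combinations(items, size):
        s = support(set(combo), audit_trail)
        if s >= MIN_SUPPORT:
            frequent.append((set(combo), s))

# 2. Rules X -> Y with confidence above the threshold; rules that predict a label
#    could be pushed to the prevention layer (e.g., an integrated firewall) as new signatures.
for itemset, s in frequent:
    if len(itemset) < 2:
        continue
    for k in range(1, len(itemset)):
        for antecedent in combinations(sorted(itemset), k):
            antecedent = set(antecedent)
            conf = s / support(antecedent, audit_trail)
            if conf >= MIN_CONFIDENCE:
                print(f"{sorted(antecedent)} -> {sorted(itemset - antecedent)} "
                      f"(support={s:.2f}, confidence={conf:.2f})")
```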

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 215
553 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Mumbai has traditionally been the epicenter of India's trade and commerce, and its existing major ports, Mumbai Port and Jawaharlal Nehru (JN) Port situated in the Thane estuary, are developing their waterfront facilities. Various developments over the past decades in this region have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancement of the shoreline, while the jetty near Ulwe faces a ship-scheduling problem due to the shallower depths between JN Port and Ulwe Bunder. In order to solve these problems, it is essential to have information about tide levels over a long duration, normally obtained by field measurements. However, field measurement is a tedious and costly affair, so artificial intelligence was applied to predict water levels by training a network on measured tide data for one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict yearly tide levels at the waterfront structures, namely Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for one lunar tidal cycle (2013) were used to train, validate, and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained with the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveals that: the measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum amplification of the tide by about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); the LM training algorithm is faster than GD, and network performance improves as the number of neurons in the hidden layer increases; and the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, allowing the planning of pumping operations at Pir-Pau and improved ship scheduling at Ulwe.
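
For readers unfamiliar with the network architecture described, the sketch below trains a two-layer feed-forward network by plain gradient descent on synthetic tide-like data (the study additionally used Levenberg-Marquardt training, which is not reproduced here); the data, harmonic features, and hyperparameters are illustrative assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a lunar-cycle tide record (the real study used measured
# levels at Apollo Bunder, Ulwe and Vashi): semi-diurnal + diurnal components plus noise.
t_hours = np.arange(0, 29.5 * 24, 0.5)
tide = 1.2 * np.sin(2 * np.pi * t_hours / 12.42) + 0.4 * np.sin(2 * np.pi * t_hours / 23.93)
tide += 0.05 * rng.standard_normal(t_hours.size)

# Inputs: harmonic features of time; target: water level.
X = np.column_stack([np.sin(2 * np.pi * t_hours / 12.42),
                     np.cos(2 * np.pi * t_hours / 12.42),
                     np.sin(2 * np.pi * t_hours / 23.93),
                     np.cos(2 * np.pi * t_hours / 23.93)])
y = tide.reshape(-1, 1)

n_in, n_hidden, lr = X.shape[1], 10, 0.05
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.1
b2 = np.zeros(1)

for epoch in range(2000):
    # Forward pass: tanh hidden layer, linear output.
    H = np.tanh(X @ W1 + b1)
    y_hat = H @ W2 + b2
    err = y_hat - y
    # Backward pass (mean squared error), plain gradient-descent update.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
r = np.corrcoef(pred.ravel(), y.ravel())[0, 1]
print(f"correlation coefficient R = {r:.3f}")
```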

Keywords: artificial neural network, back-propagation, tide data, training algorithm

Procedia PDF Downloads 462
552 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed on different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to address resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by simply deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
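
As a rough illustration of the kind of declarative input and grouping the paper describes, the sketch below takes a hypothetical architecture definition, embeds communicating instance pairs onto shared machines, and aggregates the remaining services by scaling behavior; it is not the i2kit implementation, and all names and fields are made up.

```python
from collections import defaultdict

# Hypothetical declarative architecture definition (not the i2kit format).
services = {
    "A": {"instances": 2, "scaling": "cpu"},
    "B": {"instances": 2, "scaling": "cpu"},
    "C": {"instances": 1, "scaling": "memory"},
    "D": {"instances": 1, "scaling": "memory"},
}
# Pairs that need a fast communication channel -> candidates for embedding.
channels = [("A", "B")]

machines = defaultdict(list)
embedded = set()

# Embedding: co-locate instance i of A with instance i of B on the same machine,
# so each pair talks over localhost and no load balancer is needed between them.
for a, b in channels:
    for i in range(min(services[a]["instances"], services[b]["instances"])):
        machines[f"m-{a}{b}-{i}"] += [f"{a}{i}", f"{b}{i}"]
    embedded.update((a, b))

# Aggregation: remaining services with the same scaling behavior share a machine
# behind a proxy, which reduces resource fragmentation.
by_scaling = defaultdict(list)
for name, spec in services.items():
    if name not in embedded:
        by_scaling[spec["scaling"]].append(name)
for scaling, group in by_scaling.items():
    for name in group:
        for i in range(services[name]["instances"]):
            machines[f"m-{scaling}"].append(f"{name}{i}")

for machine, containers in machines.items():
    print(machine, "->", containers)
```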

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 181
551 Spectroscopic (IR, Raman, UV-Vis) and Biological Study of Copper and Zinc Complexes and Sodium Salt with Cichoric Acid

Authors: Renata Swislocka, Grzegorz Swiderski, Agata Jablonska-Trypuc, Wlodzimierz Lewandowski

Abstract:

Forming a complex of a phenolic compound with a metal not only alters the physicochemical properties of the ligand (including an increase in stability or changes in lipophilicity), but also its biological activity, including antioxidant, antimicrobial, and many other properties. As part of our previous projects, we examined the physicochemical and antimicrobial properties of phenolic acids and their complexes with metals naturally occurring in foods. Previously we studied the complexes of manganese(II), copper(II), cadmium(II), and alkali metals with ferulic, caffeic, and p-coumaric acids. In the framework of this study, the physicochemical and biological properties of cichoric acid, its sodium salt, and its complexes with copper and zinc were investigated. Cichoric acid is a derivative of both caffeic acid and tartaric acid. It was first isolated from Cichorium intybus (chicory), but it also occurs in significant amounts in Echinacea, particularly E. purpurea, as well as in dandelion leaves, basil, lemon balm, and aquatic plants, including algae and sea grasses. A variety of methods were used to study the spectroscopic and biological properties of cichoric acid, its sodium salt, and its complexes with zinc and copper. Antioxidant properties were examined with respect to selected stable radicals (DPPH reduction and FRAP assays). As a result, the structure and spectroscopic properties of cichoric acid and its complexes with the selected metals were defined in the solid state and in solution. The IR and Raman spectra of cichoric acid displayed a number of bands derived from vibrations of the caffeic and tartaric acid moieties. At 1746 and 1716 cm-1, bands assigned to the vibrations of the carbonyl group of tartaric acid occurred. In the spectra of the metal complexes with cichoric acid, these bands disappeared, which indicated that the metal ion is coordinated by the carboxylic groups of tartaric acid. In the spectra of the sodium salt, characteristic wide bands of carboxylate anion vibrations occurred. In the spectra of cichoric acid and its salt and complexes, a number of bands derived from the vibrations of the aromatic ring (caffeic acid) were assigned. Upon metal-ligand attachment, the wavenumbers of these bands changed. The impact of the metals on the antioxidant properties of cichoric acid was also examined. Cichoric acid has a high antioxidant potential. Complexation by metals (zinc, copper) did not significantly affect its antioxidant capacity. The work was supported by the National Science Centre, Poland (grant no. 2015/17/B/NZ9/03581).

Keywords: chicoric acid, metal complexes, natural antioxidant, phenolic acids

Procedia PDF Downloads 322
550 Winter – Not Spring – Climate Drives Annual Adult Survival in Common Passerines: A Country-Wide, Multi-Species Modeling Exercise

Authors: Manon Ghislain, Timothée Bonnet, Olivier Gimenez, Olivier Dehorter, Pierre-Yves Henry

Abstract:

Climatic fluctuations affect the demography of animal populations, generating changes in population size, phenology, distribution, and community assemblages. However, very few studies have identified the underlying demographic processes. For short-lived species like common passerine birds, are these changes generated by changes in adult survival or in fecundity and recruitment? This study tests for an effect of annual climatic conditions (spring and winter) on annual, local adult survival at very large spatial (a country, 252 sites), temporal (25 years), and biological (25 species) scales. Constant Effort Site ringing has allowed the collection of capture-mark-recapture data for 100,000 adult individuals since 1989 across metropolitan France, thus documenting annual, local survival rates of the most common passerine birds. We specifically developed a set of multi-year, multi-species, multi-site Bayesian models describing variations in local survival and recapture probabilities. This method allows a statistically powerful hierarchical assessment (global versus species-specific) of the effects of climate variables on survival. A major part of the between-year variation in survival rate was common to all species (74% of between-year variance), whereas only 26% of the temporal variation was species-specific. Although changing spring climate is commonly invoked as a cause of population size fluctuations, spring climatic anomalies (mean precipitation or temperature for March-August) do not impact adult survival: only 1% of the between-year variation in species survival is explained by spring climatic anomalies. However, for sedentary birds, winter climatic anomalies (North Atlantic Oscillation) had a significant, quadratic effect on adult survival, with birds surviving less during intermediate years than during more extreme years. For migratory birds, we did not detect an effect of winter climatic anomalies (Sahel rainfall). We will analyze the life history traits (migration, habitat, thermal range) that could explain the differing sensitivity of species to winter climate anomalies. Overall, we conclude that changes in population sizes of passerine birds are unlikely to be the consequence of climate-driven mortality (or emigration) in spring but could be induced by other demographic parameters, like fecundity.

Keywords: Bayesian approach, capture-recapture, climate anomaly, constant effort sites scheme, passerine, seasons, survival

Procedia PDF Downloads 280
549 Effect of Pollutions on Mangrove Forests of Nayband National Marine Park

Authors: Esmaeil Kouhgardi, Elaheh Shakerdargah

Abstract:

The mangrove ecosystem is a complex of various inter-related elements in the land-sea interface zone which is linked with other natural systems of the coastal region, such as corals, sea grass, coastal fisheries, and beach vegetation. The mangrove ecosystem consists of water, muddy soil, trees, shrubs, and their associated flora, fauna, and microbes. It is a very productive ecosystem sustaining various forms of life. Its waters are nursery grounds for fish, crustaceans, and mollusks and also provide habitat for a wide range of aquatic life, while the land supports a rich and diverse flora and fauna; pollution, however, may affect these characteristics. Although Iran has the lowest share of Persian Gulf pollution among the eight littoral states, environmental experts are still deeply concerned about the serious consequences of pollution in the oil-rich gulf. The prolongation of critical conditions in the Persian Gulf has endangered its aquatic ecosystem. Water purification equipment, refineries, wastewater emitted by onshore installations, especially petrochemical plants, urban sewage, population density, and the extensive oil operations of Arab states are factors contaminating Persian Gulf waters. Population density has been the major cause of pollution and environmental degradation in the Persian Gulf. The Persian Gulf is a closed marine environment connected to open waterways through only one passage. It usually takes between three and four years for the gulf's water to be completely replaced. Therefore, any pollution entering the water will remain there for a relatively long time. Presently, the high temperature and excessive salt level in the water have exposed marine creatures to extra threats, which means they have to survive very tough conditions. The natural environment of the Persian Gulf is very rich, with good fishing grounds, extensive coral reefs, and abundant pearl oysters, but it has come under increasing pressure due to heavy industrialization and, in particular, the repeated major oil spillages associated with the various recent wars fought in the region. Pollution may cause the mortality of mangrove forests through its effects on the roots, leaves, and soil of the area. The study showed a high correlation between industrial pollution and mangrove forest health in southern Iran, and the increase in population, coupled with economic growth, inevitably caused the use of mangrove lands for various purposes such as the construction of roads, ports and harbors, industries, and urbanization.

Keywords: Mangrove forest, pollution, Persian Gulf, population, environment

Procedia PDF Downloads 383
548 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are often inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profile: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives more precise results. For the log layer and outer region, a mixing length equation derived from Von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method yields more accurate velocity profiles with the same value of the damping coefficient, valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
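
To make the modeling chain concrete, the sketch below integrates the simplified momentum balance for a wide, smooth open channel with a mixing-length eddy viscosity damped near the wall (a Van Driest-type function). The constants, grid, and flow parameters are illustrative assumptions, and the free-surface correction discussed above is omitted; this is not the paper's implementation.

```python
import numpy as np

# Flow and fluid parameters (illustrative values).
h = 0.10          # flow depth [m]
u_star = 0.01     # friction velocity [m/s]
nu = 1.0e-6       # kinematic viscosity of water [m^2/s]
kappa, A_plus = 0.41, 26.0

y = np.linspace(1e-6, h, 4000)           # wall-normal coordinate
y_plus = y * u_star / nu
# Mixing length with Van Driest damping near the wall.
l_m = kappa * y * (1.0 - np.exp(-y_plus / A_plus))

# Momentum balance for steady uniform flow: (nu + nu_t) du/dy = u_*^2 (1 - y/h),
# with nu_t = l_m^2 |du/dy|.  Solving the quadratic for du/dy >= 0:
tau = u_star**2 * (1.0 - y / h)           # kinematic shear-stress distribution
dudy = np.where(
    l_m > 1e-12,
    (-nu + np.sqrt(nu**2 + 4.0 * l_m**2 * tau)) / (2.0 * l_m**2),
    tau / nu,                             # viscous-sublayer limit (l_m -> 0)
)

# Integrate du/dy from the wall to obtain the velocity profile u(y).
u = np.concatenate([[0.0], np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))])
print(f"depth-averaged velocity (uniform grid) ≈ {np.mean(u):.3f} m/s")
```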

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 96
547 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing

Authors: Kedar Hardikar, Joe Varghese

Abstract:

Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components by the formation of a 'Faraday cage.' The reliability requirements for a conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the conductive adhesive. The degradation of the adhesive depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature and high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures to field performance, systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models such as the Arrhenius model are based on rate kinetics and typically rely on an assumption of linear degradation in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time dependence. It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive in the development of an acceleration factor. This method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the use of the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines for the suitability of the linear degradation approximation for such varied applications.
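
A minimal numeric sketch of the argument is given below: a Peck-type temperature-humidity rate model is combined with a square-root-of-time degradation law to show how the resulting time acceleration factor differs from the linear-degradation case. The activation energy, humidity exponent, and stress levels are hypothetical, not the values used in the study.

```python
import numpy as np

k_B = 8.617e-5          # Boltzmann constant [eV/K]

def peck_rate(T_K, RH, Ea=0.7, n=2.5):
    """Hypothetical Peck-type degradation-rate factor ~ RH^n * exp(-Ea / (k_B * T))."""
    return RH**n * np.exp(-Ea / (k_B * T_K))

# Stress (test) and use (field) conditions -- illustrative only.
T_test, RH_test = 358.15, 0.85    # 85 degC / 85% RH
T_field, RH_field = 308.15, 0.60  # 35 degC / 60% RH

rate_AF = peck_rate(T_test, RH_test) / peck_rate(T_field, RH_field)

# Linear degradation D(t) = k*t reaches a threshold at t ~ 1/k, so the time
# acceleration factor equals the rate ratio.
AF_linear = rate_AF

# Square-root degradation D(t) = k*sqrt(t) (moisture-diffusion limited) reaches
# the same threshold at t ~ 1/k^2, so the time acceleration factor is squared.
AF_sqrt = rate_AF**2

print(f"rate ratio (test/field):              {rate_AF:10.1f}")
print(f"time AF assuming linear degradation:  {AF_linear:10.1f}")
print(f"time AF with sqrt(t) degradation:     {AF_sqrt:10.1f}")
print("In this illustrative case, applying the linear AF to a sqrt(t) mechanism "
      f"would mis-state projected field life by a factor of {AF_sqrt / AF_linear:.1f}.")
```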

Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model

Procedia PDF Downloads 117
546 Thermal Comfort and Outdoor Urban Spaces in the Hot Dry City of Damascus, Syria

Authors: Lujain Khraiba

Abstract:

Recently, there has been broad recognition that micro-climate conditions contribute to the quality of life in outdoor urban spaces, from both economic and social viewpoints. The consideration of urban micro-climate and outdoor thermal comfort in urban design and planning processes has become one of the important aspects of current related studies. However, these aspects are so far not considered in urban planning regulations in practice, and these regulations are often poorly adapted to the local climate and culture. Therefore, there is a great need to adapt existing planning regulations to the local climate, especially in cities with extremely hot weather conditions. The overall aim of this study is to point out the complexity of the relationship between urban planning regulations, urban design, micro-climate, and outdoor thermal comfort in the hot, dry city of Damascus, Syria. The main aim is to investigate the temporal and spatial effects of micro-climate on urban surface temperatures and outdoor thermal comfort in different urban design patterns resulting from urban planning regulations during extreme summer conditions. In addition, studying different alternatives for mitigating surface temperature and thermal stress is also part of the aim. The novelty of this study is to highlight the combined effect of urban surface materials and vegetation in improving the thermal environment. The study is based on micro-climate simulations using ENVI-met 3.1. The input data were calibrated against micro-climate fieldwork conducted in different urban zones of Damascus. Different urban forms and geometries, including the old and the modern parts of Damascus, are thermally evaluated. The Physiological Equivalent Temperature (PET) index is used as an indicator for outdoor thermal comfort analysis. The study highlights the shortcomings of existing planning regulations in terms of solar protection, especially at street level. The results show that surface temperatures in Old Damascus are lower than in the modern part. This is basically due to the difference in urban geometries, which in Old Damascus prevent solar radiation from reaching the ground and heating up the surface, whereas in modern Damascus the streets are prescribed as wide spaces with high values of Sky View Factor (SVF of about 0.7). Moreover, the canyons in the old part are paved with cobblestones, whereas asphalt is the main material used in the streets of modern Damascus. Furthermore, Old Damascus is less thermally stressful than the modern part (the difference in the PET index is about 10 °C). The thermal situation is enhanced when different vegetation options are considered (an improvement of 13 °C in surface temperature is recorded in modern Damascus). The study recommends integrating a detailed landscape code at street level into the urban regulations of Damascus in order to achieve better urban development in harmony with micro-climate and comfort. Such a strategy would be very useful in decreasing urban warming in the city.

Keywords: micro-climate, outdoor thermal comfort, urban planning regulations, urban spaces

Procedia PDF Downloads 466
545 Applicability and Reusability of Fly Ash and Base Treated Fly Ash for Adsorption of Catechol from Aqueous Solution: Equilibrium, Kinetics, Thermodynamics and Modeling

Authors: S. Agarwal, A. Rani

Abstract:

Catechol is a natural polyphenolic compound that widely exists in higher plants such as teas, vegetables, fruits, tobaccos, and some traditional Chinese medicines. Fly ash-based zeolites are capable of adsorbing a wide range of pollutants, but the process of zeolite synthesis is time-consuming and requires technical setups by industry. The market costs of zeolites are quite high, restricting their use by small-scale industries for the removal of phenolic compounds. The present research proposes a simple method of alkaline treatment of FA to produce an effective adsorbent for catechol removal from wastewater. The effects of experimental parameters such as pH, temperature, initial concentration, and adsorbent dose on the removal of catechol were studied in a batch reactor. For this purpose, the adsorbent materials were mixed with aqueous solutions containing catechol at initial concentrations ranging from 50 to 200 mg/L and then shaken continuously in a thermostatic orbital incubator shaker at 30 ± 0.1 °C for 24 h. The samples were withdrawn from the shaker at predetermined time intervals and separated by centrifugation (centrifuge machine MBL-20) at 2000 rpm for 4 min to yield a clear supernatant for analysis of the equilibrium concentrations of the solutes. The concentrations were measured with a double-beam UV/Visible spectrophotometer (model Spectrscan UV 2600/02) at a wavelength of 275 nm for catechol. In the present study, the use of a low-cost adsorbent (BTFA) derived from coal fly ash (FA) was investigated as a substitute for expensive methods for the sequestration of catechol. The FA and BTFA adsorbents were well characterized by XRF, FE-SEM with EDX, FTIR, and surface area and porosity measurements, which established the chemical constituents, functional groups, and morphology of the adsorbents. The catechol adsorption capacities of the synthesized BTFA and the native material were determined. Adsorption increased slightly with an increase in pH value. The monolayer adsorption capacities of FA and BTFA for catechol were 100 mg g⁻¹ and 333.33 mg g⁻¹, respectively, and maximum adsorption occurred within 60 minutes for both adsorbents used in this test. The equilibrium data are best fitted by the Freundlich isotherm on the basis of error analysis (RMSE, SSE, and χ²). Adsorption was found to be spontaneous and exothermic on the basis of the thermodynamic parameters (ΔG°, ΔS°, and ΔH°). The pseudo-second-order kinetic model better fitted the data for both FA and BTFA. BTFA showed a larger adsorption capacity, higher separation selectivity, and better recyclability than FA. These findings indicate that BTFA could be employed as an effective and inexpensive adsorbent for the removal of catechol from wastewater.
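
For readers wishing to reproduce this kind of analysis, the sketch below fits a Freundlich isotherm and a pseudo-second-order kinetic model to hypothetical equilibrium and kinetic data with SciPy; the numbers are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- Freundlich isotherm: qe = K_F * Ce^(1/n) ----------------------------------
def freundlich(Ce, K_F, n):
    return K_F * Ce ** (1.0 / n)

# Hypothetical equilibrium data: Ce [mg/L], qe [mg/g].
Ce = np.array([5.0, 12.0, 25.0, 48.0, 80.0])
qe = np.array([40.0, 75.0, 120.0, 190.0, 260.0])

(K_F, n), _ = curve_fit(freundlich, Ce, qe, p0=[10.0, 2.0])
rmse = np.sqrt(np.mean((qe - freundlich(Ce, K_F, n)) ** 2))
print(f"Freundlich: K_F = {K_F:.2f}, n = {n:.2f}, RMSE = {rmse:.2f} mg/g")

# --- Pseudo-second-order kinetics: qt = (k2 * qe^2 * t) / (1 + k2 * qe * t) -----
def pso(t, k2, qe_fit):
    return (k2 * qe_fit ** 2 * t) / (1.0 + k2 * qe_fit * t)

# Hypothetical kinetic data: t [min], qt [mg/g].
t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])
qt = np.array([90.0, 150.0, 210.0, 240.0, 255.0, 260.0])

(k2, qe_k), _ = curve_fit(pso, t, qt, p0=[0.001, 300.0])
print(f"Pseudo-second-order: k2 = {k2:.4f} g/(mg·min), qe = {qe_k:.1f} mg/g")
```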

Keywords: catechol, fly ash, isotherms, kinetics, thermodynamic parameters

Procedia PDF Downloads 111
544 Spatial Direct Numerical Simulation of Instability Waves in Hypersonic Boundary Layers

Authors: Jayahar Sivasubramanian

Abstract:

Understanding the laminar-turbulent transition process in hypersonic boundary layers is crucial for designing viable high-speed flight vehicles. The study of transition becomes particularly important in the high-speed regime due to the effect of transition on aerodynamic performance and heat transfer. However, even after many years of research, the transition process in hypersonic boundary layers is still not well understood. This lack of understanding of the physics of the transition process is a major impediment to the development of reliable transition prediction methods. Towards this end, spatial Direct Numerical Simulations are conducted to investigate the instability waves generated by a localized disturbance in a hypersonic flat-plate boundary layer. In order to model a natural transition scenario, the boundary layer was forced by a short-duration (localized) pulse through a hole on the surface of the flat plate. The pulse disturbance developed into a three-dimensional instability wave packet which consisted of a wide range of disturbance frequencies and wave numbers. First, the linear development of the wave packet was studied by forcing the flow with a low amplitude (0.001% of the free-stream velocity). The dominant waves within the resulting wave packet were identified as two-dimensional second-mode disturbance waves. Hence, the wall-pressure disturbance spectrum exhibited a maximum at the spanwise mode number k = 0. The spectrum broadened in the downstream direction, and the lower-frequency first-mode oblique waves were also identified in the spectrum. However, the peak amplitude remained at k = 0 and shifted to lower frequencies in the downstream direction. In order to investigate the nonlinear transition regime, the flow was forced with a higher-amplitude disturbance (5% of the free-stream velocity). The developing wave packet grows linearly at first before reaching the nonlinear regime. The wall-pressure disturbance spectrum confirmed that the wave packet developed linearly at first. The response of the flow to the high-amplitude pulse disturbance indicated the presence of a fundamental resonance mechanism. Lower-amplitude secondary peaks were also identified in the disturbance wave spectrum at approximately half the frequency of the high-amplitude frequency band, which would be an indication of a sub-harmonic resonance mechanism. The disturbance spectrum indicates, however, that fundamental resonance is much stronger than sub-harmonic resonance.
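
The spectral post-processing described above can be illustrated with a toy example: the sketch below builds a synthetic wall-pressure signal p(t, z), takes a temporal/spanwise Fourier transform, and reports where the disturbance spectrum peaks. The frequencies, wavenumbers, and amplitudes are made up for illustration and bear no relation to the simulation data.

```python
import numpy as np

# Synthetic wall-pressure record p(t, z) at one streamwise station:
# a dominant two-dimensional (spanwise mode k = 0) wave plus a weaker
# oblique, lower-frequency component and a little noise.
nt, nz = 1024, 64
dt = 1.0 / nt
t = np.arange(nt) * dt
z = np.linspace(0.0, 1.0, nz, endpoint=False)
T, Z = np.meshgrid(t, z, indexing="ij")

p = (1.0 * np.cos(2 * np.pi * 120.0 * T)                                # 2-D wave, k = 0
     + 0.3 * np.cos(2 * np.pi * 55.0 * T) * np.cos(2 * np.pi * 2 * Z)   # oblique wave, k = +/-2
     + 0.02 * np.random.default_rng(1).standard_normal((nt, nz)))

# Temporal (rfft) and spanwise (fft) transform -> frequency / spanwise-mode spectrum.
P = np.fft.fft(np.fft.rfft(p, axis=0), axis=1)
amp = np.abs(P) / (nt * nz)
freqs = np.fft.rfftfreq(nt, d=dt)

# Locate the spectral peak, ignoring the mean (zero-frequency) row.
i, k = np.unravel_index(np.argmax(amp[1:, :]), amp[1:, :].shape)
print(f"spectral peak at f ≈ {freqs[i + 1]:.0f} (arbitrary units), spanwise mode k = {k}")
```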

Keywords: boundary layer, DNS, hyper sonic flow, instability waves, wave packet

Procedia PDF Downloads 171
543 Polymeric Composites with Synergetic Carbon and Layered Metallic Compounds for Supercapacitor Application

Authors: Anukul K. Thakur, Ram Bilash Choudhary, Mandira Majumder

Abstract:

In this technologically driven world, it is necessary to develop better, faster, and smaller electronic devices for various applications to keep pace with fast-developing modern life. It is also necessary to develop sustainable and clean sources of energy in an era in which the environment is threatened by pollution and its severe consequences. Supercapacitors have gained tremendous attention in recent years because of their attractive properties: they are essentially maintenance-free, offer high specific power and power density, show excellent pulse charge/discharge characteristics and long cycle life, require a very simple charging circuit, and operate safely. Binary and ternary composites of conducting polymers with carbon and other layered transition metal dichalcogenides have shown tremendous progress in the last few decades. Compared with bulk conducting polymers, such composites have gained more attention because of their high electrical conductivity, large surface area, short ion-transport lengths, and superior electrochemical activity. These properties make them very suitable for several energy storage applications. Carbon materials, on the other hand, have also been studied intensively, owing to their large specific surface area, very light weight, excellent chemical-mechanical properties, and wide operating temperature range. They have been extensively employed in the fabrication of carbon-based energy storage devices and as electrode materials in supercapacitors. Incorporation of carbon materials into the polymers increases the electrical conductivity of the resulting polymeric composite owing to the high electrical conductivity, high surface area, and interconnectivity of the carbon. Furthermore, polymeric composites based on layered transition metal dichalcogenides such as molybdenum disulfide (MoS2) are also considered important because these are thin, indirect-band-gap semiconductors with a band gap of around 1.2 to 1.9 eV. Among the various 2D materials, MoS2 has received much attention because of its unique structure, consisting of a graphene-like hexagonal arrangement of Mo and S atoms stacked layer by layer to give S-Mo-S sandwiches with weak van der Waals forces between them. It shows higher intrinsic fast ionic conductivity than oxides and higher theoretical capacitance than graphite.

Keywords: supercapacitor, layered transition-metal dichalcogenide, conducting polymer, ternary, carbon

Procedia PDF Downloads 235
542 Prenatal Genetic Screening and Counselling Competency Challenges of Nurse-Midwife

Authors: Girija Madhavanprabhakaran, Frincy Franacis, Sheeba Elizabeth John

Abstract:

Introduction: A wide range of prenatal genetic screening tests has been introduced as the incidence of congenital anomalies increases, even in low-risk pregnancies, and such screening is an emerging standard of care. As frontline caregivers, nurses and midwives have critical roles and responsibilities, since they work alongside couples to provide evidence-based, supportive, educative care. The growth in genetic disorders and the advances in prenatal genetic screening, combined with limited genetic counselling facilities, require nurses and midwives to have essential competencies to help couples make informed decisions. Objective: This integrative literature review aimed to explore nurse-midwives' knowledge of and role in prenatal screening and genetic counselling competency, and the challenges they face in serving all pregnant women, empowering their autonomy in decision making, and ensuring psychological comfort. Method: An electronic search using the keywords prenatal screening, genetic counselling, prenatal counselling, nurse midwife, nursing education, genetics, and genomics was conducted in PubMed, Scopus, Medline, and Google Scholar. Finally, based on the inclusion criteria, 8 relevant articles were included. Results: The main review results suggest that nurses and midwives lack the essential support, knowledge, or confidence to provide genetic counselling and to help couples ethically in ways that ensure client autonomy and decision making. The majority of nurses and midwives reported inadequate knowledge of genetic screening and of their roles in obtaining family histories and pedigrees and providing genetic information to an affected client or high-risk families. A deficiency of well-recognized and influential clinical academic midwives in midwifery practice was also reported. The evidence recommends updated and sound educational training to improve nurse-midwife competence and confidence. Conclusion: Overcoming the challenges to achieving informed choices about fetal anomaly screening globally is a major concern. Lack of adequate knowledge and counselling competency, insufficient communication, and the need for education and policy are major areas to address. Prenatal nurses' and midwives' knowledge of prenatal genetic screening and essential counselling competencies can help the majority of pregnant women around the globe become better-informed decision-makers, enhance their autonomy, and reduce ethical dilemmas.

Keywords: challenges, genetic counselling, prenatal screening, prenatal counselling

Procedia PDF Downloads 178
541 A Comparative Study of Environmental, Social and Economic Cross-Border Cooperation in Post-Conflict Environments: The Israel-Jordan Border

Authors: Tamar Arieli

Abstract:

Cross-border cooperation has long been hailed as a means of stabilizing and normalizing relations between former enemies. Cooperation in problem-solving and the realization of local interests in post-conflict environments can indeed serve as a basis for developing dialogue and meaningful relations between neighbors across borders. Hence, formerly sealed borders have the potential to serve as a basis for generating local and national perceptions of interdependence and as a buffer against the resumption of conflict. Central questions for policy-makers and third parties are how to facilitate cross-border cooperation and which areas of cooperation best serve to normalize post-conflict border regions. The Israel-Jordan border functions as a post-conflict border: it has been a peaceful border since the 1994 Israel-Jordan peace treaty, yet cross-border relations are defined by the highly securitized nature of the border region and the ongoing Arab-Israeli regional conflict. This case study is based on long-term qualitative research carried out in the border regions of both Israel and Jordan, which mapped and analyzed cross-border cooperation in a wide range of activities: social interactions sponsored by peace-facilitating NGOs, government-sponsored agricultural cooperation, municipally initiated emergency planning in cross-border continuous urban settings, private cross-border business ventures, and various environmental cooperative initiatives. These cooperative initiatives are evaluated through multiple interviews carried out with initiators of and partners in cross-border cooperation, as well as analysis of documentation, funding, and media. The initiatives are compared in terms of levels of local and official cross-border awareness and involvement, as well as sustainability over time. This research identifies environmental cooperation as the most sustainable area of cross-border cooperation and as the most conducive to generating perceptions of regional interdependence. This is a variation on the 'New Middle East' vision of business-based cooperation leading to conflict amelioration and regional stability. Environmental cooperation, serving the public good rather than personal profit, enjoys social legitimization even in the face of the widespread anti-normalization sentiments common in the post-conflict environment. This insight is examined in light of philosophical and social aspects of the natural environment and its social perceptions. The research has theoretical implications for better understanding the dynamics of cooperation and conflict, as well as practical ramifications for practitioners in border region policy and management.

Keywords: borders, cooperation, post-conflict, security

Procedia PDF Downloads 293
540 Bis-Azlactone Based Biodegradable Poly(Ester Amide)s: Design, Synthesis and Study

Authors: Kobauri Sophio, Kantaria Tengiz, Tugushi David, Puiggali Jordi, Katsarava Ramaz

Abstract:

Biodegradable biomaterials (BB) are of high interest for numerous applications in modern medicine as resorbable surgical materials and drug delivery systems. Such materials can be cleared from the body after fulfilling their function, which excludes the need for surgical intervention to remove them. Among the most promising BB are amino acid-based biodegradable poly(ester amide)s (PEAs), which are composed of naturally occurring (α-amino acids) and non-toxic building blocks such as fatty diols and dicarboxylic acids. The key bis-nucleophilic monomers for synthesizing the PEAs are diamine-diesters, di-p-toluenesulfonic acid salts of bis-(α-amino acid)-alkylenediesters (TAADs), which form the PEAs after step-growth polymerization (polycondensation) with bis-electrophilic counterparts, namely activated diesters of dicarboxylic acids. The PEAs combine all the advantages of the 'parent polymers', polyesters (PEs) and polyamides (PAs): the ability to biodegrade (PEs), a high affinity with tissues, and a wide range of desired mechanical properties (PAs). The scope of applications of the PEAs can be substantially expanded by their functionalization, e.g., through the incorporation of hydrophobic fragments into the polymeric backbones. Hydrophobically modified PEAs can form non-covalent adducts with various compounds, which makes them attractive as drug carriers. For the hydrophobic modification of the PEAs, we selected the so-called 'Azlactone Method', based on the application of p-phenylene-bis-oxazolinons (bis-azlactones, BALs) as active bis-electrophilic monomers in step-growth polymerization with TAADs. Interaction of BALs with TAADs resulted in PEAs with low molecular weights (Mw 2,800-19,600 Da) and poor material properties. High-molecular-weight PEAs (Mw up to 100,000) with desirable material properties were synthesized after replacing a part of the BALs with an activated diester, di-p-nitrophenylsebacate, or a part of the TAAD with an alkylenediamine, 1,6-hexamethylenediamine. The new hydrophobically modified PEAs were characterized by FTIR, NMR, GPC, and DSC. It was shown that after the hydrophobic modification the PEAs retain their biodegradability (in vitro study catalyzed by α-chymotrypsin and lipase) and are of interest for constructing resorbable surgical and pharmaceutical devices, including drug-delivering containers such as microspheres. The new PEAs are insoluble in hydrophobic organic solvents such as chloroform or dichloromethane (they only swell), which allowed the elaboration of a new technology for fabricating microspheres.

Keywords: amino acids, biodegradable polymers, bis-azlactones, microspheres

Procedia PDF Downloads 164
539 An Analysis of LoRa Networks for Rainforest Monitoring

Authors: Rafael Castilho Carvalho, Edjair de Souza Mota

Abstract:

As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those based on Low Power Wide Area Networks (LPWAN). Promising, reliable, secure, and with low energy consumption, LPWAN can connect thousands of IoT devices; in particular, LoRa is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, and in particular the Amazon Rainforest, is a challenge for these technologies, requiring work to identify and validate the use of the technology in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon Region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both parts were implemented with Arduino and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real application the device must run without maintenance for long periods of time. With these constraints in mind, parameters such as Spreading Factor (SF) and Coding Rate (CR), as well as different antenna heights and distances, were tuned to improve connectivity quality, measured by RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. Distances exceeding 200 m soon proved difficult for establishing communication due to the dense foliage and high humidity. The optimal combinations of SF-CR values were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm; these are the best settings found in this study so far. Rain and climatic conditions imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa deployment must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
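
One practical consequence of the SF/CR tuning mentioned above is its effect on airtime, and hence on battery life. The sketch below applies the standard LoRa time-on-air formula from the Semtech SX1276 datasheet to the two best settings reported here; the payload size and bandwidth are assumptions, since the abstract does not give them.

```python
import math

def lora_time_on_air(sf, cr_denom, payload_bytes, bw_hz=125_000,
                     preamble_symbols=8, explicit_header=True, crc=True):
    """Time on air [s] of one LoRa packet (Semtech SX1276 datasheet formula).

    cr_denom is the coding-rate denominator, e.g. 5 for CR 4/5
    (the "CR 5" setting referred to in the text).
    """
    cr = cr_denom - 4                                      # coding-rate index 1..4
    de = 1 if (bw_hz == 125_000 and sf >= 11) else 0       # low-data-rate optimization
    ih = 0 if explicit_header else 1
    t_sym = (2 ** sf) / bw_hz
    payload_symbols = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4),
        0,
    )
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return t_preamble + payload_symbols * t_sym

# Assumed payload: nine water-quality parameters as 2-byte values plus a small header.
payload = 9 * 2 + 4
for sf, cr_denom in [(8, 5), (9, 5)]:
    toa = lora_time_on_air(sf, cr_denom, payload)
    print(f"SF{sf} CR4/{cr_denom}: time on air ≈ {toa * 1e3:.1f} ms")
```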

Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest

Procedia PDF Downloads 65
538 The Relationship between Incidental Emotions, Risk Perceptions and Type of Army Service

Authors: Sharon Garyn-Tal, Shoshana Shahrabani

Abstract:

Military service in general, and in combat units in particular, can be physically and psychologically stressful. Therefore, the type of service may have significant implications for soldiers during and after their military service, including effects on emotions, judgments and risk perceptions. Previous studies have focused on risk propensity and risky behavior among soldiers; however, there is still a lack of knowledge about the impact of the type of army service on risk perceptions. The current study examines the effect of type of army service (combat versus non-combat service) and negative incidental emotions on risk perceptions. In 2014, a survey was conducted among 153 combat and non-combat Israeli soldiers. The survey was distributed at train stations and central bus stations in various places in Israel among soldiers waiting for the train or bus. Participants answered questions related to the levels of incidental negative emotions they felt, their risk perceptions (the chances of being hurt by a terror attack, violent crime or car accident), and personal details including type of army service. The data in this research are unique because military service in Israel is compulsory, so the population serving in the army is broad and diverse. The results indicate that currently serving combat participants were more pessimistic in their risk perceptions (for all types of risks) compared to the currently serving non-combat participants. Since combat participants probably experienced severe and distressing situations during their service, they became more pessimistic regarding their probability of being hurt in different situations in life. This result supports the availability heuristic theory and the findings of previous studies indicating that those who directly experience distressing events tend to overestimate danger. The findings also indicate that soldiers who feel higher levels of the incidental emotions of fear and anger have more pessimistic risk perceptions; in particular, respondents who experienced combat army service have more pessimistic risk perceptions when they feel higher levels of fear. These results can be explained by the compulsory army service in Israel, which constitutes a focused threat to soldiers' safety during their period of service. Thus, in this stressful environment, negative incidental emotions even during routine times correlate with higher risk perceptions. In conclusion, the current study results suggest that combat army service shapes risk perceptions and the way young people control their negative incidental emotions in everyday life. Recognizing the factors affecting risk perceptions among soldiers is important for better understanding the impact of army service on young people.

Keywords: army service, combat soldiers, incidental emotions, risk perceptions

Procedia PDF Downloads 220
537 Dangerous Words: A Moral Economy of HIV/AIDS in Swaziland

Authors: Robin Root

Abstract:

A fundamental premise of medical anthropology is that clinical phenomena are simultaneously cultural, political, and economic: none more so than the linked acronyms HIV/AIDS. For the medical researcher, HIV/AIDS signals an epidemiological pandemic and a pathophysiology. For persons diagnosed with an HIV-related condition, the acronym often conjures dread, too often marking and marginalizing the afflicted irretrievably. Critical medical anthropology is uniquely equipped to theorize the linkages that bind individual and social wellbeing to global structural and culture-specific phenomena. This paper reports findings from an anthropological study of HIV/AIDS in Swaziland, site of the highest HIV prevalence in the world. The project, initiated in 2005, has documented experiences of HIV/AIDS, religiosity, and treatment and care as well as drought and famine. Drawing on interviews with Swazi religious and traditional leaders about their experiences of leadership amidst worsening economic conditions, environmental degradation, and an ongoing global health crisis, the paper provides uncommon insights for global health practitioners whose singular paradigm for designing and delivering interventions is biomedically based. In contrast, this paper details the role of local leaders in mediating extreme social suffering and resilience in ways that medical science cannot model but which radically impact how sickness is experienced and how health services are delivered and accessed. Two concepts help to organize the paper's argument. First, a 'moral economy of language' is central to exposing the implicit 'technologies of knowledge' that inhere in scientific and religious discourses of HIV/AIDS; people draw upon these discourses strategically to navigate highly vulnerable conditions. Second, Paulo Freire's ethnographic focus on a culture's 'dangerous words' opens up for examination how 'sex' is dangerous for religion and 'god' is dangerous for science. The paper interrogates hegemonic and 'lived' discourses, both biomedical and religious, and contributes to an important literature on the moral economies of health, a framework of explication and, importantly, action appropriate to a wide range of contemporary global health phenomena. The paper concludes by asserting that it is imperative that global health planners reflect upon and 'check' their hegemonic policy platforms by, one, collaborating with local authoritative agents of 'what sickness means and how it is best treated,' and, two, taking account of the structural barriers to achieving good health.

Keywords: Africa, biomedicine, HIV/AIDS, qualitative research, religion

Procedia PDF Downloads 97
536 Structure Clustering for Milestoning Applications of Complex Conformational Transitions

Authors: Amani Tahat, Serdal Kirmizialtin

Abstract:

Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS) and Transition Path Sampling are the prime choices for extending the timescale of all-atom Molecular Dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational space into natural subgroups based on their similarity, an essential statistical methodology for analyzing the numerous sets of empirical data produced by Molecular Dynamics (MD) simulations. The local transition kernel among these clusters is later used to connect the metastable states, via a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures can easily confound the identification of clusters when the traditional hierarchical clustering methodology is used. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work, we present two cluster analysis methods applied to the cis-trans isomerism of the dinucleotide AA. The choice of nucleic acids over the more commonly used proteins for this cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model of conformational transitions; ii) the conformational space of nucleic acids is high dimensional. A diverse set of internal coordinates is necessary to describe the metastable states in nucleic acids, posing a challenge in studying the conformational transitions. Hence, we need improved clustering methods that accurately identify the AA structure in its metastable states in a way that is robust to a wide range of confusing data conditions. The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data. The Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second. The performance of the neural network and of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of, and the need for, determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confusing empirical data when studying conformational transitions in biomolecules.
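
To make the comparison between the two clustering routes concrete, the sketch below clusters a toy set of conformational feature vectors with single-linkage hierarchical clustering (the approach available in the GROMACS toolchain) and with a deliberately minimal one-dimensional self-organizing map. The synthetic data, the number of map nodes, and the learning schedule are illustrative assumptions, not the settings used for the AA dinucleotide trajectories.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy stand-in for MD-derived features (e.g., a few internal coordinates of
# the AA dinucleotide); real input would come from the trajectory analysis.
rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 4))
                    for c in ([0, 0, 0, 0], [2, 2, 0, 0], [0, 2, 2, 2])])

# 1) Single-linkage hierarchical clustering, cut into three clusters.
tree = linkage(pdist(frames), method="single")
hier_labels = fcluster(tree, t=3, criterion="maxclust")

# 2) A minimal one-dimensional self-organizing (Kohonen) map with 3 nodes.
n_nodes, n_iter = 3, 3000
weights = frames[rng.choice(len(frames), n_nodes)].copy()
for t in range(n_iter):
    x = frames[rng.integers(len(frames))]
    lr = 0.5 * (1 - t / n_iter) + 0.01            # decaying learning rate
    sigma = 1.0 * (1 - t / n_iter) + 0.1          # decaying neighborhood width
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
    h = np.exp(-(np.arange(n_nodes) - bmu) ** 2 / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)    # pull nodes toward the sample
som_labels = np.argmin(
    np.linalg.norm(frames[:, None, :] - weights[None], axis=2), axis=1)

print("hierarchical cluster sizes:", np.bincount(hier_labels)[1:])
print("SOM cluster sizes:         ", np.bincount(som_labels))
```

In a milestoning workflow, the resulting cluster labels (or map nodes) would define the milestones to which each trajectory frame is assigned, so the robustness of this assignment directly affects the kinetics extracted downstream.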

Keywords: milestoning, self organizing map, single linkage, structure clustering

Procedia PDF Downloads 205
535 Findings: Impact of a Sustained Health Promoting Workplace on Stock Price Performance and Beta; A Singapore Case

Authors: Wee Tong Liaw, Elaine Wong Yee Sing

Abstract:

The main objective of this study is to establish the significance of a sustained health-promoting workplace for stock and portfolio returns, focusing on companies listed on the Singapore stock exchange and using a two-factor model comprising the single-factor CAPM and a 'health promoting workplace' factor. The 'health promoting workplace' factor represents the excess returns between two portfolios of component stocks that, when combined, would represent a top-tier stock market index in Singapore, namely the STI index. The first portfolio represents companies that have been independently assessed by Singapore's HEALTH Award (SHA) to have a sustained and comprehensive health-promoting workplace (the SHA-STI portfolio), and the second portfolio represents companies that have not been independently assessed (the Non-SHA STI portfolio). Since 2001, many companies in Singapore have voluntarily participated in the bi-annual Singapore HEALTH Award initiated by the Health Promotion Board of Singapore (HPB). The Singapore HEALTH Award (SHA) is an industry-wide award and assessment process that assesses and recognizes employers in Singapore for implementing a comprehensive and sustainable health promotion programme at their workplaces. When using a ten-year holding period instead of a one-year holding period, excess returns of the SHA-STI portfolio over the Non-SHA STI portfolio were consistently observed over all test periods from 2001 to 2013. In addition, when applied to the SHA-STI portfolio, the two-factor model consistently revealed higher explanatory power than the single-factor CAPM across all test periods, both for the portfolio and for each of its individual component stocks. However, this study did not find any additional benefit from selecting listed companies that attained a higher level of achievement in the Singapore HEALTH Award. The results give further insights to investors and fund managers who intend to consider a health-promoting workplace as a risk factor in their stock or portfolio selection process, in particular investors who prefer the STI's component stocks and have a longer investment horizon. Key micro factors such as management ability, business development strategies and production capabilities that meet the needs of the market create demand for a company's products or services and consequently contribute to its top line and profitability. Thereafter, the existence of a sustainable health-promoting workplace would be a key catalytic factor in sustaining the productive workforce needed to support the continued success of a profitable business.
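
A minimal sketch of the two-factor regression implied by the abstract is given below: a stock's excess return is regressed on the market excess return and on a 'health promoting workplace' factor defined as the return spread between the SHA-STI and Non-SHA STI portfolios. The simulated return series and factor loadings are purely illustrative assumptions; only the model form follows the study.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 120  # monthly observations (illustrative, not the study's data)

# Simulated excess returns; in the study these would be realized returns of
# a component stock, the STI market proxy, and the SHA-minus-non-SHA spread.
mkt_excess = rng.normal(0.005, 0.04, T)         # market factor (R_m - R_f)
hpw_factor = rng.normal(0.002, 0.02, T)         # 'health promoting workplace' spread
stock_excess = (0.001 + 0.9 * mkt_excess + 0.4 * hpw_factor
                + rng.normal(0, 0.03, T))       # stock return to be explained

# Two-factor model: R_i - R_f = alpha + b_m*(R_m - R_f) + b_h*HPW + e
X = np.column_stack([np.ones(T), mkt_excess, hpw_factor])
coef, res, *_ = np.linalg.lstsq(X, stock_excess, rcond=None)
alpha, beta_mkt, beta_hpw = coef
r2 = 1 - res[0] / np.sum((stock_excess - stock_excess.mean()) ** 2)
print(f"alpha={alpha:.4f}  beta_mkt={beta_mkt:.2f}  "
      f"beta_hpw={beta_hpw:.2f}  R^2={r2:.2f}")
```

Comparing the R^2 of this regression with that of the single-factor CAPM (the same fit without the hpw_factor column) is one way to reproduce the kind of explanatory-power comparison the abstract reports.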

Keywords: asset pricing model, company's performance, stock returns, financial risk factor, sustained health promoting workplace

Procedia PDF Downloads 153
534 The Democracy of Love and Suffering in the Erotic Epigrams of Meleager

Authors: Carlos A. Martins de Jesus

Abstract:

The Greek Anthology, first put together in the tenth century AD, gathers in two separate books a large number of epigrams devoted to love and its consequences, of both heterosexual (book V) and homosexual (book XII) nature. While some poets wrote epigrams of only one genre (that is the case of Strato, II cent. AD, the organizer of a widespread garland of homosexual epigrams), several others composed within both categories, often using the same topics of love and suffering. Using Plato's theorization of two different kinds of Eros (Symp. 180d-182a), the popular (pandemos) and the celestial (ouranios), homoerotic epigrammatic love is more often associated with the first one, while heterosexual poetry tends to be connected to a higher form of love. This paper focuses on the epigrammatic production of a single first-century BC poet, Meleager, aiming to identify the similarities and differences in his singing of both kinds of love. From Meleager, the Greek Anthology (a garland whose origins have been traced back to the poet's own garland) preserves more than sixty heterosexual and forty-eight homosexual epigrams, an important and unprecedented number of poems that make it possible to trace a complete profile of his way of singing love. Meleager's poetry deals with personal experience and emotions, frequently with love and the unhappiness that usually comes from it. Most of the time he describes himself not as an active and engaged lover, but as one struck by the beauty of a woman or boy, i.e., at a stage prior to erotic consummation. His epigrams represent the unreal and fantastic (literally speaking) world of the lover, in which imagery and wordplay are used to convey emotion in the epigrams of both genres. Elsewhere Meleager surprises the reader by offering a surrealist or dreamlike landscape where everyday adventures are transcribed into elaborate metaphors for erotic feeling. For instance, in 12.81, the lovers are shipwrecked, and as soon as they have disembarked, they are promptly kidnapped by a figure who is both Eros and a beautiful boy. Particularly in the homosexual poems collected in Book XII (and it is worth asking why), mythology also plays an important role, namely in the figure and scene of Ganymede's abduction by Zeus to his royal court (12.70, 94). While mostly rejecting the Hellenistic model of the dramatic love epigram, in which a small everyday scene is portrayed (5.182 is a clear exception to this near-rule), Meleager focuses on the tumultuous inner life of his (poetic) lovers, in the realm of a subject that feels love and pain far beyond his or her erotic preferences. In relation to loving and suffering (mostly suffering, it has to be said), Meleager's love is therefore completely democratic. There is no real place in his epigrams for the traditional association mentioned before between homoeroticism and a carnal, erotic, pornographic love, with heterosexual love supposedly being more even and pure, so to speak.

Keywords: epigram, erotic epigram, Greek Anthology, Meleager

Procedia PDF Downloads 237
533 Downward Vertical Evacuation for Disabilities People from Tsunami Using Escape Bunker Technology

Authors: Febrian Tegar Wicaksana, Niqmatul Kurniati, Surya Nandika

Abstract:

Indonesia is one of the countries facing a great number of disaster occurrences and threats, such as earthquakes, tsunamis, and volcanic eruptions, because it lies not only at the junction of three tectonic plates (the Eurasian, Indo-Australian, and Pacific plates) but also on the Ring of Fire. Recent research shows that there are areas on the southern coast of Java that could be devastated by a tsunami. A tsunami is a series of waves caused by the displacement of a large volume of water, generally in an ocean. When the waves enter shallow water, they may rise to several feet or, in rare cases, tens of feet, striking the coast with devastating force. Reference parameters include the magnitude, the depth of the epicentre, the distance between the epicentre and land, the water depth at each point, the time at which the waves reach the shore, and the growth of the waves. The interaction between these parameters produces a large variance in the tsunami wave. Based on this, we can formulate the preparations needed for disaster mitigation strategies, which play an important role in reducing the number of victims and the damage in the area. The reduction effort is directed at those who are most difficult to mobilize in a tsunami disaster area, such as the elderly, the sick, and people with disabilities. Until now, the method used to rescue people from a tsunami has been basic horizontal evacuation. This evacuation system is not optimal because it takes a long time and cannot easily be used by people with disabilities. The writers propose a vertical evacuation model with an escape bunker system. This bunker system is chosen because downward vertical evacuation is considered more efficient and faster, especially in coastal areas without any surrounding highlands. The downward evacuation system is better than upward evacuation because it avoids the risk of erosion of the ground around the structure, which can affect the building. The structure of the bunker and the evacuation process during, and even after, the disaster are the main priorities to be considered. The bunker must offer earthquake resistance, durability against the water stream, suitability for a variety of ground conditions, and a waterproof design. When the situation returns to normal, the evacuees can move to a safer place. The bunker will be located near hospitals and public places and will have a wide entrance supported by a large slide inside to assist people with disabilities. The escape bunker technology is expected to reduce the number of low-mobility victims in a tsunami.

Keywords: escape bunker, tsunami, vertical evacuation, mitigation, disaster management

Procedia PDF Downloads 474
532 Digital Value Co-Creation: The Case of Worthy a Virtual Collaborative Museum across Europe

Authors: Camilla Marini, Deborah Agostino

Abstract:

Cultural institutions provide more than service-based offers; indeed, they are experience-based contexts. A cultural experience is a special event that encompasses a wide range of values which, for visitors, are primarily cultural rather than economic and financial. Cultural institutions have always been characterized by inclusivity and participatory practices, but the advent of digital technologies has heightened their interest in collaborative practices and in the relationship with their audience. Indeed, digital technologies have deeply affected the cultural experience as it was traditionally conceived. Museums in particular, as traditional and authoritative cultural institutions, have been strongly challenged by digital technologies. They shifted from a collection-oriented toward a visitor-centered approach, and digital technologies generated a highly interactive ecosystem in which visitors play an active role, shaping their own cultural experience. Most of the studies that investigate value co-creation in museums adopt a single perspective, either that of the museum or that of the users, and the analysis of the convergence/divergence of these perspectives is still lacking. Additionally, many contributions focus on digital value co-creation as an outcome rather than as a process. This study aims to provide a joint perspective on digital value co-creation that includes both the museum and its visitors. It also examines the contribution of digital technologies to the value co-creation process, addressing the following research questions: (i) what are the convergence/divergence drivers of digital value co-creation, and (ii) how can digital technologies serve as means of value co-creation? The study adopts an action research methodology based on the case of WORTHY, an educational project that involves cultural institutions and schools all around Europe, creating a virtual collaborative museum. It represents a valuable case for the aim of the study since it has digital technologies at its core, and interaction through digital technologies is fundamental throughout the experience. Action research has been identified as the most appropriate methodology because it gives researchers direct contact with the field. Data have been collected through primary and secondary sources. Cultural mediators such as museums, teachers and students' families have been interviewed, while a focus group has been designed to interact with students, investigating all aspects of the cultural experience. Secondary sources encompassed project reports and website contents in order to deepen the perspective of the cultural institutions. Preliminary findings highlight the dimensions of digital value co-creation in cultural institutions from an integrated museum-visitor perspective and the contribution of digital technologies to the value co-creation process. The study outlines a twofold contribution that encompasses both an academic and a practitioner level. Indeed, it contributes to filling the gap in the cultural management literature about the convergence/divergence of service provider and user perspectives, and it also provides cultural professionals with guidelines on how to evaluate the digital value co-creation process.

Keywords: co-creation, digital technologies, museum, value

Procedia PDF Downloads 130
531 Shift from Distance to In-Person Learning of Indigenous People’s Schools during the COVID 19 Pandemic: Gains and Challenges

Authors: May B. Eclar, Romeo M. Alip, Ailyn C. Eay, Jennifer M. Alip, Michelle A. Mejica, Eloy C. Eclar

Abstract:

The COVID-19 pandemic has significantly changed the educational landscape of the Philippines. The groups most affected by these changes are the poor and those living in Geographically Isolated and Depressed Areas (GIDA), such as the Indigenous Peoples (IP). This was heavily experienced by the ten IP schools in Zambales, a province in the country. With this in mind, together with other factors related to safety, the Schools Division of Zambales selected these ten schools to conduct the pilot implementation of in-person classes two (2) years after the country-wide school closures. This study aimed to explore the lived experiences of the school heads of the first ten Indigenous Peoples (IP) schools that shifted from distance learning to limited in-person learning, including the challenges met and the coping mechanisms they set up to overcome them. The study is linked to experiential learning theory, as it focuses on the idea that the best way to learn is through experience. It made use of qualitative research, specifically phenomenology. All ten school heads of the IP schools were chosen as participants. Afterward, participants underwent semi-structured individual interviews and focus group discussions for triangulation. Data were analyzed through thematic analysis. The study found that most IP schools did not struggle to convince parents to send their children back to school, as the parents downplayed the pandemic threat due to their geographical location. The parents struggled the most during modular learning, since many of them are either illiterate, too old to teach their children, busy with their lands, or have too many children to teach. Moreover, there is a meager vaccination rate in the ten barangays where the schools are located because of local beliefs. In terms of financial needs, school heads did not find it difficult to adjust the schools to the new normal, even though funding was needed, because of the financial support coming from the central office. Technical assistance was also provided to the schools by division personnel. Teachers likewise welcomed the idea of shifting back to in-person classes; minor challenges were met but were solved immediately through various mechanisms. Learning losses were evident, since most learners struggled with essential reading, writing, and counting skills. Although the community has positively received the conduct of in-person classes, the challenges these IP schools had been experiencing pre-pandemic were also exacerbated by the school closures. It is therefore recommended that constant monitoring and provision of support continue in order to address the other challenges the ten IP schools are still experiencing with in-person classes.

Keywords: in-person learning, indigenous peoples, phenomenology, Philippines

Procedia PDF Downloads 97
530 The Advancement of Smart Cushion Product and System Design Enhancing Public Health and Well-Being at Workplace

Authors: Dosun Shin, Assegid Kidane, Pavan Turaga

Abstract:

According to the National Institutes of Health, living a sedentary lifestyle leads to a number of health issues, including increased risk of cardiovascular disease, type 2 diabetes, obesity, and certain types of cancers. This project brings together experts in multiple disciplines, combining product design, sensor design, algorithms, and health intervention studies to develop a product and system that helps reduce the amount of time spent sitting at the workplace. This paper illustrates ongoing improvements to the prototypes the research team developed in its initial research, including working prototypes with a software application that were developed and demonstrated for users. Additional modifications were made to improve functionality, aesthetics, and ease of use, which will be discussed in this paper. Extending the foundations created in the initial phase, our approach sought to further improve the product by conducting additional human factors research, studying deficiencies in competitive products, testing various materials and forms, developing working prototypes, and obtaining feedback from additional potential users. The solution consists of an aesthetically pleasing seat cover cushion that easily attaches to common office chairs found in most workplaces, ensuring that a wide variety of people can use the product. The product discreetly contains sensors that track when the user sits on the chair, sending information to a phone app that triggers reminders for users to stand up and move around after sitting for a set amount of time. This paper also presents the analysis of typical office aesthetics and the selected materials, colors, and forms that complemented the working environment. Comfort and ease of use remained a high priority as the design team sought to provide a product and system that integrates into the workplace. As the research team continues to test, improve, and implement this solution for the sedentary workplace, it seeks to create a viable product that acts as an impetus for a more active workday and lifestyle, further decreasing the proliferation of chronic disease and health issues among sedentary working people. This paper illustrates in detail the engineering processes, product design, methodology, and testing results.
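
The sit-detection and reminder behaviour described above can be captured by a simple threshold-and-timer loop. The sketch below is a hypothetical illustration: the pressure threshold, polling interval, reminder delay, and the sensor and notification stubs are all assumptions, since the paper does not disclose the actual firmware or app logic.

```python
import random
import time

SIT_THRESHOLD = 200          # raw pressure value treated as "occupied" (assumed)
REMINDER_AFTER_S = 30 * 60   # remind after 30 minutes of continuous sitting (assumed)
POLL_INTERVAL_S = 5

def read_pressure() -> int:
    """Stand-in for the cushion's pressure sensor; a real device would read
    an ADC value over BLE or a serial link."""
    return random.choice([50, 350])  # simulate an empty vs. occupied seat

def notify(message: str) -> None:
    """Stand-in for the phone app's push notification."""
    print(message)

def monitor() -> None:
    """Poll the sensor, track continuous sitting time, and remind once per bout."""
    sit_start = None
    reminded = False
    while True:
        occupied = read_pressure() > SIT_THRESHOLD
        now = time.monotonic()
        if occupied:
            sit_start = sit_start or now
            if not reminded and now - sit_start >= REMINDER_AFTER_S:
                notify("You've been sitting for a while - time to stand up and move!")
                reminded = True
        else:
            sit_start, reminded = None, False   # standing resets the timer
        time.sleep(POLL_INTERVAL_S)

# monitor() would run indefinitely on the app side, one reminder per sitting bout.
```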

Keywords: anti-sedentary work behavior, new product development, sensor design, health intervention studies

Procedia PDF Downloads 139
529 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
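
A small sketch of the kind of comparison reported in conclusions (1)-(3) is given below: an XGBoost regressor is trained on synthetic stand-ins for the past median latency, the total accumulation, and an uninformative entrance rate, and its error is compared against the 15-minute-lag baseline for both a median-like target and an outlier-contaminated average-like target, with mutual information used to rank the inputs. All data here are simulated assumptions; only the feature names mirror those in the abstract.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
n = 5000

# Synthetic stand-ins for the study's inputs.
past_median = rng.gamma(shape=6, scale=2, size=n)          # latency 15 min ago (min)
accumulation = 50 * past_median + rng.normal(0, 40, n)     # total accumulation (veh)
entrance_rate = rng.normal(10, 2, n)                       # deliberately uninformative
X = np.column_stack([past_median, accumulation, entrance_rate])

# Median-like target depends on the informative inputs; the average-like target
# adds heavy-tailed outliers (incidents) to mimic its reported outlier sensitivity.
median_target = 0.8 * past_median + 0.01 * accumulation + rng.normal(0, 1, n)
average_target = median_target + (rng.random(n) < 0.03) * rng.exponential(30, n)

for name, y in [("median", median_target), ("average", average_target)]:
    model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(X[: n // 2], y[: n // 2])
    mse = mean_squared_error(y[n // 2:], model.predict(X[n // 2:]))
    # Baseline: simply reuse the 15-minute-lagged median latency as the forecast.
    baseline = mean_squared_error(y[n // 2:], past_median[n // 2:])
    mi = mutual_info_regression(X, y)
    print(f"{name}: XGBoost MSE={mse:.2f}, lag baseline MSE={baseline:.2f}, "
          f"MI(past, accum, entrance)={np.round(mi, 2)}")
```

Even on this toy data, the average-like target is harder to beat the lag baseline on, and the mutual information of the entrance-rate column stays near zero, mirroring the qualitative pattern the abstract describes.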

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 155