Search results for: small and middle enterprises
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6759


909 Catastrophic Health Expenditures: Evaluating the Effectiveness of Nepal's National Health Insurance Program Using Propensity Score Matching and Doubly Robust Methodology

Authors: Simrin Kafle, Ulrika Enemark

Abstract:

Catastrophic health expenditure (CHE) is a critical issue in low- and middle-income countries like Nepal, exacerbating financial hardship among vulnerable households. This study assesses the effectiveness of Nepal’s National Health Insurance Program (NHIP), launched in 2015, in reducing out-of-pocket (OOP) healthcare costs and mitigating CHE. Conducted in Pokhara Metropolitan City, the study used an analytical cross-sectional design, sampling 1276 households through a two-stage random sampling method. Data were collected via face-to-face interviews between May and October 2023. The analysis was conducted using SPSS version 29, incorporating propensity score matching (PSM) to minimize biases and create comparable groups of households enrolled and not enrolled in the NHIP. PSM helped reduce confounding effects by matching households with similar baseline characteristics. Additionally, a doubly robust methodology was employed, combining propensity score adjustment with regression modeling to enhance the reliability of the results. This comprehensive approach ensured a more accurate estimation of the impact of NHIP enrollment on CHE. Among the 1276 sampled households, 534 (41.8%) were enrolled in the NHIP. Of these, 84.3% renewed their insurance card, though some cited long waiting times, lack of medications, and complex procedures as barriers to renewal. Approximately 57.3% of households reported known diseases before enrollment, with 49.8% attending routine health check-ups in the past year. The primary motivation for enrollment was encouragement from insurance employees (50.2%). The data indicate that 12.5% of enrolled households experienced CHE versus 7.5% of non-enrolled households. Enrollment in the NHIP did not contribute to lower CHE (AOR: 1.98, 95% CI: 1.21-3.24). Key factors associated with increased CHE risk were the presence of non-communicable diseases (NCDs) (AOR: 3.94, 95% CI: 2.10-7.39), acute illnesses/injuries (AOR: 6.70, 95% CI: 3.97-11.30), larger household size (AOR: 3.09, 95% CI: 1.81-5.28), and households below the poverty line (AOR: 5.82, 95% CI: 3.05-11.09). Other factors such as gender, education level, caste/ethnicity, presence of elderly members, and under-five children also showed varying associations with CHE, though not all were statistically significant. The study concludes that enrollment in the NHIP does not significantly reduce the risk of CHE. One reason could be inadequate coverage: high-cost medicines, treatments, and transportation costs are not fully included in the insurance package, leading to significant out-of-pocket expenses. Long waiting times, lack of medicines, and complex procedures for utilizing NHIP benefits might also result in the underuse of covered services. Finally, gaps in enrollment and retention might leave certain households vulnerable to CHE despite the existence of the NHIP. Key factors contributing to increased CHE include NCDs, acute illnesses, larger household sizes, and poverty. To improve the program’s effectiveness, it is recommended that NHIP benefits and coverage be expanded to better protect against high healthcare costs. Additionally, simplifying the renewal process, addressing long waiting times, and enhancing the availability of services could improve member satisfaction and retention. Targeted financial protection measures should be implemented for high-risk groups, and efforts should be made to increase awareness and encourage routine health check-ups to prevent severe health issues that contribute to CHE.
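
A minimal sketch of the doubly robust step described above is given below, using entirely synthetic data (the covariates, enrollment indicator, and CHE indicator are invented). It combines a logistic propensity model with arm-specific outcome regressions into an augmented inverse-probability-weighted (AIPW) estimate of the enrollment effect; it is not the authors' SPSS workflow, and the helper fit_logistic is only a stand-in for a proper estimation routine.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1276  # sample size mirroring the study; the data here are synthetic

# Hypothetical covariates (e.g., household size, poverty, NCD presence)
x = rng.normal(size=(n, 3))
# Treatment: NHIP enrollment, generated from an assumed propensity model
true_ps = 1 / (1 + np.exp(-(0.3 * x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, true_ps)
# Outcome: indicator of catastrophic health expenditure (synthetic)
y = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.8 * x[:, 1] + 0.4 * t))))

def fit_logistic(X, z, iters=200, lr=0.1):
    """Plain gradient-ascent logistic regression (illustration only)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (z - p) / len(z)
    return w

Xb = np.column_stack([np.ones(n), x])
# 1) Propensity score model: P(enrolled | covariates)
w_ps = fit_logistic(x, t)
ps = 1 / (1 + np.exp(-Xb @ w_ps))
# 2) Outcome regressions fitted separately in each arm
w1 = fit_logistic(x[t == 1], y[t == 1])
w0 = fit_logistic(x[t == 0], y[t == 0])
mu1 = 1 / (1 + np.exp(-Xb @ w1))
mu0 = 1 / (1 + np.exp(-Xb @ w0))
# 3) Doubly robust (AIPW) estimate of the enrollment effect on CHE risk
aipw = np.mean(mu1 - mu0
               + t * (y - mu1) / ps
               - (1 - t) * (y - mu0) / (1 - ps))
print(f"Doubly robust risk difference (synthetic data): {aipw:+.3f}")
```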

Keywords: catastrophic health expenditure, effectiveness, national health insurance program, Nepal

Procedia PDF Downloads 25
908 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves

Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis

Abstract:

Rheological characterization of complex liquids like polymer solutions is of great scientific interest to researchers in many fields, such as biology, the food industry, and chemistry. In order to establish master curves (elastic moduli vs. frequency) which can give information about microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of the sample, the temperature is varied over a very large range, leading to equivalent frequency shifts under the Time-Temperature Superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is why the development of broadband spectroscopic methods operating around room temperature has become a major concern. In the literature, many solutions have been proposed but, to our knowledge, there is no experimental bench giving the whole rheological characterization from a few Hz to many MHz. Consequently, our goal is to investigate, in a nondestructive way and over a very broad frequency band (a few Hz to hundreds of MHz), rheological properties using longitudinal ultrasonic waves (L waves), a unique experimental bench, and a specific container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear, and bulk) and the complex elastic moduli (M*, G*, and K*), respectively the longitudinal, shear, and bulk moduli. We have decided to use only L waves conditioned in two ways: bulk L waves in the liquid or guided L waves in the test tube walls. In this paper, we will present first results for very low frequencies using the ultrasonic tracking of a ball falling in the test tube. This will lead to the estimation of shear viscosity from a few mPa.s to a few Pa.s. Corrections due to the small dimensions of the tube will be applied and discussed with regard to the size of the falling ball. Then the use of bulk L wave propagation in the liquid and the development of specific signal processing to assess longitudinal velocity and attenuation will lead to the evaluation of the longitudinal viscosity in the MHz frequency range. Finally, the first results concerning the generation, propagation, and processing of guided compressional waves in the test tube walls will be discussed. All these approaches and results will be compared to standard methods available and already validated in our lab.
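
The low-frequency shear viscosity estimate from the falling-ball measurement can be illustrated with Stokes' law plus a wall correction for the finite tube diameter. The sketch below uses one common Faxen-type correction and invented ball, tube, and fluid parameters; the actual correction applied in the bench may differ.

```python
import numpy as np

def shear_viscosity_falling_ball(v_measured, d_ball, D_tube,
                                 rho_ball, rho_liquid, g=9.81):
    """Estimate shear viscosity (Pa.s) from the terminal velocity of a
    falling ball, with a Faxen-type correction for the tube walls.
    Lengths in metres, densities in kg/m^3, velocity in m/s."""
    r = d_ball / 2
    lam = d_ball / D_tube          # ball-to-tube diameter ratio
    # Wall correction factor (valid for small lam); ~1 for a very wide tube
    f = 1 - 2.104 * lam + 2.09 * lam**3 - 0.95 * lam**5
    eta_stokes = 2 * r**2 * g * (rho_ball - rho_liquid) / (9 * v_measured)
    return eta_stokes * f

# Example with assumed values: a 2 mm steel ball falling at 5 cm/s
# in a 15 mm test tube filled with a polymer solution
eta = shear_viscosity_falling_ball(v_measured=0.05, d_ball=2e-3, D_tube=15e-3,
                                   rho_ball=7800.0, rho_liquid=1000.0)
print(f"Estimated shear viscosity: {eta:.3f} Pa.s")
```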

Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities

Procedia PDF Downloads 265
907 Relationship between Readability of Paper-Based Braille and Character Spacing

Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada

Abstract:

The number of people with acquired visual impairments has increased in recent years. In specialized courses at schools for the blind and in Braille lessons offered by social welfare organizations, many people with acquired visual impairments cannot learn to read Braille adequately. One reason is that the standard Braille patterns, intended for people with visual impairments who already have mature Braille reading skills, are difficult for Braille reading beginners to read. In addition, Braille book manufacturing companies have scant knowledge of which Braille patterns are easy for beginners to read. It is therefore necessary to investigate which Braille patterns would be easy for beginners to read. In order to obtain such knowledge, this study aimed to elucidate the relationship between the readability of paper-based Braille and its patterns. The study focused on character spacing, which readily affects Braille reading ability, to determine a character spacing ratio (the ratio of character spacing to dot spacing) suitable for beginners. Specifically, considering beginners with acquired visual impairments who are unfamiliar with reading Braille, we quantitatively evaluated the effect of the character spacing ratio on Braille readability through an experiment using sighted subjects with no experience of reading Braille. In this experiment, ten blindfolded sighted adults were asked to read a test piece consisting of three Braille characters. The Braille used in the test pieces was composed of five dots. The subjects were asked to touch the Braille by sliding their forefinger over the test piece immediately after the examiner gave the signal to start, and to release their forefinger from the test piece once they had perceived the Braille characters. Seven character spacing ratio conditions (1.2, 1.4, 1.5, 1.6, 1.8, 2.0, and 2.2) and four dot spacing conditions (2.0, 2.5, 3.0, and 3.5 mm) were tested, with ten trials conducted for each condition. The test pieces were created using NISE Graphic, which can print Braille with arbitrary values of character spacing and dot spacing at high accuracy. Correct rate, reading time, and subjective readability were adopted as evaluation indices to investigate how the character spacing ratio affects Braille readability. The results showed that Braille reading beginners could read Braille accurately and quickly when the character spacing ratio was more than 1.8 and the dot spacing was more than 3.0 mm. Furthermore, it was difficult for beginners to read Braille accurately and quickly when both character spacing and dot spacing were small. This study thus reveals a character spacing ratio that makes reading easy for Braille beginners.

Keywords: Braille, character spacing, people with visual impairments, readability

Procedia PDF Downloads 285
906 Advancing Entrepreneurial Knowledge Through Re-Engineering Social Studies Education

Authors: Chukwuka Justus Iwegbu, Monye Christopher Prayer

Abstract:

Propeller aircraft engines, and more generally engines with a large rotating part (turboprops, high bypass ratio turbojets, etc.), are widely used in industry and are subject to numerous developments aimed at reducing their fuel consumption. In this context, unconventional architectures such as open rotors or distributed propulsion appear, and it is necessary to consider the influence of these systems on the aircraft's stability in flight. Indeed, the tendency to lengthen the blades and the wings on which these propulsion devices are fixed increases their flexibility and accentuates the risk of whirl flutter. This phenomenon of aeroelastic instability is due to the precession movement of the axis of rotation of the propeller, which changes the angle of attack of the flow on the blades and creates unsteady aerodynamic forces and moments that can amplify the motion and make it unstable. The whirl flutter instability can ultimately lead to the destruction of the engine. We note the existence of a critical speed of the incident flow: if the flow velocity is lower than this value, the motion is damped and the system is stable, whereas beyond this value the flow provides energy to the system (negative damping) and the motion becomes unstable. A simple model of whirl flutter is based on the work of Houbolt & Reed, who proposed an analytical expression of the aerodynamic load on a rigid-blade propeller whose axis orientation suffers small perturbations. Their work considered a propeller subjected to pitch and yaw movements, a flow undisturbed by the blades, and a propeller not generating any thrust in the absence of precession. The unsteady aerodynamic forces were then obtained using thin airfoil theory and strip theory. In the present study, the unsteady aerodynamic loads are expressed for a general movement of the propeller (not only pitch and yaw). The acceleration and rotation of the flow by the propeller are modeled using a Blade Element Momentum Theory (BEMT) approach, which also makes it possible to take into account the thrust generated by the blades. It appears that the thrust has a stabilizing effect. The aerodynamic model is further developed using Theodorsen theory. A reduced order model of the aerodynamic load is finally constructed in order to perform linear stability analysis.
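
The linear stability analysis mentioned above can be illustrated with a deliberately simplified two-degree-of-freedom (pitch/yaw) scan over flow speed. All coefficients below are hypothetical and chosen only to reproduce the qualitative behaviour described in the abstract (aerodynamic terms eroding damping until a critical speed is reached); this is not the BEMT/Theodorsen reduced-order model of the study.

```python
import numpy as np

# Minimal 2-DOF (pitch, yaw) whirl-flutter stability scan.
I_p = 1.0        # pivot inertia (assumed)
k = 400.0        # structural stiffness, identical in pitch and yaw (assumed)
c_struct = 2.0   # structural damping (assumed)
g_gyro = 5.0     # gyroscopic coupling, proportional to rotor angular momentum
a_aero = 0.05    # aerodynamic damping removed per unit flow speed (assumed)

def max_growth_rate(V):
    """Largest real part of the eigenvalues of the linearised system at speed V."""
    M = I_p * np.eye(2)
    C = (c_struct - a_aero * V) * np.eye(2) + g_gyro * np.array([[0, -1], [1, 0]])
    K = k * np.eye(2)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    return np.max(np.linalg.eigvals(A).real)

speeds = np.linspace(0.0, 100.0, 401)
growth = np.array([max_growth_rate(V) for V in speeds])
unstable = speeds[growth > 0]
if unstable.size:
    print(f"Critical flow speed (toy model): ~{unstable[0]:.1f} m/s")
else:
    print("Stable over the whole sweep")
```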

Keywords: advancing, entrepreneurial, knowledge, industrialization

Procedia PDF Downloads 96
905 Evaluation of Main Factors Affecting the Choice of a Freight Forwarder: A Sri Lankan Exporter’s Perspective

Authors: Ishani Maheshika

Abstract:

The intermediary role performed by freight forwarders in exportation has become significant in fulfilling businesses’ supply chain needs in this dynamic world. Since the success of an exporter’s business is at present highly reliant on supply chain optimization, cost efficiency, profitability, consistent service, and responsiveness, the decision of selecting the most beneficial freight forwarder has become crucial for exporters. Although similar studies exist for other countries, there is no prior research covering the Sri Lankan setting. Moreover, results vary with time, the nature of the industry, and business environment factors. Therefore, a study from the perspective of Sri Lankan exporters was identified as a necessity. In order to identify and prioritize the key factors that affect an exporter’s decision in selecting freight forwarders in the Sri Lankan context, the Sri Lankan export industry was stratified into 22 sectors based on commodity, using a stratified sampling technique. One exporter from each sector was then selected using judgmental sampling, giving a sample of 22. Factors identified through a pilot survey were organized under six main criteria. A questionnaire was developed as pairwise comparisons on a 9-point semantic differential scale, and comparisons were made within main criteria and subcriteria. After pre-testing, interviews and an e-mail questionnaire survey were conducted. Data were analyzed using the Analytic Hierarchy Process to determine the priority vectors of the criteria. Customer service was found to be the most important main criterion for Sri Lankan exporters, followed by reliability and operational efficiency. The criterion of least importance is company background and reputation. Whereas small-sized exporters pay more attention to rate, reliability is the major concern among medium and large-scale exporters. Irrespective of the seniority of the exporter, reliability is given prominence. Responsiveness is the most important subcriterion among Sri Lankan exporters. Consistency of judgments with respect to the main criteria was verified through the consistency ratio, which was less than 10%. To be more competitive, freight forwarders should develop customized marketing strategies based on each target group’s requirements and expectations when offering services, in order to retain existing exporters and attract new ones.
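
The AHP step described above, deriving priority vectors and a consistency ratio from pairwise comparisons, can be sketched as follows. The comparison matrix is invented, and only some of the criterion names appear in the abstract; the unnamed one is a placeholder. This is not the survey data.

```python
import numpy as np

# Criteria: the first five names appear in the abstract; "network coverage"
# is a placeholder for an unnamed sixth criterion.
criteria = ["customer service", "reliability", "operational efficiency",
            "rate", "network coverage", "company background"]

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale
A = np.array([
    [1,   2,   3,   4,   5,   6],
    [1/2, 1,   2,   3,   4,   5],
    [1/3, 1/2, 1,   2,   3,   4],
    [1/4, 1/3, 1/2, 1,   2,   3],
    [1/5, 1/4, 1/3, 1/2, 1,   2],
    [1/6, 1/5, 1/4, 1/3, 1/2, 1],
], dtype=float)

# Priority vector = principal eigenvector, normalised to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (Saaty): CI = (lambda_max - n) / (n - 1), CR = CI / RI
n = A.shape[0]
lambda_max = eigvals.real[k]
RI = 1.24  # random index for n = 6
CR = (lambda_max - n) / (n - 1) / RI

for name, weight in sorted(zip(criteria, w), key=lambda p: -p[1]):
    print(f"{name:25s} {weight:.3f}")
print(f"Consistency ratio: {CR:.3f} (acceptable if below 0.10)")
```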

Keywords: analytic hierarchy process, freight forwarders, main criteria, Sri Lankan exporters, subcriteria

Procedia PDF Downloads 406
904 A Network Economic Analysis of Friendship, Cultural Activity, and Homophily

Authors: Siming Xie

Abstract:

In social networks, the term homophily refers to the tendency of agents with similar characteristics to link with one another, and it is robustly observed across many contexts and dimensions. The starting point of my research is the observation that the “type” of an agent is not a single exogenous variable. Agents, despite their differences in race, religion, and other hard-to-alter characteristics, may share interests and engage in activities that cut across those predetermined lines. This research aims to capture the interactions of homophily effects in a model where agents have two-dimensional characteristics (i.e., race and a personal hobby such as basketball, which one either likes or dislikes) and where there are biases both in meeting opportunities and in favor of same-type friendships. A novel feature of my model is a matching process with biased meeting probabilities on different dimensions, which could help in understanding the structuring process in multidimensional networks without missing layer interdependencies. The main contribution of this study is providing a welfare-based matching process for agents with multi-dimensional characteristics. In particular, this research shows that biases in meeting opportunities on one dimension lead to the emergence of homophily on the other dimension. The objective of this research is to determine the pattern of homophily in network formation, which will shed light on our understanding of segregation and its remedies. By constructing a two-dimensional matching process, this study explores a method to describe agents’ homophilous behavior in a multidimensional social network and constructs a game in which minorities and majorities play different strategies in a society. It also shows that the optimal strategy is determined by the relative group size, and that society would suffer more from social segregation if the two racial groups are of similar size. The research also has policy implications: cultivating shared characteristics among agents helps diminish social segregation, but only if the minority group is small enough. This research includes both theoretical models and empirical analysis. After setting up the friendship formation model, the author first uses MATLAB to perform iterative calculations, then derives the corresponding mathematical proofs, and finally shows that the model is consistent with empirical evidence from high school friendships. The anonymized data come from The National Longitudinal Study of Adolescent Health (Add Health).
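
The mechanism claimed above, that biased meeting opportunities on one dimension can generate homophily on another, can be sketched with a toy Monte Carlo simulation. The population size, minority share, trait correlation, and bias level are all invented, and this is not the author's MATLAB model; it only illustrates the qualitative effect when the two traits are correlated in the population.

```python
import numpy as np

rng = np.random.default_rng(1)

n_agents = 2000
race = rng.binomial(1, 0.3, n_agents)                    # minority share 30% (assumed)
hobby = rng.binomial(1, np.where(race == 1, 0.7, 0.4))   # hobby correlates with race

def simulate(meeting_bias, n_meetings=50_000):
    """meeting_bias = probability that a drawn pair is forced to be same-race."""
    same_hobby = 0
    links = 0
    for _ in range(n_meetings):
        i = rng.integers(n_agents)
        if rng.random() < meeting_bias:
            pool = np.flatnonzero(race == race[i])   # biased meeting on race
        else:
            pool = np.arange(n_agents)               # uniform meeting
        j = rng.choice(pool)
        if j == i:
            continue
        links += 1
        same_hobby += int(hobby[i] == hobby[j])
    return same_hobby / links

print(f"Share of same-hobby ties, unbiased meetings:    {simulate(0.0):.3f}")
print(f"Share of same-hobby ties, race-biased meetings: {simulate(0.8):.3f}")
```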

Keywords: homophily, multidimension, social networks, friendships

Procedia PDF Downloads 170
903 Children Asthma; The Role of Molecular Pathways and Novel Saliva Biomarkers Assay

Authors: Seyedahmad Hosseini, Mohammadjavad Sotoudeheian

Abstract:

Introduction: Allergic asthma is a heterogeneous immuno-inflammatory disease based on Th2-mediated inflammation. Histopathologic abnormalities of the airways characteristic of asthma include epithelial damage and subepithelial collagen deposition. Objectives: Human bronchial epithelial cell expression of TNF‑α, IL‑6, ICAM‑1, VCAM‑1, and nuclear factor (NF)‑κB signaling pathways is up-regulated during inflammatory cascades. Moreover, immunofluorescence assays confirmed the nuclear translocation of NF‑κB p65 during inflammatory responses. LDH leakage assays suggested LPS-induced cell injury, and the associated mechanisms are co-incident events. LPS-induced phosphorylation of ERK and JNK causes inflammation in epithelial cells, which can be countered through inhibition of ERK and JNK activation and of the NF-κB signaling pathway. Furthermore, the inhibition of NF-κB mRNA expression and of the nuclear translocation of NF-κB leads to anti-inflammatory events. Likewise, activation of SUMF2 inhibits IL-13 and reduces Th2 cytokine, NF-κB, and IgE levels, ameliorating asthma. On the other hand, TNF-α-induced mucus production is reduced when NF-κB activation is diminished through inhibition of the activation status of Rac1 and of IκBα phosphorylation. In addition, the bradykinin B2 receptor (B2R), which mediates airway remodeling, is regulated through NF-κB. Bronchial B2R expression is constitutively elevated in allergic asthma. Certain NF-κB-dependent chemokines also function to recruit eosinophils into the airway. Besides, bromodomain-containing protein 4 (BRD4) plays a significant role in mediating the innate immune response in human small airway epithelial cells, as does transglutaminase 2 (TG2), which is detectable in saliva. The guanine nucleotide-binding regulatory protein α-subunit Gα16 drives a κB-driven luciferase reporter; this response was accompanied by phosphorylation of IκBα. Furthermore, expression of Gα16 in saliva markedly enhanced TNF-α-induced κB reporter activity. Methods: The method applied to assess NF-κB activation is the electrophoretic mobility shift assay (EMSA). Detection of the B2R-BRD4-TG2 complex in saliva by immunoassay, combined with EMSA of NF-κB activation, may provide a novel biomarker for asthma diagnosis and follow-up. Conclusion: This concept introduces the NF-κB signaling pathway as a source of potential asthma biomarkers and promising targets for the development of new therapeutic strategies against asthma.

Keywords: NF-κB, asthma, saliva, T-helper

Procedia PDF Downloads 97
902 A Radiofrequency Based Navigation Method for Cooperative Robotic Communities in Surface Exploration Missions

Authors: Francisco J. García-de-Quirós, Gianmarco Radice

Abstract:

When considering small robots working as a cooperative community for Moon surface exploration, navigation and inter-node communication become critical issues for mission success. For this approach to succeed, however, it is necessary to deploy the required infrastructure for the robotic community to achieve efficient self-localization as well as relative positioning and communication between nodes. In this paper, an exploration mission concept in which two cooperative robotic systems co-exist is presented. This paradigm hinges on a community of reference agents that provide support in terms of communication and navigation to a second agent community tasked with exploration goals. The work focuses on the role of the agent community in charge of overall support and, more specifically, on the positioning and navigation methods implemented in the RF microwave bands, which are combined with the communication services. An analysis of the different methods for range and position calculation is presented, as well as the main factors limiting precision and resolution, such as phase and frequency noise in RF reference carriers and drift mechanisms such as thermal drift and random walk. The effects of carrier frequency instability due to phase noise are categorized into different contributing bands, and the impact of these spectrum regions is considered both in terms of the absolute position and the relative speed. A mission scenario is finally proposed, and key metrics in terms of mass and power consumption for the required payload hardware are assessed. For this purpose, an application case involving an RF communication network in the UHF band is described, coexisting with the communications network that the exploring agents use to communicate both within their own community and with the mission support agents. The proposed approach represents a substantial improvement in planetary navigation since it provides self-localization capabilities for robotic agents characterized by very low mass, volume, and power budgets, thus enabling precise navigation for agents of reduced dimensions. Furthermore, a common and shared localization radiofrequency infrastructure enables new interaction mechanisms such as the spatial arrangement of agents over the area of interest for distributed sensing.
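
One of the range-and-position calculation methods alluded to above can be sketched as a least-squares position fix from noisy range measurements to a set of reference agents. The geometry, noise level, and solver below are assumptions for illustration, not the paper's actual ranging scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference ("support") agents at known positions and one exploring agent
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([37.0, 62.0])
sigma = 0.5  # range noise (m), standing in for phase/frequency noise effects
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma, len(anchors))

# Gauss-Newton iterations on the range residuals
est = np.array([50.0, 50.0])             # initial guess: centre of the area
for _ in range(10):
    d = np.linalg.norm(anchors - est, axis=1)
    J = (est - anchors) / d[:, None]      # Jacobian of the predicted ranges
    residual = ranges - d
    est = est + np.linalg.lstsq(J, residual, rcond=None)[0]

print(f"True position:      {true_pos}")
print(f"Estimated position: {est.round(2)}")
```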

Keywords: cooperative robotics, localization, robot navigation, surface exploration

Procedia PDF Downloads 294
901 A Small-Scale Survey on Risk Factors of Musculoskeletal Disorders in Workers of Logistics Companies in Cyprus and on the Early Adoption of Industrial Exoskeletons as Mitigation Measure

Authors: Kyriacos Clerides, Panagiotis Herodotou, Constantina Polycarpou, Evagoras Xydas

Abstract:

Background: Musculoskeletal disorders (MSDs) in the workplace are a very common problem in Europe and are caused by multiple risk factors. In recent years, wearable devices and exoskeletons for the workplace have been developed to address the various risk factors associated with strenuous tasks. The logistics sector is a huge sector that includes warehousing, storage, and transportation. However, the tasks associated with logistics are not well studied in terms of MSD risk. This study looked into the MSDs affecting workers of logistics companies. It compares the prevalence of MSDs among workers and evaluates multiple risk factors that contribute to the development of MSDs. Moreover, this study seeks to obtain user feedback on the adoption of exoskeletons in such a work environment. Materials and Methods: The study was conducted among workers in logistics companies in Nicosia, Cyprus, from July to September 2022. A set of standardized questionnaires was used for collecting different types of data. Results: A high proportion of logistics professionals reported MSDs in one or more body regions, the lower back being the most commonly affected area. Working in the same position for long periods, working in awkward postures, and handling excessive loads were found to be the most commonly reported job risk factors contributing to the development of MSDs in this study. A significant number of participants considered the back region to be the area that would benefit most from a wearable exoskeleton device. Half of the participants would like at least a 50% reduction in their daily effort. The most important characteristics for the adoption of exoskeleton devices were found to be how comfortable the device is and its weight. Conclusion: The lower back was the most affected region, and posture the most prominent risk factor, among the logistics professionals assessed in this study. A larger-scale study using quantitative analytical tools may give a more accurate estimate of MSDs, which would pave the way for more precise recommendations to eliminate the risk factors and thereby prevent MSDs. A follow-up study using exoskeletons in the workplace should be done to assess whether they assist in MSD prevention.

Keywords: musculoskeletal disorders, occupational health, safety, occupational risk, logistic companies, workers, Cyprus, industrial exoskeletons, wearable devices

Procedia PDF Downloads 107
900 Cover Layer Evaluation in Soil Organic Matter of Mixing and Compressed Unsaturated

Authors: Nayara Torres B. Acioli, José Fernando T. Jucá

Abstract:

The uncontrolled emission of gases from municipal solid waste landfills located near urban areas is a social and environmental problem common in Brazilian cities. Several environmental impacts at the local and global scale may be generated by the contamination of atmospheric air with the biogas resulting from the decomposition of municipal solid waste. In Brazil, according to the 2011 IBGE census, small cities with populations below 50,000 inhabitants account for about 90% of all cities, and in most of them the landfill cover layer is composed of pure clayey soil. Cover layers built with pure soil may retain up to 60% of the methane, while the remaining 40% may be dispersed into the atmosphere. In view of these figures, the oxidative cover layer deserves study, with a view to reducing the percentage released into the atmosphere by converting methane into carbon dioxide, which is roughly 20 times less potent as a greenhouse gas than methane. This paper presents the results of studies on the characteristics of the soil used for the oxidative cover layer of the experimental Solid Urban Residues (SUR) landfill built in Muribeca-PE, Brazil, with the support of the Group of Solid Residues (GSR) at the Federal University of Pernambuco. The laboratory program included vacuum experiments (determining the characteristic curve), grain-size analysis, and permeability tests, which showed that, in soil with saturation above 85%, small increments of water cause dramatic drops in air permeability, following the existing Brazilian standard for this procedure. Suction, as in the other tests, was studied by dividing the 60 cm oxidative cover layer into an upper half (0.1 m to 0.3 m) and a lower half (0.4 m to 0.6 m). The results also show the consequences of the leaching of fine materials during the five years since the completion of the landfill, which increased its permeability. Concerning moisture, it is mostly retained in the upper part, which comprises the mixture, with a difference on the order of 8 percent between the upper and lower halves, and with the lowest suction near the surface. These results reveal the efficiency of the oxidative cover layer in retaining rainwater; it also has a lower cost compared to other types of cover layer, making it a more widely available alternative for the appropriate disposal of waste.

Keywords: oxidative coverage layer, permeability, suction, saturation

Procedia PDF Downloads 289
899 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer’s perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often involved in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of produce, leading to substantial income loss; and with increasing input costs of farming, the high variability in harvest price severely affects farmers’ profit margins, thereby affecting their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, asking about the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household, and institutional characteristics. A Heckman two-stage model would be applied to estimate the probability of a farmer falling into distress sale as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. The findings of the study would recommend suitable interventions and the promotion of strategies that help farmers better manage price uncertainties, avoid distress sale, and increase profit margins, with direct implications for poverty.
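
A Heckman two-stage estimation of the kind proposed above can be sketched on synthetic data. The variable names (storage access as the exclusion restriction, farm size, price discount) and all coefficients are invented for illustration; the study's actual specification may differ.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)

n = 200
storage_access = rng.binomial(1, 0.4, n)     # assumed exclusion restriction
farm_size = rng.normal(2.0, 0.8, n)
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n)  # correlated errors

# Selection: distress sale more likely without storage access, on small farms
z_star = 0.8 - 1.2 * storage_access - 0.3 * farm_size + u[:, 0]
distress = (z_star > 0).astype(float)
# Outcome (observed only for distress sellers): price discount vs. cost
discount = 30 - 4.0 * farm_size + 5.0 * u[:, 1]

# ---- Stage 1: probit of selection on (storage_access, farm_size) ----
Z = np.column_stack([np.ones(n), storage_access, farm_size])
def neg_loglik(g):
    p = stats.norm.cdf(Z @ g).clip(1e-10, 1 - 1e-10)
    return -np.sum(distress * np.log(p) + (1 - distress) * np.log(1 - p))
g_hat = optimize.minimize(neg_loglik, np.zeros(3)).x
imr = stats.norm.pdf(Z @ g_hat) / stats.norm.cdf(Z @ g_hat)  # inverse Mills ratio

# ---- Stage 2: OLS on the selected sample, adding the IMR as a regressor ----
sel = distress == 1
X = np.column_stack([np.ones(sel.sum()), farm_size[sel], imr[sel]])
beta = np.linalg.lstsq(X, discount[sel], rcond=None)[0]
print(f"Stage-2 coefficients [const, farm_size, IMR]: {beta.round(2)}")
```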

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 243
898 Use of Pig as an Animal Model for Assessing the Differential MicroRNA Profiling in Kidney after Aristolochic Acid Intoxication

Authors: Daniela E. Marin, Cornelia Braicu, Gina C. Pistol, Roxana Cojocneanu-Petric, Ioana Berindan Neagoe, Mihail A. Gras, Ionelia Taranu

Abstract:

Aristolochic acid (AA) is a carcinogenic, mutagenic, and nephrotoxic compound commonly found in the Aristolochiaceae family of plants. AA is frequently associated with urothelial carcinoma of the upper urinary tract in humans and animals and is considered responsible for Balkan Endemic Nephropathy. The pig provides a good animal model because the porcine urological system is very similar to that of humans, in terms of both physiology and anatomy. MicroRNAs (miRNAs) are small non-coding RNAs that affect a wide range of biological processes by regulating gene expression at the post-transcriptional level. The objective of this study was to analyze the miRNA profile in the kidneys of AA-intoxicated swine. For this purpose, ten 4-week-old TOPIGS-40 crossbred weaned piglets, males and females, with an initial average body weight of 9.83 ± 0.5 kg, were studied for 28 days. They were given ad libitum access to water and feed and randomly allotted to one of the following groups: a control group (C) or an aristolochic acid group (AA). They were fed a maize-soybean-meal-based diet contaminated or not with 0.25 mg AA/kg. Microarray and bioinformatics approaches were applied to profile the miRNAs in the kidneys of control and AA-intoxicated pigs. After normalization, our results showed that a total of 5 known miRNAs and 4 novel miRNAs had different profiles in the kidneys of intoxicated animals versus control animals. Expression of miR-32-5p, miR-497-5p, miR-423-3p, miR-218-5p, and miR-128-3p was up-regulated by 0.25 mg AA/kg feed, while the expression of miR-9793-5p, miR-9835-3p, miR-9840-3p, and miR-4334-5p was down-regulated. The miRNA profile in the kidneys of intoxicated animals was associated with modified expression of target genes such as RICTOR, LASP1, SFRP2, DKK2, BMI1, RAF1, IGF1R, MAP2K1, WEE1, HDGF, BCL2, and EIF4E, which are involved in the cell division cycle, apoptosis, cell differentiation and migration, cell signaling, and cancer. In conclusion, this study provides new data concerning the miRNA profile in the kidney after aristolochic acid intoxication, with important implications for human and animal health.

Keywords: aristolochic acid, kidney, microRNA, swine

Procedia PDF Downloads 283
897 Seismic Assessment of a Pre-Cast Recycled Concrete Block Arch System

Authors: Amaia Martinez Martinez, Martin Turek, Carlos Ventura, Jay Drew

Abstract:

This study aims to assess the seismic performance of arch and dome structural systems made from easy-to-assemble precast blocks of recycled concrete. These systems have been developed by Lock Block Ltd. of Vancouver, Canada, as an extension of their currently used retaining wall system. The characterization of the seismic behavior of these structures is performed by a combination of experimental static and dynamic testing and analytical modeling. For the experimental testing, several tilt tests as well as a program of shake table testing were undertaken using small-scale arch models. A suite of earthquakes with different characteristics from important past events was chosen and scaled appropriately for the dynamic testing. Shake table tests applying the ground motions in just one direction (the weak direction of the arch) and in all three directions were conducted and compared. The models were tested with increasing intensity until collapse occurred, which determined the failure level for each earthquake. Since the failure intensity varied with the type of earthquake, a sensitivity analysis of the different parameters was performed, with impulses found to be the dominant factor. In all cases, the arches exhibited the typical four-hinge failure mechanism, which was also shown in the analytical model. Experimental testing was also performed with the arches reinforced using a steel band placed over the structure and anchored at both ends of the arch. The models were tested with different pretension levels. The bands were instrumented with strain gauges to measure the force produced by the shaking. These forces were used to develop engineering guidelines for the design of the reinforcement needed for these systems. In addition, an analytical discrete element model was created using the 3DEC software. The blocks were modeled as rigid, with all properties assigned to the joints, including the contribution of the interlocking shear key between blocks. The model is calibrated to the experimental static tests and validated against the results obtained from the dynamic tests. The model can then be used to scale the results up to the full-scale structure and to extend them to different configurations and boundary conditions.

Keywords: arch, discrete element model, seismic assessment, shake-table testing

Procedia PDF Downloads 206
896 Aquatic Therapy Improving Balance Function of Individuals with Stroke: A Systematic Review with Meta-Analysis

Authors: Wei-Po Wu, Wen-Yu Liu, Wei-Ting Lin, Hen-Yu Lien

Abstract:

Introduction: Improving balance function for individuals after stroke is a crucial target in physiotherapy. Aquatic therapy, which challenges an individual’s postural control in an unstable fluid environment, may be beneficial in enhancing balance function. The purpose of this systematic review with meta-analysis was to examine the effects of aquatic therapy on balance function in individuals with stroke compared with conventional physiotherapy. Method: Available studies were searched in three electronic databases: PubMed, Scopus, and Web of Science. During the literature search, the publication date of the studies was not limited. Included studies had to be randomized controlled trials (RCTs) containing at least one outcome measure of balance function. The PEDro scale was adopted to assess the quality of the included studies, while the 'Oxford Centre for Evidence-Based Medicine 2011 Levels of Evidence' was used to evaluate the level of evidence. After data extraction, studies with the same outcome measures were pooled for meta-analysis. Result: Ten studies with 282 participants were included in the analyses. The research quality of the studies ranged from fair to good (4 to 8 points). The levels of evidence of the included studies were graded as levels 2 and 3. Finally, Berg Balance Scale (BBS) scores, eyes-closed force plate center-of-pressure velocity (anterior-posterior and medial-lateral axes), and Timed Up and Go test scores were pooled and analyzed separately. The pooled results showed improvements in balance function (BBS mean difference (MD): 1.39 points; 95% confidence interval (CI): 0.05-2.29; p=0.002) (eyes-closed center-of-pressure velocity, anterior-posterior axis, MD: 1.39 mm/s; 95% CI: 0.93-1.86; p<0.001) (eyes-closed center-of-pressure velocity, medial-lateral axis, MD: 1.48 mm/s; 95% CI: 0.15-2.82; p=0.03) and mobility (MD: 0.9 seconds; 95% CI: 0.07-1.73; p=0.03) in individuals with stroke after aquatic therapy compared with conventional therapy. Although there were significant differences between the two treatment groups, the differences in improvement were relatively small. Conclusion: Aquatic therapy improved general balance function and mobility in individuals with stroke more than conventional physiotherapy did.
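
The pooling step behind the mean differences reported above can be sketched with a simple inverse-variance (fixed-effect) calculation. The per-study numbers below are invented; the review's actual studies, weights, and choice of fixed- versus random-effects model are not reproduced here.

```python
import numpy as np

# Hypothetical study-level results for one outcome (e.g., BBS points)
md = np.array([1.8, 1.1, 0.9, 2.0])   # per-study mean differences
se = np.array([0.9, 0.6, 0.8, 1.1])   # per-study standard errors

w = 1 / se**2                          # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
z = pooled / pooled_se
print(f"Pooled MD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), z = {z:.2f}")
```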

Keywords: aquatic therapy, balance function, meta-analysis, stroke, systematic review

Procedia PDF Downloads 201
895 Characterization, Replication and Testing of Designed Micro-Textures, Inspired by the Brill Fish, Scophthalmus rhombus, for the Development of Bioinspired Antifouling Materials

Authors: Chloe Richards, Adrian Delgado Ollero, Yan Delaure, Fiona Regan

Abstract:

Growing concern about the natural environment has accelerated the search for non-toxic, but at the same time economically reasonable, antifouling materials. Bioinspired surfaces, due to their nano- and micro-topographical antifouling capabilities, provide a promising approach to the design of novel antifouling surfaces. Biological organisms are known to have highly evolved and complex topographies demonstrating antifouling potential, e.g., shark skin. Previous studies have examined the antifouling ability of topographic patterns, textures, and roughness scales found on natural organisms. One of the mechanisms used to explain the adhesion of cells to a substrate is called attachment point theory. Here, the fouling organism experiences increased attachment where there are multiple attachment points and reduced attachment where the number of attachment points is decreased. In this study, an attempt to characterize the microtopography of the common brill fish, Scophthalmus rhombus, was undertaken. Scophthalmus rhombus is a small flatfish of the family Scophthalmidae, inhabiting regions from Norway to the Mediterranean and the Black Sea. They reside in shallow sandy and muddy coastal areas at depths of around 70 – 80 meters. Six engineered surfaces (inspired by the Brill fish scale) produced by a 2-photon polymerization (2PP) process were evaluated for their potential as an antifouling solution for incorporation onto tidal energy blades. The micro-textures were analyzed for their AF potential under both static and dynamic laboratory conditions using two laboratory-grown diatom species, Amphora coffeaeformis and Nitzschia ovalis. The incorporation of a surface topography was observed to disrupt the growth of A. coffeaeformis and N. ovalis cells on the surface in comparison to control surfaces. This work has demonstrated the importance of understanding cell-surface interaction, in particular topography, for the design of novel antifouling technology. The study concluded that biofouling can be controlled by physical modification, and it has contributed significant knowledge to the use of a successful novel bioinspired AF technology, based on Brill, for the first time.

Keywords: attachment point theory, biofouling, Scophthalmus rhombus, topography

Procedia PDF Downloads 107
894 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industry, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by combustion processes at high temperatures (e.g., car engines or thermal power stations). The same applies to industrial plants. What has to be investigated, and is the topic of this paper, is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, so that it can stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are connected. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time so as to have input for both weekdays and weekend days; in this way it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to be applied in the study area, because it may make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
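
The comparison step described above, rescaling both series to a common percentage scale and then checking for correlation, can be sketched as follows on synthetic readings; the sampling period and values are invented and do not come from the Mestre deployment.

```python
import numpy as np

rng = np.random.default_rng(4)

# One week of hourly readings from one paired sensor location (synthetic)
hours = np.arange(24 * 7)
traffic = 50 + 30 * np.sin(2 * np.pi * (hours % 24 - 8) / 24).clip(0)
noise_db = traffic * 0.4 + 45 + rng.normal(0, 2, hours.size)     # dB(A)
no2_ugm3 = traffic * 0.6 + 10 + rng.normal(0, 6, hours.size)     # ug/m3

def to_percent(x):
    """Rescale a series to 0-100 % of its observed range."""
    return 100 * (x - x.min()) / (x.max() - x.min())

noise_pct, no2_pct = to_percent(noise_db), to_percent(no2_ugm3)
r = np.corrcoef(noise_pct, no2_pct)[0, 1]
print(f"Pearson correlation between noise and NO2 (synthetic week): r = {r:.2f}")
```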

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 212
893 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth extremes of the system responses are obtained. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for the numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and thus reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is applicable to estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
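
The calculation scheme described above, two focal intervals per input variable, each with a basic probability assignment, propagated by combining interval bounds, can be sketched as follows. The "model" is a stand-in algebraic function with invented inputs and threshold, not one of the study's finite element runs.

```python
import numpy as np
from itertools import product

def model(E, c):
    """Placeholder for the FE run: top-point horizontal displacement (mm).
    The formula and units are arbitrary, chosen only for illustration."""
    return 2.0 * c / E

# Two focal intervals per variable, each with a basic probability assignment
E_sets = [((20.0, 40.0), 0.6), ((30.0, 60.0), 0.4)]   # soil stiffness (MPa), assumed
c_sets = [((10.0, 20.0), 0.5), ((15.0, 30.0), 0.5)]   # cohesion (kPa), assumed

focal_elements = []   # (response_min, response_max, probability)
for (E_lo, E_hi), pE in E_sets:
    for (c_lo, c_hi), pc in c_sets:
        # Evaluate the model at every corner combination of the input box
        vals = [model(E, c) for E, c in product((E_lo, E_hi), (c_lo, c_hi))]
        focal_elements.append((min(vals), max(vals), pE * pc))

threshold = 1.5  # admissible displacement (mm), assumed
# Belief = mass of focal elements entirely below the threshold;
# Plausibility = mass of focal elements that could fall below it
belief = sum(p for lo, hi, p in focal_elements if hi <= threshold)
plaus = sum(p for lo, hi, p in focal_elements if lo <= threshold)
print(f"P(displacement <= {threshold} mm): between {belief:.2f} and {plaus:.2f}")
```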

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 268
892 Controlling Shape and Position of Silicon Micro-nanorolls Fabricated using Fine Bubbles during Anodization

Authors: Yodai Ashikubo, Toshiaki Suzuki, Satoshi Kouya, Mitsuya Motohashi

Abstract:

Functional microstructures such as wires, fins, needles, and rolls are currently being applied to a variety of high-performance devices. In this context, a roll structure (silicon micro-nanoroll) was formed on the surface of a silicon substrate via fine bubbles during anodization using an extremely dilute hydrofluoric acid solution (HF + H₂O). The as-formed roll had a microscale length and a width of approximately 1 µm. The rolls were wound 3-10 times, and the thickness of the film forming the rolls was about 10 nm. Thus, they are promising as a distinct device material. These rolls can function as capsules and/or pipelines. To date, the number of rolls and the roll length have been controlled by the anodization conditions. In general, controlling the position and the winding state of the rolls is required for device applications; however, this has not been discussed. Grooves formed on the silicon surface before anodization might be useful for controlling the bubbles. In this study, we investigated the effect of such grooves on the position and shape of the rolls. The surfaces of silicon wafers were anodized. The starting material was p-type (100) single-crystalline silicon wafers with a resistivity of 5-20 Ω·cm. Grooves were formed on the surface of the substrate before anodization using sandpaper and a diamond pen. The average width and depth of the grooves were approximately 1 µm and 0.1 µm, respectively. The HF concentration {HF/(HF + C₂H₅OH + H₂O)} was 0.001% by volume. The C₂H₅OH concentration {C₂H₅OH/(HF + C₂H₅OH + H₂O)} was 70%. A vertical single-tank cell and a Pt cathode were used for anodization. The silicon rolls were observed by field-emission scanning electron microscopy (FE-SEM; JSM-7100, JEOL). The atomic bonding state of the rolls was evaluated using X-ray photoelectron spectroscopy (XPS; ESCA-3400, Shimadzu). For a straight groove, the rolls formed along the groove. This indicates that the orientation of the rolls can be controlled by the grooves. For a lattice-like groove pattern, the rolls formed inside the lattice cells and along their long sides. In other words, the aspect ratio of the lattice is very important for roll formation. In addition, many rolls were formed and the winding states were not uniform when the lattice size was too large. On the other hand, no rolls were formed for a small lattice. These results indicate that there is an optimal lattice size for roll formation. In the future, we plan to form rolls using grooves made by lithography instead of sandpaper and the diamond pen. Furthermore, rolls containing nanoparticles will be formed for nanodevices.

Keywords: silicon roll, anodization, fine bubble, microstructure

Procedia PDF Downloads 18
891 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs

Authors: S. Lertsakulpasuk, S. Athichanagorn

Abstract:

As depletion-drive gas reservoirs are abandoned when the production rate becomes insufficient due to pressure depletion, waterflooding has been proposed to increase the reservoir pressure in order to prolong gas production. Due to its high cost, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a new, promising approach to increase gas recovery by maintaining reservoir pressure at a much lower cost than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs, commonly found in the Gulf of Thailand, was conducted to demonstrate the advantage of the proposed method and to determine the most suitable operational parameters for reservoirs having different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed in order to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after the gas production rate started to decline due to the continuous reduction in reservoir pressure. The dumpflood well was used to flow water from the aquifer in order to increase the pressure of the gas reservoir and drive gas towards the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for various systems having different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. This simulation study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters. For systems having a large aquifer and a large distance between wells, it is best to start water dumpflood when the gas rate is still high, since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems having a small or moderate aquifer size and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better, because water is more likely to cause an early breakthrough when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel topic. The idea of using water dumpflood to increase gas recovery has been mentioned in the literature but has never been investigated in detail. This detailed study will help practicing engineers understand the benefits of the method and implement it with minimum cost and risk.

Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs

Procedia PDF Downloads 444
890 Surge in U. S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale

Authors: Marco Sewald

Abstract:

Compared to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many cases not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. While there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in a first step re-using current surveys conducted by different researchers among the population of U.S. citizens residing abroad during the last years. These surveys question the personal situation in the context of tax, compliance, citizenship, and the likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; and (4) modeling measurement error, the degree to which observable variables describe latent variables. Moreover, SEM is very appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered a high correlation, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model in order to obtain the desired results.
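
The measurement-model/structural-model split described above can be illustrated with a small synthetic example: a latent "compliance burden" factor measured by three survey items, with a structural path to an expatriation-intent indicator. The item names, loadings, and path coefficient are invented, and the crude principal-component proxy used below is only a stand-in for proper SEM estimation.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 500
burden = rng.normal(size=n)                       # latent variable (unobservable)
# Measurement (outer) model: observed items = loading * latent + error
item_tax   = 0.9 * burden + rng.normal(0, 0.4, n)
item_forms = 0.8 * burden + rng.normal(0, 0.5, n)
item_bank  = 0.7 * burden + rng.normal(0, 0.6, n)
# Structural (inner) model: outcome depends on the latent variable
renounce_intent = 0.6 * burden + rng.normal(0, 0.7, n)

# Crude two-step check (not full SEM estimation): proxy the latent variable
# by the first principal component of its indicators, then regress the
# outcome on that proxy.
items = np.column_stack([item_tax, item_forms, item_bank])
items_c = items - items.mean(axis=0)
_, _, vt = np.linalg.svd(items_c, full_matrices=False)
proxy = items_c @ vt[0]
proxy /= proxy.std()
path = np.polyfit(proxy, renounce_intent, 1)[0]
print(f"Recovered structural path (true value 0.6, up to sign/scale): {path:+.2f}")
```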

Keywords: expatriation of U. S. citizens, SEM, structural equation modeling, validating

Procedia PDF Downloads 221
889 Depictions of Human Cannibalism and the Challenge They Pose to the Understanding of Animal Rights

Authors: Desmond F. Bellamy

Abstract:

Discourses about animal rights usually assume an ontological abyss between human and animal. This supposition of non-animality allows us to utilise and exploit non-humans, particularly those with commercial value, with little regard for their rights or interests. We can and do confine them, inflict painful treatments such as castration and branding, and slaughter them at an age determined only by financial considerations. This paper explores the way images and texts depicting human cannibalism reflect this deprivation of rights back onto our species and examines how this offers new perspectives on our granting or withholding of rights to farmed animals. The animals we eat – sheep, pigs, cows, chickens and a small handful of other species – are during processing de-animalised, turned into commodities, and made unrecognisable as formerly living beings. To do the same to a human requires the cannibal to enact another step – humans must first be considered as animals before they can be commodified or de-animalised. Different iterations of cannibalism in a selection of fiction and non-fiction texts will be considered: survivalism (necessitated by catastrophe or dystopian social collapse), the primitive savage of colonial discourses, and the inhuman psychopath. Each type of cannibalism shows alternative ways humans can be animalised and thereby dispossessed of both their human and animal rights. Human rights, summarised in the UN Universal Declaration of Human Rights as ‘life, liberty, and security of person’, are stubbornly denied to many humans, and are refused to virtually all farmed non-humans. How might this paradigm be transformed by seeing the animal victim replaced by an animalised human? People are fascinated as well as repulsed by cannibalism, as demonstrated by the upsurge of films on the subject in the last few decades. Cannibalism is, at its most basic, about envisaging and treating humans as objects: meat. It is on the dinner plate that the abyss between human and ‘animal’ is most challenged. We grasp at a conscious level that we are a species of animal and may become, if in the wrong place (e.g., shark-infested water), ‘just food’. Culturally, however, strong traditions insist that humans are much more than ‘just meat’ and deserve a better fate than torment and death. The billions of animals on death row awaiting human consumption would ask the same if they could. Depictions of cannibalism demonstrate in graphic ways that humans are animals, made of meat, and that we can also be butchered and eaten. These depictions of us as having the same fleshiness as non-human animals remind us that they have the same capacities for pain and pleasure as we do. Depictions of cannibalism, therefore, unconsciously aid in deconstructing the human/animal binary and give a unique glimpse into the often unnoticed repudiation of animal rights.

Keywords: animal rights, cannibalism, human/animal binary, objectification

Procedia PDF Downloads 138
888 Analysis of Brownfield Soil Contamination Using Local Government Planning Data

Authors: Emma E. Hellawell, Susan J. Hughes

Abstract:

Brownfield sites are currently being redeveloped for residential use. Information on soil contamination on these former industrial sites is collected as part of the planning process by local government. This research project analyses this untapped resource of environmental data, using site investigation data submitted to a local Borough Council in Surrey, UK. Over 150 site investigation reports were collected and interrogated to extract the relevant information. This study involved three phases. Phase 1 was the development of a database for soil contamination information from local government reports. This database contained information on the source, history, and quality of the data together with the chemical information on the soil that was sampled. Phase 2 involved obtaining site investigation reports for developments within the study area and extracting the required information for the database. Phase 3 was the data analysis and interpretation of key contaminants to evaluate typical levels of contaminants, their distribution within the study area, and how these results relate to current guideline levels of risk for future site users. Preliminary results for a pilot study using a sample of the dataset have been obtained. The pilot study showed that there is some inconsistency in the quality of the reports and measured data, and careful interpretation of the data is required. Analysis of the information found high levels of lead in shallow soil samples, with mean and median levels exceeding the current guidance for residential use. The data also showed elevated (but below guidance) levels of potentially carcinogenic polyaromatic hydrocarbons. Of particular concern was the high detection rate for asbestos fibers, which were found at low concentrations in 25% of the soil samples tested (although the sample set was small). Contamination levels of the remaining chemicals tested were all below the guidance levels for residential site use. These preliminary pilot study results will be expanded, and results for the whole local government area will be presented at the conference. The pilot study has demonstrated the potential for this extensive dataset to provide greater information on local contamination levels. This can help inform regulators and developers, lead to more targeted site investigations, improve risk assessments, and support brownfield development.
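To give a concrete sense of the Phase 3 analysis, the sketch below compares measured lead concentrations against a residential guideline value and computes summary statistics and an exceedance rate; the file name, column names, depth cut-off, and the guideline figure are illustrative placeholders rather than values from the study database.

```python
# Illustrative sketch of a Phase 3 style analysis (file, columns, and the
# guideline value are assumed placeholders, not data from the actual database).
import pandas as pd

GUIDELINE_LEAD_MG_KG = 200.0  # assumed residential guideline value for lead

df = pd.read_csv("site_investigation_samples.csv")  # one row per soil sample
shallow = df[df["depth_m"] <= 1.0]                  # restrict to shallow samples

lead = shallow["lead_mg_kg"].dropna()
summary = {
    "n_samples": len(lead),
    "mean_mg_kg": lead.mean(),
    "median_mg_kg": lead.median(),
    "pct_above_guideline": 100.0 * (lead > GUIDELINE_LEAD_MG_KG).mean(),
}
print(summary)
```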

Keywords: Brownfield development, contaminated land, local government planning data, site investigation

Procedia PDF Downloads 139
887 Visualization of Chinese Genealogies with Digital Technology: A Case of Genealogy of Wu Clan in the Village of Gaoqian

Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu

Abstract:

Recording history is a tradition in ancient China. A record of a dynasty makes a dynastic history; a record of a locality makes a chorography; and a record of a clan makes a genealogy. The three combined depict a complete national history of China, both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to the family trees and pedigrees of early and medieval historical times. After the Song Dynasty, civilian society gradually emerged, and the Emperor had to allow people from the same clan to live together and hold ancestor worship activities; thence the compilation of genealogies became popular in society. Since then, genealogies, regarded even today as being as important as ancestral and religious temples in a traditional village, have played a primary role in the identification of a clan and the maintenance of local social order. Chinese genealogies are rich in documentary materials. Take the Genealogy of Wu Clan in Gaoqian as an example. Gaoqian is a small village in Xianju County of Zhejiang Province. The Genealogy of Wu Clan in Gaoqian is composed of a whole set of materials, from the Foreword to the Family Trees, Family Rules, Family Rituals, Family Graces and Glories, the Ode to an Ancestor's Portrait, the Manual for the Ancestor Temple, documents about great men of the clan, works written by learned men of the clan, contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information on every aspect recorded according to stylistic rules, is indeed the carrier of the entire culture of a clan. However, due to their scarcity and the difficulty of reading them, genealogies seldom fall within the horizons of ordinary people. This paper, focusing on the case of the Genealogy of Wu Clan in the Village of Gaoqian, intends to reproduce a digital genealogy by use of ICTs, through an in-depth interpretation of the literature and field investigation in Gaoqian Village. Based on this, the paper goes further to explore general methods for transferring physical genealogies into digital ones and ways of visualizing the clanism culture embedded in the genealogies with a combination of digital technologies such as family-tree software, multimedia narratives, animation design, GIS applications, and e-book creators.

Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS

Procedia PDF Downloads 222
886 In vitro and in vivo Infectivity of Coxiella burnetii Strains from French Livestock

Authors: Joulié Aurélien, Jourdain Elsa, Bailly Xavier, Gasqui Patrick, Yang Elise, Leblond Agnès, Rousset Elodie, Sidi-Boumedine Karim

Abstract:

Q fever is a worldwide zoonosis caused by the Gram-negative obligate intracellular bacterium Coxiella burnetii. Following the recent outbreaks in the Netherlands, a hypervirulent clone was found to be the cause of severe human cases of Q fever. In livestock, the clinical manifestations of Q fever are mainly abortions. Although abortion rates differ between ruminant species, the virulence of C. burnetii remains understudied, especially in enzootic areas. In this study, the infectious potential of three C. burnetii isolates collected from French farms of small ruminants was compared to the reference strain Nine Mile (in phase II and in an intermediate phase) using an in vivo model (CD1 mice). Mice were challenged with 10⁵ live bacteria, discriminated by propidium monoazide qPCR targeting the icd gene. After footpad inoculation, the spleen and popliteal lymph node were harvested at 10 days post-inoculation (p.i.). Strain invasiveness in the spleen and popliteal nodes was assessed by qPCR assays targeting the icd gene. Preliminary results showed that the avirulent strains (in phase II) failed to pass the popliteal barrier and thus to colonize the spleen. This model allowed a significant differentiation of strain invasiveness in the biological host and therefore the identification of distinct virulence profiles. In view of these results, we plan to go further by testing fifteen additional C. burnetii isolates from French sheep, goat, and cattle farms using the above-mentioned in vivo model. All 15 strains display distant MLVA (multiple-locus variable-number tandem repeat analysis) genotypic profiles. Five of the fifteen isolates will also be tested in vitro on ovine and bovine macrophage cells. Cells and supernatants will be harvested at days 1, 2, 3, and 6 p.i. to assess the in vitro multiplication kinetics of the strains. In conclusion, our findings might help the implementation of surveillance of virulent strains and ultimately allow prophylaxis measures in livestock farms to be adapted.

Keywords: Q fever, invasiveness, ruminant, virulence

Procedia PDF Downloads 361
885 For a Poetic Clinic: Experimentations at Risk on the Images in Performances

Authors: Juliana Bom-Tempo

Abstract:

The proposed composition occurs between images, performances, clinics and philosophies. For this enterprise, we depart from what is not known beforehand, with a question as a compass: "would there be, in the creation, production and implementation of images in a performance, a 'when' for the event of a poetic clinic?" In light of this, there are, in order to think a 'when' of the event of a poetic clinic, images in performances created, produced and executed in partnership with the author of this text. Faced with this composition, we built four indicators to find spatiotemporal coordinates that would spot that "when", namely: risk zones; the mobilizations of the signs; the figuring of the flesh; and an education of the affections. We dealt with the images in performances; Crútero; Flesh; Karyogamy and the risk of abortion; Egg white; Egg-mouth; Islands, threads, words ... germs; Egg-Mouth-Debris, taken as case studies, by engendering risk zones to promote individuations, which never actualize thoroughly, thus always leaving something pre-individual and also individuating an environment; by mobilizing the signs territorialized by the ordinary, causing them to vary the language and the words of order dictated by the everyday into other compositions of sense, other machinations; by generating a figure of flesh, disarranging the bodies, isolating them in the production of a ground force that causes the body to leak out and undo the functionalities of the organs; and, finally, by producing an education of affections, placing the perceptions in becoming and disconnecting the visible in the production of small deserts that call for the creation of a people yet to come. The performance is processed as a problematizing of the images fixed by the ordinary, producing gestures that precipitate the individuation of images in performance, strange to the configurations that gather bodies and spaces in what we call common. Lawrence proposes to think of "people" who continually use umbrellas to protect themselves from chaos. These have the function of wrapping up the chaos in visions that create houses, forms and stabilities; they paint a sky at the bottom of the umbrella, where people march and die. A chaos, where people live and wither. Piercing the umbrella expresses a desire for chaos; a poet sets himself up as an enemy of convention, to be able to have an image of chaos and a little sun that burns his skin. The images in performances presented were thereby moving in search of the power to produce a spatio-temporal "when", putting the territories in risk zones, mobilizing the signs that format the day-to-day, opening the bodies to disorganization, and producing an education of affections for the event of a poetic clinic.

Keywords: experimentations, images in performances, poetic clinic, risk

Procedia PDF Downloads 114
884 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations are validated with Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical hydrodynamic model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level. The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
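As a complement to the full shallow water model, the sketch below illustrates the lumped water balance and the simple linear outflow control law mentioned above; the rain, evaporation, and inflow rates, the control gain, and the reference level are illustrative placeholders, not calibrated values from the study.

```python
# Lumped water-balance sketch for a lake with a single controlled outflow
# (illustrative only: rain, evaporation, inflow, and the control-law constants
# below are assumed placeholder values, not the calibrated Lake Victoria data).
A_LAKE = 68_800e6        # lake surface area, m^2 (68,800 km^2)
RAIN = 1.8 / 365.0       # rain on the lake, m per day (assumed)
EVAP = 1.7 / 365.0       # evaporation from the lake, m per day (assumed)
Q_IN = 600.0 * 86400.0   # river inflow, m^3 per day (assumed)
K_OUT = 5.0e8            # outflow gain, m^3 per day per m of level (assumed)
H_REF = 0.0              # reference mean water level, m (assumed)

h = 0.0                  # mean water level anomaly, m
dt = 1.0                 # time step, days
levels = []
for day in range(50 * 365):                 # fifty-year simulation horizon
    q_out = K_OUT * max(h - H_REF, 0.0)     # linear control law on mean level
    dV = (RAIN - EVAP) * A_LAKE + Q_IN - q_out
    h += dt * dV / A_LAKE                   # volume change converted to level change
    levels.append(h)

print(f"final mean level anomaly: {levels[-1]:.3f} m")
```

Even this crude balance shows how the outflow settles the level near the point where rain plus inflow offsets evaporation, mirroring the near-balance described in the abstract.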

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 227
883 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This report summarizes a simulation-based approach to estimating the current interruption behavior of a fuse element used in a DC network protecting battery banks under different stresses. Due to the internal resistance of the batteries, the short-circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multiphysics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation starts with the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, the heat transfer between the metallic strip and the adjacent materials results in melting and evaporation of the filler and housing before the aluminum strip is evaporated and the current flow in the evaporated strip is cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation ≈ 6.6 m/s). The final stage of the simulation concerns the arc and its interaction with the external circuitry. Because of the strong ablation of the filler material and the venting of the arc caused by the melting and evaporation of the filler and housing before an arc initiates, the arc is assumed to burn in almost pure ablated material. To model this arc precisely, one more step was necessary: the derivation of the transport coefficients of the plasma in ablated urethane. The results indicate that an arc current interruption in this case will not be achieved within the first tens of milliseconds. In a further study, considering two series elements, the arc was interrupted within a few milliseconds. A very important aspect in this context is the potential impact of the many broken strips parallel to the one where the arc occurs. The generated arcing voltage is also applied to the other broken strips connected in parallel with the arcing path. As the gap between the other strips is very small, a large voltage of a few hundred volts generated during the current interruption may eventually lead to a breakdown of another gap. As two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will be carried by one single arc again. This process may be repeated several times if the generated voltage is very large. The ultimate result would be that the current interruption may be delayed.
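The first (pre-arcing) heating phase can be illustrated with a lumped adiabatic estimate of the time needed for Joule heating to bring a single strip to its melting point; the strip dimensions, current share, and material data below are generic aluminum placeholder values that stand in for the detailed coupled COMSOL thermal model rather than reproducing it.

```python
# Lumped adiabatic pre-arcing estimate for one fuse strip (illustrative only;
# geometry, current, and material properties are assumed placeholder values,
# not the parameters of the simulated fuse).
RHO_EL = 2.8e-8      # electrical resistivity of aluminum, ohm*m (approx., 20 C)
RHO_M = 2700.0       # density, kg/m^3
CP = 900.0           # specific heat, J/(kg*K)
T_MELT = 660.0       # melting temperature, C
T_AMB = 25.0         # ambient temperature, C

LENGTH = 0.05        # strip length, m (assumed)
WIDTH = 2.0e-3       # strip width, m (assumed)
THICK = 0.2e-3       # strip thickness, m (assumed)
CURRENT = 400.0      # current through one of the five parallel strips, A (assumed)

area = WIDTH * THICK
resistance = RHO_EL * LENGTH / area
mass = RHO_M * LENGTH * area
power = CURRENT**2 * resistance                 # Joule heating, W
energy_to_melt = mass * CP * (T_MELT - T_AMB)   # neglects heat loss and fusion enthalpy
t_prearc = energy_to_melt / power               # adiabatic pre-arcing time, s

print(f"estimated adiabatic pre-arcing time: {t_prearc * 1000:.1f} ms")
```

Because it ignores heat loss to the filler and the latent heat of fusion, this estimate only bounds the behavior; capturing the melting of the filler and housing requires the full multiphysics simulation described in the abstract.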

Keywords: DC network, high current/low SC fuses, FEM simulation, parallel fuses

Procedia PDF Downloads 66
882 Historical Tree Height Growth Associated with Climate Change in Western North America

Authors: Yassine Messaoud, Gordon Nigh, Faouzi Messaoud, Han Chen

Abstract:

The effect of climate change on tree growth in boreal and temperate forests has received increased interest in the context of global warming. However, most studies have been conducted in small areas and with a limited number of tree species. Here, we examined the height growth responses of seventeen tree species to climate change in western North America. 37,009 stands with varying establishment dates were selected from forest inventory databases in Canada and the USA. Dominant and co-dominant trees from each stand were sampled to determine top tree height at a breast-height age of 50 years. Height was related to historical mean annual and summer temperatures, annual and summer Palmer Drought Severity Index, tree establishment date, slope, aspect, soil fertility as determined by the rate of organic matter decomposition (carbon/nitrogen ratio), geographic location (latitude, longitude, and elevation), species range (coastal, interior, and both ranges), shade tolerance, and leaf form (needle leaves, deciduous needle leaves, and broadleaves). Climate change had a mostly positive effect on tree height growth. The model explained 62.4% of the variance in height growth. Since 1880, the increase in height growth has been greater for coastal, highly shade-tolerant, and broadleaf species. Height growth increased more on steep slopes and on highly fertile soils. Greater height growth was mostly observed at the leading edge of the species range and upward in elevation. Conversely, some species showed the opposite pattern, probably due to increased drought (coastal Mediterranean area), increased precipitation and cloudiness (Alaska and British Columbia), and the peculiar topography of western North America (higher latitudes at lower elevations and vice versa). This study highlights the role of species ecological amplitude and traits, and of geographic location, as the main factors determining the growth response, and its magnitude, to recent global climate change.
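A minimal sketch of the kind of height-climate regression described above is given below, using ordinary least squares in statsmodels; the file name, column names, and formula terms are hypothetical placeholders, and the actual study may have used a different model structure or estimation method.

```python
# Illustrative regression sketch (file, column names, and model terms are
# assumed placeholders; not the exact model fitted in the study).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("stand_top_heights.csv")  # one row per stand, hypothetical layout

# Top height at breast-height age 50 regressed on climate and site covariates
model = smf.ols(
    "top_height_50 ~ mean_annual_temp + summer_temp + annual_pdsi"
    " + establishment_year + slope + cn_ratio + C(species_range) + C(leaf_form)",
    data=df,
).fit()

print(model.summary())
print("R-squared:", round(model.rsquared, 3))
```

In such a model, the coefficient on establishment year (or its interaction with species traits) is what carries the historical growth-change signal summarized by the 62.4% explained variance above.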

Keywords: height growth, global climate change, species range, species characteristics, species ecological amplitude, geographic locations, western North America

Procedia PDF Downloads 185
881 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell

Authors: Deborah Eric, Abbas Ahmad Khan

Abstract:

Lead halide perovskite quantum dots have attracted immense scientific and technological interest for successful photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx-based quantum dots for use in intermediate band solar cells (IBSC). These materials exhibit optical and electrical properties distinct from their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyze the electronic properties of quantum dots. This layer of quantum dots optimizes the position and bandwidth of the IB, which lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD) with spherical, cubic, and conical geometries has been embedded in a CsPbBr3 matrix. Bound-state wavefunctions give rise to minibands, which result in the formation of the IB. If there is more than one miniband, then there is a possibility of having more than one IB. Optimization of the QD size results in more IBs in the forbidden region. The one-band time-independent Schrödinger equation with the effective mass approximation and a step potential barrier is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation to calculate the eigenenergies, and the quasi-bound states are obtained from an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through neighboring barriers of CsPbI3. Electronic states are computed using the Schrödinger equation with the effective mass approximation, considering the quantum dot and wetting layer assembly. Results have shown that varying the quantum dot size affects the energy pinning of the QD. Changes in the ground, first, and second state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries. Quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have the maximum ground state energy at a small radius. Increasing the wetting layer thickness yields energy signatures similar to the bulk material for each QD size.
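For readers unfamiliar with the effective-mass eigenvalue problem, the sketch below solves a one-dimensional finite potential well by finite differences and lists the lowest confined energies; the well width, barrier height, and effective mass are generic placeholders rather than fitted MAPbI3/CsPbBr3 parameters, and the full study uses a 3D geometry with BenDaniel-Duke boundary conditions.

```python
# Minimal sketch (not the authors' code): bound states of a 1D finite potential
# well in the effective-mass approximation, solved by finite differences.
# The material parameters below are illustrative placeholders.
import numpy as np

HBAR2_OVER_2M0 = 3.81  # eV * Angstrom^2 (hbar^2 / 2 m0)
M_EFF = 0.2            # effective mass in units of m0 (assumed)
L_WELL = 60.0          # well (QD) width, Angstrom (assumed)
V0 = 0.4               # barrier height, eV (assumed)

N = 1200
x = np.linspace(-200.0, 200.0, N)
dx = x[1] - x[0]
V = np.where(np.abs(x) <= L_WELL / 2, 0.0, V0)

# Kinetic term: -(hbar^2 / 2 m*) d^2/dx^2 discretized on a uniform grid
t = HBAR2_OVER_2M0 / (M_EFF * dx**2)
H = (np.diag(2.0 * t + V)
     + np.diag(-t * np.ones(N - 1), 1)
     + np.diag(-t * np.ones(N - 1), -1))

E, psi = np.linalg.eigh(H)
bound = E[E < V0]
print("lowest confined energies (eV):", np.round(bound[:3], 4))
```

Sweeping L_WELL in this sketch reproduces, in 1D, the size-dependent shift of the ground, first, and second state energies discussed above; closely spaced dots would broaden these discrete levels into the minibands that form the intermediate band.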

Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation

Procedia PDF Downloads 165
880 Ultrasonic Micro Injection Molding: Manufacturing of Micro Plates of Biomaterials

Authors: Ariadna Manresa, Ines Ferrer

Abstract:

Introduction: The ultrasonic molding process (USM) is a recent injection technology used to manufacture micro components. It melts only small amounts of material, so material waste is considerably reduced compared with conventional micro injection molding. This is an important advantage when the materials are expensive, as is the case for medical biopolymers. Micro-scale components are involved in a variety of uses, such as biomedical applications. Replication fidelity is required, so it is important to stabilize the process and minimize the variability of the responses. The aim of this research is to investigate the influence of the main process parameters on the filling behavior, the dimensional accuracy, and the cavity pressure when a micro-plate is manufactured from biomaterials such as PLA and PCL. Methodology or Experimental Procedure: The specimens are manufactured using a Sonorus 1G Ultrasound Micro Molding Machine. The geometry used is a rectangular micro-plate of 15 x 5 mm with a thickness of 1 mm. The materials used for the investigation are PLA and PCL due to their biocompatibility and degradation properties. The experimentation is divided into two phases. Firstly, in Phase 1, the influence of the process parameters (vibration amplitude, sonotrode velocity, ultrasound time, and compaction force) on the filling behavior is analysed. Next, in Phase 2, once cavity filling is assured, the influence of both cooling time and compaction force on the cavity pressure, part temperature, and dimensional accuracy is investigated. Results and Discussion: Filling behavior depends on sonotrode velocity and vibration amplitude. When the ultrasound time is longer, more ultrasonic energy is applied and the polymer temperature increases. Depending on the cooling time, it is possible that the micro-plate is still too warm when the mold is opened. Consequently, the polymer relieves its stored internal energy (ultrasonic and thermal) by expanding in the easiest direction. This is reflected in the dimensional accuracy, producing micro-plates thicker than the mold cavity. It has also been observed that the factor that most affects cavity pressure is the compaction configuration during the manufacturing cycle. Conclusions: This research demonstrated the influence of the process parameters on the final micro-plates manufactured. Future work will focus on manufacturing other geometries and analysing the mechanical properties of the specimens.
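As a sketch of how the Phase 1 experimental plan could be laid out, the snippet below enumerates a full-factorial design over the four process parameters; the specific factor levels are hypothetical placeholders, not the settings actually used in the study.

```python
# Full-factorial run plan sketch for Phase 1 (factor levels are assumed
# placeholder values, not the actual settings used on the Sonorus 1G machine).
from itertools import product

factors = {
    "vibration_amplitude_um": [20, 30, 40],
    "sonotrode_velocity_mm_s": [2, 5],
    "ultrasound_time_s": [1.0, 1.5, 2.0],
    "compaction_force_N": [200, 400],
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} runs in the full-factorial plan")
for i, run in enumerate(runs[:3], start=1):   # preview the first few runs
    print(i, run)
```

Replicating each run and randomizing the run order would be the usual way to keep the response variability estimates honest, in line with the stated goal of stabilizing the process.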

Keywords: biomaterial, biopolymer, micro injection molding, ultrasound

Procedia PDF Downloads 284