Search results for: strength prediction models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11658

738 Experimental Evaluation of Foundation Settlement Mitigations in Liquefiable Soils using Press-in Sheet Piling Technique: 1-g Shake Table Tests

Authors: Md. Kausar Alam, Ramin Motamed

Abstract:

The damaging effects of liquefaction-induced ground movements have been frequently observed in past earthquakes, such as the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand and the 2011 Tohoku earthquake in Japan. To reduce the consequences of soil liquefaction at shallow depths, various ground improvement techniques have been utilized in engineering practice, among which this research focuses on experimentally evaluating the press-in sheet piling technique. The press-in sheet pile technique eliminates the vibration, hammering, and noise pollution associated with dynamic sheet pile installation methods. Unfortunately, there are limited experimental studies on the press-in sheet piling technique for liquefaction mitigation using 1g shake table tests in which all the controlling mechanisms of liquefaction-induced foundation settlement, including sand ejecta, can be realistically reproduced. In this study, a series of moderate-scale 1g shake table experiments was conducted at the University of Nevada, Reno, to evaluate the performance of this technique in liquefiable soil layers. First, a 1/5-scale model was developed based on a recent UC San Diego shaking table experiment. The scaled model had a relative density of 50% for the top crust, 40% for the intermediate liquefiable layer, and 85% for the bottom dense layer. Second, a shallow foundation was seated atop the unsaturated sandy soil crust. Third, in a series of tests, a sheet pile with variable embedment depth was inserted into the liquefiable soil around the shallow foundation using the press-in technique. The scaled models were subjected to harmonic input motions with amplitude and dominant frequency properly scaled based on the large-scale shake table test. This study assesses the performance of the press-in sheet piling technique in terms of reductions in the foundation movements (settlement and tilt) and generated excess pore water pressures. In addition, this paper discusses the cost-effectiveness and carbon footprint features of the studied mitigation measures.
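
As a rough, purely illustrative sketch of how a harmonic input motion can be scaled down from a prototype test, the following Python snippet generates a model motion from assumed prototype parameters; the amplitude and frequency scale factors are placeholders, not the similitude law actually used in the study.

```python
import numpy as np

# Hypothetical prototype (large-scale test) motion parameters -- illustrative only.
A_PROTO = 0.25      # peak acceleration of the prototype motion, g
F_PROTO = 2.0       # dominant frequency of the prototype motion, Hz
DURATION = 10.0     # prototype motion duration, s
GEOM_SCALE = 5      # 1/5 length scale reported for the model

# Generic similitude factors; the actual factors depend on the adopted 1g scaling
# law and are assumptions here.
accel_factor = 1.0                   # acceleration often kept close to 1 in 1g tests
freq_factor = GEOM_SCALE ** 0.5      # assumed frequency scaling exponent

t = np.arange(0.0, DURATION / freq_factor, 0.001)
a_model = accel_factor * A_PROTO * np.sin(2 * np.pi * (freq_factor * F_PROTO) * t)

print(f"model PGA = {a_model.max():.3f} g, "
      f"model dominant frequency = {freq_factor * F_PROTO:.2f} Hz")
```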

Keywords: excess pore water pressure, foundation settlement, press-in sheet pile, soil liquefaction

Procedia PDF Downloads 97
737 A Network Economic Analysis of Friendship, Cultural Activity, and Homophily

Authors: Siming Xie

Abstract:

In social networks, the term homophily refers to the tendency of agents with similar characteristics to link with one another, a pattern robustly observed across many contexts and dimensions. The starting point of my research is the observation that the “type” of agents is not a single exogenous variable. Agents, despite their differences in race, religion, and other hard-to-alter characteristics, may share interests and engage in activities that cut across those predetermined lines. This research aims to capture the interactions of homophily effects in a model where agents have two-dimensional characteristics (i.e., race and personal hobbies such as basketball, which one either likes or dislikes) and where meeting opportunities are biased in favor of same-type friendships. A novel feature of my model is a matching process with biased meeting probabilities on different dimensions, which could help to understand the structuring process in multidimensional networks without missing layer interdependencies. The main contribution of this study is providing a welfare-based matching process for agents with multi-dimensional characteristics. In particular, this research shows that biases in meeting opportunities on one dimension lead to the emergence of homophily on the other dimension. The objective of this research is to determine the pattern of homophily in network formation, which will shed light on our understanding of segregation and its remedies. By constructing a two-dimensional matching process, this study explores a method to describe agents’ homophilous behavior in a multidimensional social network and constructs a game in which minorities and majorities play different strategies in a society. It also shows that the optimal strategy is determined by the relative group size, and that society would suffer more from social segregation if the two racial groups are of similar size. The research also has political implications: cultivating shared characteristics among agents helps diminish social segregation, but only if the minority group is small enough. This research includes both theoretical models and empirical analysis. After developing the friendship formation model, the author first uses MATLAB to perform iteration calculations, then derives the corresponding mathematical proofs of the results, and finally shows that the model is consistent with empirical evidence from high school friendships. The anonymized data come from The National Longitudinal Study of Adolescent Health (Add Health).
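
The biased matching process can be illustrated with a minimal Monte Carlo sketch (not the author's MATLAB code): agents carry two binary characteristics, same-type pairs are more likely to meet on each dimension, and the resulting share of same-type friendships is compared with the random-matching benchmark. All group shares and bias factors below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000

# Two binary characteristics per agent: dimension 0 (e.g., group) and
# dimension 1 (e.g., hobby). Trait shares are hypothetical.
traits = np.column_stack([
    rng.random(N) < 0.3,   # minority share on dimension 0
    rng.random(N) < 0.5,   # hobby share on dimension 1
]).astype(int)

BIAS = np.array([3.0, 1.5])  # assumed meeting-weight multiplier for same-type pairs, per dimension

def meeting_weight(i, j):
    """Relative probability that agents i and j meet; higher for same-type pairs."""
    same = traits[i] == traits[j]
    return np.prod(np.where(same, BIAS, 1.0))

# Sample candidate pairs and accept them in proportion to their meeting weight.
pairs = rng.integers(0, N, size=(20000, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
weights = np.array([meeting_weight(i, j) for i, j in pairs])
accepted = pairs[rng.random(len(pairs)) < weights / weights.max()]

for d, name in enumerate(["dimension 0", "dimension 1"]):
    p = traits[:, d].mean()
    random_share = p**2 + (1 - p)**2   # same-type share expected under unbiased matching
    same_share = np.mean(traits[accepted[:, 0], d] == traits[accepted[:, 1], d])
    print(f"{name}: same-type share = {same_share:.2f} (random benchmark {random_share:.2f})")
```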

Keywords: homophily, multidimension, social networks, friendships

Procedia PDF Downloads 170
736 Optimization Approach to Integrated Production-Inventory-Routing Problem for Oxygen Supply Chains

Authors: Yena Lee, Vassilis M. Charitopoulos, Karthik Thyagarajan, Ian Morris, Jose M. Pinto, Lazaros G. Papageorgiou

Abstract:

With globalisation, the need for better coordination of production and distribution decisions has become increasingly important for industrial gas companies in order to remain competitive in the marketplace. In this work, we investigate a problem that integrates production, inventory, and routing decisions in a liquid oxygen supply chain. The oxygen supply chain consists of production facilities, external third-party suppliers, and multiple customers, including hospitals and industrial customers. The product produced by the plants or sourced from the competitors, i.e., third-party suppliers, is distributed by a fleet of heterogeneous vehicles to satisfy customer demands. The objective is to minimise the total operating cost involving production, third-party, and transportation costs. The key decisions for production include production and inventory levels and the product amount from third-party suppliers. In contrast, the distribution decisions involve customer allocation, delivery timing, delivery amount, and vehicle routing. The optimisation of the coordinated production, inventory, and routing decisions is a challenging problem, especially when dealing with large-size problems. Thus, we present a two-stage procedure to solve the integrated problem efficiently. First, the problem is formulated as a mixed-integer linear programming (MILP) model by simplifying the routing component. The solution from the first-stage MILP model yields the optimal customer allocation, production and inventory levels, and delivery timing and amount. Then, we fix the previous decisions and solve a detailed routing problem. In the second stage, we propose a column generation scheme to address the computational complexity of the resulting detailed routing problem. A case study considering a real-life oxygen supply chain in the UK is presented to illustrate the capability of the proposed models and solution method. Furthermore, a comparison of the solutions from the proposed approach with the corresponding solutions provided by existing metaheuristic techniques (e.g., guided local search and tabu search algorithms) is presented to evaluate its efficiency.
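
A heavily simplified, single-period version of the first-stage allocation problem can be sketched as a small MILP; the snippet below uses PuLP as an example modelling interface, with entirely hypothetical plants, customers, demands and costs, and it omits inventory dynamics and routing detail.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, LpStatus, lpSum

# Hypothetical single-period data for a toy oxygen supply chain (illustrative only).
plants = ["P1", "P2"]
customers = ["hospital_A", "hospital_B", "industrial_C"]
demand = {"hospital_A": 40, "hospital_B": 25, "industrial_C": 60}          # tonnes
capacity = {"P1": 70, "P2": 50}
prod_cost = {"P1": 90.0, "P2": 110.0}                                      # per tonne
third_party_cost, contract_fixed_cost = 160.0, 500.0
transport_cost = {("P1", "hospital_A"): 8, ("P1", "hospital_B"): 12, ("P1", "industrial_C"): 15,
                  ("P2", "hospital_A"): 14, ("P2", "hospital_B"): 6, ("P2", "industrial_C"): 9}

model = LpProblem("oxygen_allocation", LpMinimize)
ship = LpVariable.dicts("ship", [(p, c) for p in plants for c in customers], lowBound=0)
buy = LpVariable.dicts("third_party", customers, lowBound=0)
use_tp = LpVariable.dicts("use_third_party", customers, cat=LpBinary)

# Objective: production + (simplified) transportation + third-party sourcing costs.
model += (lpSum((prod_cost[p] + transport_cost[(p, c)]) * ship[(p, c)]
                for p in plants for c in customers)
          + lpSum(third_party_cost * buy[c] + contract_fixed_cost * use_tp[c]
                  for c in customers))

for c in customers:                                   # demand satisfaction
    model += lpSum(ship[(p, c)] for p in plants) + buy[c] >= demand[c]
    model += buy[c] <= demand[c] * use_tp[c]          # sourcing only if a contract is active
for p in plants:                                      # plant capacity
    model += lpSum(ship[(p, c)] for c in customers) <= capacity[p]

model.solve()
print(LpStatus[model.status], "total cost =", model.objective.value())
```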

Keywords: production planning, inventory routing, column generation, mixed-integer linear programming

Procedia PDF Downloads 112
735 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations

Authors: Sini George

Abstract:

Active learning pedagogies are valued for their success in increasing 21st-century learners’ engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculation test, in addition to the standard progression requirements. Many students struggled to achieve this requirement in the past. It was also challenging to teach these concepts to students of a large class (> 130) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with voice-over by the lecturer were provided in advance of a total of four teaching sessions (two hours/session), incorporating the core content of each session and talking through how to approach the calculations in order to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to determine that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in group settings (two students/group) and to discuss the questions created by their peers in their groups to promote deep conceptual learning. Students were also given time for a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets: the test grades for students who received blended learning instruction and the test grades for students who received instruction in a standard lecture format in class, to compare the effectiveness of each type of instruction. Student responses and their performance data on the assessment indicate that learning the content through the blended learning instructional approach led to higher levels of student engagement, satisfaction, and more substantial learning gains. The blended learning approach enabled each student to learn how to do the calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable method to incorporate active learning methodologies while still maintaining broad content coverage. Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
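
The comparison of pass rates described above can be illustrated with a hedged sketch of a two-proportion test; the counts below are hypothetical and merely show the kind of binomial comparison reported.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# Hypothetical pass counts on the 70%-threshold calculations test (illustrative only).
passed = np.array([118, 96])    # [blended-learning cohort, traditional-lecture cohort]
cohort = np.array([130, 128])

z, p = proportions_ztest(passed, cohort)
rates = passed / cohort
ci_blended = proportion_confint(passed[0], cohort[0], method="wilson")

print(f"pass rate blended = {rates[0]:.2%}, lecture = {rates[1]:.2%}")
print(f"two-proportion z = {z:.2f}, p = {p:.4f}")
print(f"95% Wilson CI for the blended cohort: ({ci_blended[0]:.2%}, {ci_blended[1]:.2%})")
```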

Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations

Procedia PDF Downloads 172
734 Developing Commitment to Change in Egyptian Modern Bureaucracies

Authors: Nada Basset

Abstract:

Purpose: To examine the nature of the civil service sector as an employer by identifying likely ways to develop employees’ commitment towards change in the civil service sector. Design/Methodology/Approach: A qualitative research approach was followed. Data were collected via a triangulation of interviews, non-participant observation and archival document analysis. Non-probability sampling took place, with a case-study method applied to a sample of 33 civil servants working in the Egyptian Ministry of State for Administrative Development (MSAD), which is the civil service entity acting as the change agent responsible for managing the government administrative reform plan in the civil service sector. All study participants were actually working in one of the change projects/programmes and had a minimum of 12 months of service in the civil service. Interviews were digitally recorded and transcribed in the form of MS-Word documents, and data transcripts were analyzed manually using MS-Excel worksheets; main research themes were developed and statistics drawn using those Excel worksheets. Findings: The results demonstrate that developing civil servants’ commitment towards change may require a number of suggested solutions, such as (1) employee involvement and participation in the planning and implementation processes, (2) linking employee support for change to some tangible rewards and incentives, (3) appointing some inspirational change leaders to act as role models, and (4) as a last resort, enforcing employees’ commitment towards change by coercion and authoritarianism. Practical Implications: It is clear that civil servants’ lack of organizational commitment is not directly related to their level of commitment towards change. The research findings showed that civil servants’ commitment towards change can be raised and promoted by getting them involved in the planning and implementation processes, as this develops a sense of belongingness and ownership; thus, there is a fair chance that civil servants with low organizational commitment can develop high commitment towards change, provided they are given a favorable environment in which they are invited to participate and get involved in the change effort. Originality/Value: The research addresses a relatively new area of ‘developing organizational commitment in modern bureaucracies’ by investigating the levels of civil servants’ commitment towards their jobs and/or organizations on the one hand, and by suggesting different ways of developing their commitment towards administrative reform and change initiatives in the Egyptian civil service sector on the other.

Keywords: change, commitment, Egypt, bureaucracy

Procedia PDF Downloads 483
733 Recommendations for Teaching Word Formation for Students of Linguistics Using Computer Terminology as an Example

Authors: Svetlana Kostrubina, Anastasia Prokopeva

Abstract:

This research presents a comprehensive study of word formation processes in computer terminology in English and Russian and provides learners with a system of exercises for training these skills. Its originality lies in the comparative approach, which reveals both general patterns and specific features of English and Russian computer term formation. A key contribution is the development of a system of exercises for training computer terminology based on Bloom’s taxonomy. The data contain 486 units (228 English terms from the Glossary of Computer Terms and 258 Russian terms from the Terminological Dictionary-Reference Book). The objective is to identify the main affixation models in the formation of English and Russian computer terms and to develop exercises. To achieve this goal, the authors employed Bloom’s taxonomy as a methodological framework to create a systematic exercise program aimed at enhancing students’ cognitive skills in analyzing, applying, and evaluating computer terms. The exercises are appropriate for various levels of learning, from basic recall of definitions to higher-order thinking skills, such as synthesizing new terms and critically assessing their usage in different contexts. The methodology also includes: a method of scientific and theoretical analysis for systematization of linguistic concepts and clarification of the conceptual and terminological apparatus; a method of nominative and derivative analysis for identifying word-formation types; a method of word-formation analysis for organizing linguistic units; a classification method for determining structural types of abbreviations applicable to the field of computer communication; a quantitative analysis technique for determining the productivity of methods for forming abbreviations of computer vocabulary based on the English and Russian computer terms, together with a technique of tabular data processing for a visual presentation of the results obtained; and a technique of interlingual comparison for identifying common and different features of abbreviations of computer terms in Russian and English. The research shows that affixation retains its productivity in the formation of English and Russian computer terms. Bloom’s taxonomy allows us to plan a training program and predict the effectiveness of the compiled program based on the assessment of the teaching methods used.
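
The productivity counts mentioned in the methodology can be illustrated with a tiny sketch that tallies affixes over a hypothetical sample of terms (the real study works with 486 units drawn from the two dictionaries).

```python
from collections import Counter

# A small, hypothetical sample of English computer terms tagged with their
# word-formation affix; the tags and terms are illustrative only.
terms = [
    ("encryption", "-ion"), ("compression", "-ion"), ("debugger", "-er"),
    ("compiler", "-er"), ("loader", "-er"), ("scalable", "-able"),
    ("executable", "-able"), ("reboot", "re-"), ("reinstall", "re-"),
    ("decode", "de-"),
]

productivity = Counter(affix for _, affix in terms)
total = sum(productivity.values())
for affix, count in productivity.most_common():
    print(f"{affix:>6}: {count} terms ({count / total:.0%} of the sample)")
```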

Keywords: word formation, affixation, computer terms, Bloom's taxonomy

Procedia PDF Downloads 14
732 Functional Performance Needs of Individuals with Intellectual and Developmental Disabilities

Authors: Noor Taleb Ismael, Areej Abd Al Kareem Al Titi, Ala'a Fayez Jaber

Abstract:

Objectives: To investigate self-perceived functional performance among adults with IDD who are residents of Jordanian residential care and rehabilitation centers; to investigate their functional abilities (i.e., motor and cognitive); and to determine the motor and cognitive predictors of their functional performance. Methods: The study utilized a cross-sectional descriptive design; the sample included 180 individuals with IDD (90 males and 90 females) aged 18 to 75 years. The inclusion criteria were: 1) adults with an IDD confirmed by their physician and 2) residence in Jordanian Residential Care and Rehabilitation Centers affiliated with the Jordanian Ministry of Social Development. The exclusion criteria were: 1) being bedridden or totally dependent on care providers and 2) having had an accident or an acquired neurological condition. Researchers conducted semi-structured interviews to complete the outcome measures, which include the Canadian Occupational Performance Measure (COPM), the Functional Independence Measure (FIM), the Montreal Cognitive Assessment (MoCA), the Mini-Mental Status Examination (MMSE), and a sociodemographic questionnaire. Data analyses consisted of descriptive statistics, analysis of frequencies, correlation, and regression analyses. Results: Individuals with IDD showed low functional performance in all daily life areas, including self-care, productivity, and leisure, as well as severe cognitive impairment and poor independence and functional performance (COPM performance M = 1.433, SD = 0.57021; COPM satisfaction M = 1.31, SD = 0.54; FIM M = 3.673, SD = 1.7918). Two predictive models were validated, for the COPM performance and FIM total scores. First, significant predictors of high self-perceived functional performance on the COPM were high scores on the FIM motor subscale, high scores on the FIM cognitive subscale, young age, and having a high school educational level (R2 = 0.603, p = 0.012). Second, significant predictors of high functional capacity on the FIM were a high score on the COPM performance subscale, a high MMSE score, and having a cerebral palsy (CP) diagnosis (R2 = 0.671, p < 0.001). Conclusions: Evaluating functional performance and associated factors is important in rehabilitation to provide better services and improve health and QoL for individuals with IDD. This study suggests conducting future studies targeting community-integrated individuals with IDD who live with their families.
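
The kind of predictive model reported (predictors of COPM performance) can be sketched with an ordinary least squares regression on synthetic data; the variable names and the data-generating process below are assumptions for illustration only and do not reproduce the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 180  # sample size reported in the study; the records below are synthetic

df = pd.DataFrame({
    "fim_motor": rng.uniform(13, 91, n),
    "fim_cognitive": rng.uniform(5, 35, n),
    "age": rng.integers(18, 76, n),
    "high_school": rng.integers(0, 2, n),
})
# Synthetic outcome loosely following the reported direction of effects.
df["copm_performance"] = (0.02 * df["fim_motor"] + 0.03 * df["fim_cognitive"]
                          - 0.01 * df["age"] + 0.3 * df["high_school"]
                          + rng.normal(0, 0.4, n))

model = smf.ols("copm_performance ~ fim_motor + fim_cognitive + age + high_school",
                data=df).fit()
print(model.summary().tables[1])
print(f"R-squared = {model.rsquared:.3f}")
```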

Keywords: functional performance, intellectual and developmental disability, cognitive abilities, motor abilities

Procedia PDF Downloads 48
731 Removal of Methylene Blue from Aqueous Solution by Adsorption onto Untreated Coffee Grounds

Authors: N. Azouaou, H. Mokaddem, D. Senadjki, K. Kedjit, Z. Sadaoui

Abstract:

Introduction: Water contamination caused by dye industries, including food, leather, textile, plastic, cosmetics, paper-making, printing and dye synthesis, has attracted increasing attention, since most dyes are harmful to human beings and the environment. Untreated coffee grounds were used as a high-efficiency adsorbent for the removal of a cationic dye (methylene blue, MB) from aqueous solution. Characterization of the adsorbent was performed using several techniques such as SEM, surface area (BET), FTIR and the pH at the point of zero charge. The effects of contact time, adsorbent dose, initial solution pH and initial concentration were systematically investigated. Results showed the adsorption kinetics followed the pseudo-second-order kinetic model. The Langmuir isotherm model is in good agreement with the experimental data as compared to the Freundlich and D–R models. The maximum adsorption capacity was found to be 52.63 mg/g. In addition, a possible adsorption mechanism was also proposed based on the experimental results. Experimental: The adsorption experiments were carried out in batch mode at room temperature. A given mass of adsorbent was added to the methylene blue (MB) solution and the mixture was agitated for a certain time. Samples were taken at regular time intervals. The concentrations of MB left in the supernatant solutions after different time intervals were determined using a UV–vis spectrophotometer. The amount of MB adsorbed per unit mass of coffee grounds (qt) and the dye removal efficiency (R %) were evaluated. Results and Discussion: Some chemical and physical characteristics of the coffee grounds are presented, and the morphological analysis of the adsorbent was also studied. Conclusions: The good capacity of untreated coffee grounds to remove MB from aqueous solution was demonstrated in this study, highlighting its potential for effluent treatment processes. The kinetic experiments show that the adsorption is rapid, with the maximum adsorption capacity qmax = 52.63 mg/g achieved in 30 min. The adsorption process is a function of the adsorbent concentration, pH and dye concentration. The optimal parameters found are an adsorbent dose of m = 5 g, pH = 5 and ambient temperature. FTIR spectra showed that the principal functional sites taking part in the sorption process included carboxyl and hydroxyl groups.
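
For readers who want to reproduce this kind of analysis, the standard pseudo-second-order and Langmuir expressions can be fitted with a short script; the data points below are hypothetical stand-ins, not the measured values of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q_t = k2*qe^2*t / (1 + k2*qe*t), the standard pseudo-second-order form."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

def langmuir(ce, qmax, kl):
    """q_e = qmax*KL*Ce / (1 + KL*Ce), the Langmuir isotherm."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Hypothetical batch data (illustrative only).
t = np.array([2, 5, 10, 15, 20, 30, 45, 60])          # min
qt = np.array([18, 30, 40, 45, 48, 50, 51, 51.5])      # mg/g
ce = np.array([2, 5, 10, 20, 40, 80])                   # mg/L at equilibrium
qe = np.array([15, 27, 36, 44, 49, 51])                 # mg/g

(fit_qe, fit_k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[50, 0.01])
(fit_qmax, fit_kl), _ = curve_fit(langmuir, ce, qe, p0=[50, 0.1])

print(f"pseudo-second-order: qe = {fit_qe:.1f} mg/g, k2 = {fit_k2:.4f} g/(mg*min)")
print(f"Langmuir: qmax = {fit_qmax:.1f} mg/g, KL = {fit_kl:.3f} L/mg")
```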

Keywords: adsorption, methylene blue, coffee grounds, kinetic study

Procedia PDF Downloads 232
730 Multifunctional Epoxy/Carbon Laminates Containing Carbon Nanotubes-Confined Paraffin for Thermal Energy Storage

Authors: Giulia Fredi, Andrea Dorigato, Luca Fambri, Alessandro Pegoretti

Abstract:

Thermal energy storage (TES) is the storage of heat for later use, thus filling the gap between energy request and supply. The most widely used materials for TES are the organic solid-liquid phase change materials (PCMs), such as paraffin. These materials store/release a high amount of latent heat thanks to their high specific melting enthalpy, operate in a narrow temperature range and have a tunable working temperature. However, they suffer from a low thermal conductivity and need to be confined to prevent leakage. These two issues can be tackled by confining PCMs with carbon nanotubes (CNTs). TES applications include the building industry, solar thermal energy collection and thermal management of electronics. In most cases, TES systems are an additional component to be added to the main structure, but if weight and volume savings are key issues, it would be advantageous to embed the TES functionality directly in the structure. Such multifunctional materials could be employed in the automotive industry, where the diffusion of lightweight structures could complicate the thermal management of the cockpit environment or of other temperature-sensitive components. This work aims to produce epoxy/carbon structural laminates containing CNT-stabilized paraffin. CNTs were added to molten paraffin in a fraction of 10 wt%, as this was the minimum amount at which no leakage was detected above the melting temperature (45°C). The paraffin/CNT blend was cryogenically milled to obtain particles with an average size of 50 µm. They were added in various percentages (20, 30 and 40 wt%) to an epoxy/hardener formulation, which was used as a matrix to produce laminates through a wet layup technique, by stacking five plies of a plain carbon fiber fabric. The samples were characterized microstructurally, thermally and mechanically. Differential scanning calorimetry (DSC) tests showed that the paraffin kept its ability to melt and crystallize also in the laminates, and the melting enthalpy was almost proportional to the paraffin weight fraction. These thermal properties were retained after fifty heating/cooling cycles. Laser flash analysis showed that the thermal conductivity through the thickness increased with increasing PCM content, due to the presence of CNTs. The ability of the developed laminates to contribute to thermal management was also assessed by monitoring their cooling rates with a thermal camera. Three-point bending tests showed that the flexural modulus was only slightly impaired by the presence of the paraffin/CNT particles, while a more appreciable decrease of the stress and strain at break and of the interlaminar shear strength was detected. Optical and scanning electron microscope images revealed that these effects could be attributed to the preferential location of the PCM in the interlaminar region. These results demonstrated the feasibility of multifunctional structural TES composites and highlighted that the PCM size and distribution affect the mechanical properties. In this perspective, this group is working on the encapsulation of paraffin in a sol-gel derived organosilica shell. Submicron spheres have been produced, and the current activity focuses on the optimization of the synthesis parameters to increase the emulsion efficiency.
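
The reported near-proportionality between melting enthalpy and paraffin content follows a simple rule of mixtures, illustrated below with assumed numbers (the actual enthalpy of the paraffin/CNT blend and the laminate composition are assumptions, not values taken from the study).

```python
# Rule-of-mixtures illustration: expected melting enthalpy of the laminate is roughly
# the effective paraffin weight fraction times the enthalpy of the paraffin/CNT blend.
DH_BLEND = 170.0            # J/g, assumed melting enthalpy of the paraffin/CNT blend
MATRIX_FRACTION = 0.45      # assumed weight fraction of matrix (epoxy + PCM) in the laminate

for pcm_in_matrix in (0.20, 0.30, 0.40):   # PCM loadings quoted in the abstract
    pcm_in_laminate = pcm_in_matrix * MATRIX_FRACTION
    expected_dh = pcm_in_laminate * DH_BLEND
    print(f"{pcm_in_matrix:.0%} PCM in matrix -> ~{pcm_in_laminate:.0%} in laminate, "
          f"expected enthalpy ~ {expected_dh:.0f} J/g of laminate")
```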

Keywords: carbon fibers, carbon nanotubes, lightweight materials, multifunctional composites, thermal energy storage

Procedia PDF Downloads 160
729 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required for geotechnical analyses. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth bounds on system responses are obtained. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method in reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each intended deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
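
The core random set bookkeeping, combining interval-valued inputs with probability assignments and extracting Belief/Plausibility bounds on a response, can be sketched as follows; the surrogate displacement function, the ranges and the masses are illustrative assumptions, not the finite element model or the data of the study.

```python
import itertools

# Toy response model standing in for the finite element analysis: horizontal
# displacement as a simple monotone function of two uncertain soil parameters.
def wall_displacement(e_soil, phi):
    return 2000.0 / e_soil + 0.8 * (40.0 - phi)   # mm, hypothetical surrogate

# Each input has two ranges (from two information sources) with basic
# probability assignments; the values are assumptions for illustration.
focal_sets = {
    "E_soil": [((20.0, 40.0), 0.6), ((30.0, 60.0), 0.4)],   # MPa
    "phi":    [((28.0, 34.0), 0.5), ((30.0, 38.0), 0.5)],   # degrees
}

threshold = 70.0   # mm, hypothetical limiting displacement
belief = plausibility = 0.0

for (e_rng, m_e), (p_rng, m_p) in itertools.product(*focal_sets.values()):
    mass = m_e * m_p
    # Propagate the corner combinations of the joint focal element (valid here
    # because the surrogate is monotone in both inputs).
    corners = [wall_displacement(e, p) for e in e_rng for p in p_rng]
    lo, hi = min(corners), max(corners)
    if hi <= threshold:          # focal element entirely satisfies the criterion
        belief += mass
    if lo <= threshold:          # focal element at least partially satisfies it
        plausibility += mass

print(f"Belief(displacement <= {threshold} mm) = {belief:.2f}")
print(f"Plausibility(displacement <= {threshold} mm) = {plausibility:.2f}")
```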

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 268
728 The Maps of Meaning (MoM) Consciousness Theory

Authors: Scott Andersen

Abstract:

Perhaps simply and rather unadornedly, consciousness is having multiple goals for action and the continuous adjudication of such goals to implement action, referred to as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism’s consciousness contains fluid, nested goals. These goals are not intentionality, but intersectionality, embodiment meeting the world, i.e., Darwinian inclusive fitness or randomization, then survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a ‘match’ between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but what is fit, i.e., perceived efficiency based on one’s adaptive environment. These efficiencies are objectively arbitrary, but determine the operation and level of one’s consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiologic mechanism, morphology and behavior (action) and originates one’s goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism or action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and is the consciousness’ de facto goal adjudication process. Goal operationalization is fundamentally efficiency-based via one’s unique neuronal mapping as a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, equally underpinned by efficiencies of inclusive fitness via extreme thrownness. Perception is not a ‘frame rate,’ but Bayesian priors of efficiency based on one’s extreme thrownness. Consciousness, including human consciousness, is modular (i.e., a scalar level of richness, which builds up like building blocks) and dimensionalized (i.e., cognitive abilities become possibilities as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse the modularities and dimensions of human consciousness and animal models.

Keywords: consciousness, perception, prospection, embodiment

Procedia PDF Downloads 60
727 The Environmental Impacts of Textiles Reuse and Recycling: A Review on Life-Cycle-Assessment Publications

Authors: Samuele Abagnato, Lucia Rigamonti

Abstract:

Life-Cycle-Assessment (LCA) is an effective tool to quantify the environmental impacts of reuse models and recycling technologies for textiles. In this work, publications from the last ten years about LCA of textile waste are classified according to location, goal and scope, functional unit, waste composition, impact assessment method, impact categories, and sensitivity analysis. Twenty papers have been selected: 50% focus only on recycling, 30% only on reuse, and 15% on both, while only one paper considers only the final disposal of the waste. It is found that reuse is generally the best way to decrease the environmental impacts of textile waste management because of the avoided impacts of manufacturing a new item. In the comparison between a product made with recycled yarns and a product made from virgin materials, the first option generally has lower impacts, especially for the categories of climate change, water depletion, and land occupation, while for other categories, such as eutrophication or ecotoxicity, under certain conditions the impacts of the recycled fibres can be higher. Cultivation seems to have quite high impacts when natural fibres are involved, especially in the land use and water depletion categories, while manufacturing requires a remarkable amount of electricity, with its associated impact on climate change. In the analysis of the reuse processes, considerable importance is carried by the laundry phase, with its water consumption and the impacts related to the use of detergents. Regarding the sensitivity analysis, one of the main variables that influences the LCA results, and that needs to be further investigated in the modeling of LCA systems on this topic, is the substitution rate between recycled and virgin fibres, that is, the amount of recycled material that can be used in place of virgin material. Related to this, the yield of the recycling processes also has a strong influence on the impact results. The substitution rate is also important in the modeling of the reuse processes because it represents the number of new items whose purchase is avoided thanks to the reused ones. Another aspect that appears to have a large influence on the impacts is consumer behaviour during the use phase (for example, the number of uses between two laundry cycles). In conclusion, to gain a deeper knowledge of the life-cycle impacts of textile waste, further data and research are needed on the modeling of the substitution rate and of the use-phase habits of consumers.
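
The influence of the substitution rate and the recycling yield on the avoided-burden credit can be shown with a minimal calculation; the impact factors below are assumed values for illustration only, not results from the reviewed studies.

```python
# Minimal illustration of how the substitution rate and process yield drive the
# avoided-burden credit in textile recycling LCA. All numbers are hypothetical.
VIRGIN_FIBRE_IMPACT = 15.0    # kg CO2-eq per kg of virgin fibre (assumed)
RECYCLING_IMPACT = 4.0        # kg CO2-eq per kg of textile waste processed (assumed)

def net_impact(substitution_rate, process_yield):
    """Net climate impact per kg of textile waste sent to recycling."""
    credit = substitution_rate * process_yield * VIRGIN_FIBRE_IMPACT
    return RECYCLING_IMPACT - credit

for sub in (0.5, 0.8, 1.0):
    for yld in (0.6, 0.9):
        print(f"substitution {sub:.0%}, yield {yld:.0%}: "
              f"net impact = {net_impact(sub, yld):+.1f} kg CO2-eq/kg waste")
```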

Keywords: environmental impacts, life-cycle-assessment, textiles recycling, textiles reuse, textiles waste management

Procedia PDF Downloads 89
726 Factors Affecting Cesarean Section among Women in Qatar Using Multiple Indicator Cluster Survey Database

Authors: Sahar Elsaleh, Ghada Farhat, Shaikha Al-Derham, Fasih Alam

Abstract:

Background: Cesarean section (CS) delivery is one of the major concerns in both developing and developed countries. The rate of CS deliveries is on the rise globally, and especially in Qatar. Many socio-economic, demographic, clinical and institutional factors play an important role in cesarean section deliveries. This study aims to investigate factors affecting the prevalence of CS among women in Qatar using the UNICEF Multiple Indicator Cluster Survey (MICS) 2012 database. Methods: The study focused on the women’s questionnaire of the MICS, which was successfully distributed to 5699 participants. Following the study inclusion and exclusion criteria, a final sample of 761 women aged 19-49 years who had given birth at least once before the survey was included. A number of socio-economic, demographic, clinical and institutional factors, identified through the literature review and available in the data, were considered for the analyses. Bivariate and multivariate logistic regression models, along with multi-level modeling to investigate clustering effects, were undertaken to identify the factors that affect CS prevalence in Qatar. Results: The bivariate analyses showed that a number of categorical factors are statistically significantly associated with the dependent variable (CS). When identifying the factors in a multivariate logistic regression, the study found that only three categorical factors, ‘age of women’, ‘place of delivery’ and ‘baby weight’, appeared to significantly affect CS among women in Qatar. Although the MICS dataset is based on a cluster survey, an exploratory multi-level analysis did not show any clustering effect, i.e. no significant variation in results at the higher level (households), suggesting that all analyses at the lower level (individual respondent) are valid without any significant bias in results. Conclusion: The study found a statistically significant association between the dependent variable (CS delivery) and age of women, frequency of TV watching, assistance at birth and place of birth. These results need to be interpreted cautiously; however, they can be used as an evidence base for further research on cesarean section delivery in Qatar.
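
The multivariate step can be sketched with a logistic regression on synthetic records; the variable names, category levels and data-generating process below are assumptions chosen only to mirror the kind of model described, not the MICS data themselves.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 761   # analytic sample size reported; the records below are synthetic

df = pd.DataFrame({
    "age_group": rng.choice(["19-29", "30-39", "40-49"], n, p=[0.4, 0.45, 0.15]),
    "place_of_delivery": rng.choice(["public", "private"], n, p=[0.6, 0.4]),
    "baby_weight_kg": rng.normal(3.2, 0.5, n).round(2),
})
# Arbitrary data-generating process, for illustration only.
logit_p = (-2.0 + 0.6 * (df["age_group"] == "40-49")
           + 0.5 * (df["place_of_delivery"] == "private")
           + 0.4 * (df["baby_weight_kg"] - 3.2))
df["cesarean"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("cesarean ~ C(age_group) + C(place_of_delivery) + baby_weight_kg",
                  data=df).fit(disp=False)
print(np.exp(model.params).rename("odds ratio"))
```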

Keywords: cesarean section, factors, multiple indicator cluster survey, MICS database, Qatar

Procedia PDF Downloads 116
725 Opening of North Sea Route and Geopolitics in Arctic: Impact and Possibilities of Route

Authors: Nikkey Keshri

Abstract:

The Arctic is a polar region located in the northernmost part of the Earth. It comprises the Arctic Ocean and parts of Canada, Russia, the United States, Denmark, Norway, Sweden, Finland, and Iceland. The Arctic has vast natural resources, which can be exploited with modern technology, and the economic opening up of Russia has given new opportunities. All these states are connected with the Arctic region through economic activities, and this affects the region's ecology. The pollution problem is a serious threat to the health of people living around pollution sources. Due to the prevailing worldwide sea and air currents, the Arctic area is the fallout region for long-range transport pollutants, and in some places the concentrations exceed the levels of densely populated urban areas. The Arctic is especially vulnerable to the effects of global warming, as has become apparent in the melting sea ice in recent years. Climate models predict much greater warming in the Arctic than the global average, resulting in significant international attention to the region. Global warming has an adverse impact on the climate, indigenous people, wildlife, and infrastructure. However, several opportunities have emerged in the form of shipping routes, resources, and new territories. The shipping route through the Arctic is a reality and is currently navigable for a few weeks during summer. There are large deposits of oil and gas, minerals and fish, and the surrounding countries with Arctic coastlines are becoming quite assertive about exercising their sovereignty over the newfound wealth. The main part of the research examines how the opening of the Northern Sea Route provides opportunities or problems in the Arctic and how the region is becoming geopolitically important. It focuses on the interests of Arctic and non-Arctic states and their present and anticipated global geopolitical aims. The Northern Sea Route might open up due to climate change, and Iceland might benefit from or be affected by the situation. Efforts will be made to answer the research question: 'Is the opening of the Northern Sea Route providing opportunities or becoming a risk for the Arctic region?' Every piece of research has a structure, usually called its design. In this research, both qualitative and quantitative methods are used, drawing on various literature, maps, pie charts, etc., to answer the research question. The aim of this research is to find out the impact of the opening of the Northern Sea Route on the Arctic region and how this makes the Arctic geopolitically important. It also seeks to determine the impact of climate change and how this particular geographical area is being affected.

Keywords: climate change, geopolitics, international relation, Northern Sea Route

Procedia PDF Downloads 258
724 Bionaut™: A Minimally Invasive Microsurgical Platform to Treat Non-Communicating Hydrocephalus in Dandy-Walker Malformation

Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher

Abstract:

The Dandy-Walker malformation (DWM) represents a clinical syndrome manifesting as a combination of posterior fossa cyst, hypoplasia of the cerebellar vermis, and obstructive hydrocephalus. Anatomic hallmarks include hypoplasia of the cerebellar vermis, enlargement of the posterior fossa, and cystic dilatation of the fourth ventricle. Current treatments of DWM, including shunting of the cerebral spinal fluid ventricular system and endoscopic third ventriculostomy (ETV), are frequently clinically insufficient, require additional surgical interventions, and carry risks of infections and neurological deficits. Bionaut Labs develops an alternative way to treat Dandy-Walker Malformation (DWM) associated with non-communicating hydrocephalus. We utilize our discreet microsurgical Bionaut™ particles that are controlled externally and remotely to perform safe, accurate, effective fenestration of the Dandy-Walker cyst, specifically in the posterior fossa of the brain, to directly normalize intracranial pressure. Bionaut™ allows for complex non-linear trajectories not feasible by any conventional surgical techniques. The microsurgical particle safely reaches targets in the lower occipital section of the brain. Bionaut™ offers a minimally invasive surgical alternative to highly involved posterior craniotomy or shunts via direct fenestration of the fourth ventricular cyst at the locus defined by the individual anatomy. Our approach offers significant advantages over the current standards of care in patients exhibiting anatomical challenge(s) as a manifestation of DWM, and therefore, is intended to replace conventional therapeutic strategies. Current progress, including platform optimization, Bionaut™ control, and real-time imaging and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of ovine models, will be discussed.

Keywords: Bionaut™, cerebral spinal fluid, CSF, cyst, Dandy-Walker, fenestration, hydrocephalus, micro-robot

Procedia PDF Downloads 221
723 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities who Receive Home and Community Based Services

Authors: Carlos V. Gonzalez

Abstract:

Background: Providers of in-home and community-based services strive for the elimination of preventable harm to the people under their care as well as to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is not defined by the absence of harmful incidents, but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community-based services. Methods: Over the course of two years, we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated and to track different levels of patient status on a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smartphone applications to track and notify the team of changes in that status every 30 minutes. Finally, the information that we collected was saved in a private computer network that analyzes and graphs the outcome of each incident. Results and conclusions: The use of the COPE model resulted in: a reduction in incidents of harm; a reduction in the use of restraints and other physical interventions; an increase in direct support professionals' ability to detect and respond to health problems; improved employee alertness, with a decrease in sleeping on duty; improved caring and positive interaction between direct support professionals and the people they support; and the development of a method to globally measure and assess the effectiveness of harm prevention plans. Future applications of the COPE model for the reduction of harm to people who receive home and community-based services are discussed.

Keywords: harm, patients, resilience, safety, mental illness, disability

Procedia PDF Downloads 447
722 Conceptualizing Personalized Learning: Review of Literature 2007-2017

Authors: Ruthanne Tobin

Abstract:

As our data-driven, cloud-based, knowledge-centric lives become ever more global, mobile, and digital, educational systems everywhere are struggling to keep pace. Schools need to prepare students to become critical-thinking, tech-savvy, life-long learners who are engaged and adaptable enough to find their unique calling in a post-industrial world of work. Recognizing that no nation can afford poor achievement or high dropout rates without jeopardizing its social and economic future, the thirty-two nations of the OECD are launching initiatives to redesign schools, generally under the banner of Personalized Learning or 21st Century Learning. Their intention is to transform education by situating students as co-enquirers and co-contributors with their teachers of what, when, and how learning happens for each individual. In this focused review of the 2007-2017 literature on personalized learning, the author sought answers to two main questions: “What are the theoretical frameworks that guide personalized learning?” and “What is the conceptual understanding of the model?” Ultimately, the review reveals that, although the research area is overly theorized and under-substantiated, it does provide a significant body of knowledge about this potentially transformative educational restructuring. For example, it addresses the following questions: a) What components comprise a PL model? b) How are teachers facilitating agency (voice & choice) in their students? c) What kinds of systems, processes and procedures are being used to guide the innovation? d) How is learning organized, monitored and assessed? e) What role do inquiry based models play? f) How do teachers integrate the three types of knowledge: Content, pedagogical and technological? g) Which kinds of forces enable, and which impede, personalizing learning? h) What is the nature of the collaboration among teachers? i) How do teachers co-regulate differentiated tasks? One finding of the review shows that while technology can dramatically expand access to information, expectations of its impact on teaching and learning are often disappointing unless the technologies are paired with excellent pedagogies in order to address students’ needs, interests and aspirations. This literature review fills a significant gap in this emerging field of research, as it serves to increase conceptual clarity that has hampered both the theorizing and the classroom implementation of a personalized learning model.

Keywords: curriculum change, educational innovation, personalized learning, school reform

Procedia PDF Downloads 223
721 Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System

Authors: Zhang Kehan, Du Luona

Abstract:

Wireless power transfer (WPT) systems have been widely investigated for their advantages of convenience and safety compared to traditional plug-in charging systems. Research topics include impedance matching, circuit topology, transfer distance, etc., for improving the efficiency of WPT systems, which is a decisive factor in practical applications. Moreover, coil structures such as the spiral circular coil and the helical coil with a variable distance between two turns also have indispensable effects on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral circular or helical coils with a variable distance between two turns, and experimental results show that the efficiency of the spiral circular coil with an optimum distance between two turns is the highest. Based on the efficiency formula of a resonant WPT system with series-series topology, we introduce M²/R₁ as a measure of the efficiency of spiral circular coil and helical coil WPT systems. If the distance between two turns s is too small, proximity effect theory shows that the current induced in the conductor by the variable flux created by the current flowing in the skin of the adjacent conductor is in the opposite direction to the source current and has an appreciable impact on the coil resistance. Thus, in both coil structures, s affects the coil resistance. At the same time, when the distance between the primary and secondary coils is fixed, s also influences M to some degree. The aforementioned analysis shows that s plays an indispensable role in changing M²/R₁ and can therefore be adjusted to find the optimum value at which the WPT system achieves the highest efficiency. In practical applications of WPT systems, especially in underwater vehicles, miniaturization is a vital issue in designing the WPT system structure. Limited by the system size, the largest external radius of the spiral circular coil is 100 mm, and the largest height of the helical coil is 40 mm. In other words, the number of turns N changes with s. In the spiral circular and helical structures, the distance between every two turns in the secondary coil is set as a constant value of 1 mm to guarantee that R₂ does not vary. Based on the analysis above, we set up spiral circular coil and helical coil models using COMSOL to analyze the value of M²/R₁ when the distance between every two turns in the primary coil, sp, varies from 0 mm to 10 mm. In the two structural models, the distance between the primary and secondary coils is 50 mm, and the wire diameter is chosen as 1.5 mm. The number of turns in the secondary coil is 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s in the helical coil structure and the spiral circular coil structure are 1 mm and 2 mm, respectively, at which the value of M²/R₁ is the largest. The spiral circular coil is obviously the first choice for designing the WPT system, since the value of M²/R₁ in the spiral circular coil is larger than that in the helical coil under the same conditions.
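
For background, a commonly cited expression for the link efficiency of a series-series resonant WPT system operating at resonance is reproduced below (as an assumed textbook form, not necessarily the exact formula used by the authors); rewriting it in terms of omega²M²/R₁ shows why M²/R₁ serves as a figure of merit when the frequency, the secondary coil resistance R₂ and the load R_L are held fixed.

```latex
% Assumed textbook form of the series-series resonant link efficiency at resonance:
\eta \;=\; \frac{\omega^{2} M^{2} R_{L}}
               {\bigl(R_{2}+R_{L}\bigr)\bigl[R_{1}\bigl(R_{2}+R_{L}\bigr)+\omega^{2} M^{2}\bigr]}
     \;=\; \frac{\dfrac{\omega^{2} M^{2}}{R_{1}}\cdot\dfrac{R_{L}}{R_{2}+R_{L}}}
               {\bigl(R_{2}+R_{L}\bigr)+\dfrac{\omega^{2} M^{2}}{R_{1}}}
% For fixed \omega, R_2 and R_L, \eta increases monotonically with M^2/R_1,
% which motivates using M^2/R_1 to rank the coil geometries.
```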

Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer

Procedia PDF Downloads 345
720 Measuring Self-Regulation and Self-Direction in Flipped Classroom Learning

Authors: S. A. N. Danushka, T. A. Weerasinghe

Abstract:

The diverse necessities of instruction can be addressed effectively with the support of new dimensions of ICT-integrated learning such as blended learning, which is a combination of face-to-face and online instruction that ensures greater flexibility in student learning and congruity of course delivery. As blended learning has become the 'new normal' in education, many experimental and quasi-experimental research studies provide ample evidence of its successful implementation in many fields of study, but it is hard to justify whether blended learning could work similarly in the delivery of technology-teacher development programmes (TTDPs). The present study addresses this particular research uncertainty, and, having considered existing research approaches, the study methodology was designed to determine efficient instructional strategies for flipped classroom learning in TTDPs. In a quasi-experimental pre-test and post-test design with a mixed-method research approach, the major study objective was tested with two heterogeneous samples (N = 135) identified in a virtual learning environment at a Sri Lankan university. A non-randomized, informal 'before-and-after without control group' design was employed, and two data collection methods, identical pre-tests and post-tests and Likert-scale questionnaires, were used in the study. Two selected instructional strategies, self-directed learning (SDL) and self-regulated learning (SRL), were tested in an appropriate instructional framework with two heterogeneous samples (pre-service and in-service teachers). Data were statistically analyzed, and the more efficient instructional strategy was determined via t-tests, ANOVA, and ANCOVA. The effectiveness of the two instructional strategy implementation models was determined via multiple linear regression analysis. ANOVA (p < 0.05) shows that age, prior educational qualifications, gender, and work experience do not impact the learning achievements of the two diverse groups of learners when the instructional strategy is changed. ANCOVA (p < 0.05) analysis shows that SDL is more efficient than SRL for the two diverse groups of technology-teachers. Multiple linear regression (p < 0.05) analysis shows that the staged self-directed learning (SSDL) model and the four-phase model of motivated self-regulated learning (COPES Model) are efficient in the delivery of course content in flipped classroom learning.
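
The ANCOVA step can be illustrated with a short sketch comparing post-test scores between the SDL and SRL groups while adjusting for pre-test scores; the data are synthetic and the assumed group effect is arbitrary, serving only to show the form of the analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n_sdl = 67                              # roughly half of the reported N = 135; synthetic data

df = pd.DataFrame({
    "strategy": ["SDL"] * n_sdl + ["SRL"] * (135 - n_sdl),
    "pretest": rng.normal(55, 10, 135).round(1),
})
# Arbitrary data-generating process: SDL assumed to add a small gain, for illustration.
df["posttest"] = (0.7 * df["pretest"] + 20
                  + 5 * (df["strategy"] == "SDL") + rng.normal(0, 6, 135)).round(1)

# One-way ANCOVA: post-test by instructional strategy, adjusting for pre-test scores.
model = smf.ols("posttest ~ C(strategy) + pretest", data=df).fit()
print(anova_lm(model, typ=2))
```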

Keywords: COPES model, flipped classroom learning, self-directed learning, self-regulated learning, SSDL model

Procedia PDF Downloads 197
719 How Defining the Semi-Professional Journalist Is Creating Nuance and a Familiar Future for Local Journalism

Authors: Ross Hawkes

Abstract:

The rise of hyperlocal journalism and its role in the wider local news ecosystem has been debated across both industry and academic circles, particularly via the lens of structures, models, and platforms. The nuances within this sphere are now allowing for the semi-professional journalist to emerge as a key component of the landscape at the fringes of journalism. By identifying and framing the labour of these individuals against a backdrop of change within the professional local newspaper publishing industry, it is possible to address wider debates around the ways in which participants enter and exist in the space between amateur and professional journalism. Considerations around prior experience and understanding allow us to better shape and nuance the hyperlocal landscape in order to understand the challenges and opportunities facing local news via this emergent form of semi-professional journalistic production. The disruption to local news posed by the changing nature of audiences, long-established methods of production, the rise of digital platforms, and increased competition in the online space has brought questions around the very nature and identity of local news, as well as the uncertain future and precarity which surrounds it. While the hyperlocal sector has long been associated as a potential future direction for local journalism through an alternative approach to reporting and as a mechanism for participants to pass between amateurism towards professionalism, there is now a semi-professional space being occupied in a different way. Those framed as semi-professional journalists are not necessarily transiting through this space at the fringes of the professional industry; instead, they are occupying and claiming the space as an entity within itself. By framing the semi-professional journalist through a lens of prior experience and knowledge of the sector, it is possible to identify how their motivations vary from the traditional metrics of financial gain, personal progression, or a sense of civic or community duty. While such factors may be by-products of their labour, the desire of such reporters to recreate and retain experiences and values from their past as a participant or consumer is the central basis of the framework to define the semi-professional journalist. Through understanding the motivations, aims and factors shaping the semi-professional journalist within the wider journalism and hyperlocal journalism debates and landscape, it will be possible to better frame the role they can play in sustaining the longer term provision of local news and addressing broader issues and factors within the sector.

Keywords: hyperlocal, journalism, local news, semi-professionalism

Procedia PDF Downloads 28
718 A Textile-Based Scaffold for Skin Replacements

Authors: Tim Bolle, Franziska Kreimendahl, Thomas Gries, Stefan Jockenhoevel

Abstract:

The therapeutic treatment of extensive, deep wounds is limited. Autologous split-skin grafts are used as the so-called 'gold standard'. The most common deficits are the defects at the donor site, the risk of scarring, and the limited availability and quality of the autologous grafts. The aim of this project is a tissue-engineered dermal-epidermal skin replacement that overcomes the limitations of the gold standard. A key requirement for the development of such a three-dimensional implant is the formation of a functional capillary-like network inside the implant to ensure a sufficient nutrient and gas supply. Tailored three-dimensional warp-knitted spacer fabrics are used to reinforce the mechanically weak fibrin gel-based scaffold and, further, to create a directed in vitro pre-vascularization along the parallel-oriented pile yarns within a co-culture. In this study, various three-dimensional warp-knitted spacer fabrics were developed in a factorial design to analyze the influence of machine parameters such as the stitch density and the pattern of the fabric on the scaffold performance, and further to determine suitable parameters for successful fibrin gel incorporation and a physiological performance of the scaffold. The fabrics were manufactured on a Karl Mayer double-bar raschel machine DR 16 EEC/EAC. A fine machine gauge of E30 was used to ensure a high pile yarn density for sufficient nutrient, gas and waste exchange. In order to ensure a high mechanical stability of the graft, the fabrics were made of biocompatible PVDF yarns. Key parameters such as the pore size, porosity and stress/strain behavior were investigated under standardized, controlled climate conditions. The influence of the input parameters on the mechanical and morphological properties, as well as the ability to incorporate fibrin gel into the spacer fabric, was analyzed. Subsequently, the pile yarns of the spacer fabrics were colonized with Human Umbilical Vein Endothelial Cells (HUVEC) to analyze the ability of the fabric to further function as a guiding structure for directed vascularization. The cells were stained with DAPI and investigated using fluorescence microscopy. The analysis revealed that the stitch density and the binding pattern have a strong influence on both the mechanical and morphological properties of the fabric. As expected, the incorporation of the fibrin gel was significantly improved with higher pore sizes and porosities, whereas the mechanical strength decreased. Furthermore, the colonization trials revealed a high cell distribution and density on the pile yarns of the spacer fabrics. For a tailored reinforcing structure, the minimum porosity and pore size that still ensure a complete incorporation of the reinforcing structure into the fibrin gel matrix need to be evaluated. That will enable a mechanically stable dermal graft with a dense vascular network for a sufficient nutrient and oxygen supply of the cells. The results are promising for subsequent research in the field of reinforcing mechanically weak biological scaffolds and developing functional three-dimensional scaffolds with an oriented pre-vascularization.
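
The factorial design over machine parameters can be sketched by enumerating all level combinations; the factor levels below are hypothetical, since the abstract does not report the actual settings used on the machine.

```python
import itertools

# Hypothetical factor levels for the spacer-fabric factorial design; the actual
# levels used on the DR 16 EEC/EAC machine are assumptions for illustration.
factors = {
    "stitch_density_per_cm": [10, 14, 18],
    "binding_pattern": ["tricot", "locknit", "satin"],
}

design = list(itertools.product(*factors.values()))
print(f"full factorial: {len(design)} fabric variants")
for run, levels in enumerate(design, start=1):
    settings = ", ".join(f"{name}={value}" for name, value in zip(factors, levels))
    print(f"run {run:02d}: {settings}")
```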

Keywords: fibrin-gel, skin replacement, spacer fabric, pre-vascularization

Procedia PDF Downloads 257
717 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications

Authors: Alissa Lefebre

Abstract:

It might seem intuitive to hope for a fast decision on the patent grant. After all, a granted patent provides you with a monopoly position, which allows you to obstruct others from using your technology. However, this does not take into account the strategic advantages one can obtain from keeping a patent application pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of patent protection is only decided upon at grant, the pendency period introduces uncertainty amongst rivals. This uncertainty entails not knowing whether the patent will actually be granted and what the scope of protection will be. Consequently, rivals can only rely on limited and uncertain information when deciding which technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this may allow applicants to obtain a de facto monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has downsides. First of all, it affects the workload of the examiners at the patent office. As the number of patent filings has been increasing over the last decades, strategies that increase this number even further are not desirable from the patent examiners' point of view. Secondly, a pending patent does not provide the protection of a granted patent, thus creating uncertainty not only for rivals but also for the applicant. Consequently, the European Patent Office (EPO) has launched a “raising the bar” initiative in which it decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, introduced in 2010, imposed a time limit under which divisional applications could only be filed within 24 months of the first communication with the patent office. However, after carrying out a user feedback survey, the EPO abolished this rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result compared to the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to curb strategic patenting and thus decrease the pendency time.
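
As a hedged illustration of the survival-analysis approach mentioned in the abstract (not the authors' actual models), the sketch below fits a Cox proportional hazards model to patent pendency data using the lifelines package; the input file and every column name are hypothetical.

```python
# Illustrative sketch only: a Cox proportional hazards model of patent pendency.
# The dataframe and its columns (pendency_months, granted, is_divisional,
# post_2010_rule, post_2014_fee) are invented for illustration.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("epo_applications.csv")  # hypothetical export of EPO filing data

cph = CoxPHFitter()
cph.fit(
    df[["pendency_months", "granted", "is_divisional", "post_2010_rule", "post_2014_fee"]],
    duration_col="pendency_months",  # time from filing to grant or censoring
    event_col="granted",             # 1 = granted, 0 = still pending/withdrawn (censored)
)
cph.print_summary()  # a hazard ratio below 1 for is_divisional would indicate prolonged pendency
```

A parametric or discrete-time survival specification could be swapped in the same way to mirror a comparison across several survival models.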

Keywords: divisional applications, regulatory changes, strategic patenting, EPO

Procedia PDF Downloads 128
716 Nutrition Program Planning Based on Local Resources in Urban Fringe Areas of a Developing Country

Authors: Oktia Woro Kasmini Handayani, Bambang Budi Raharjo, Efa Nugroho, Bertakalswa Hermawati

Abstract:

Obesity prevalence and severe malnutrition in Indonesia have increased from 2007 to 2013. The utilization of local resources in nutritional program planning can improve program efficiency and help reach program goals. The aim of this research is to plan a nutrition program based on local resources for urban fringe areas in a developing country. This research used a qualitative approach, with a focus on local resources including social capital, the social system and the cultural system. The study was conducted in Mijen, Central Java, one of the urban fringe areas in Indonesia. Purposive and snowball sampling techniques were used to determine participants. A total of 16 participants took part in the study. Observation, interviews, focus group discussions, SWOT analysis, brainstorming and the Miles and Huberman model were used to analyze the data. We identified several local resources, such as the contributions of nutrition cadres, social organizations and social financial resources, as well as the cultural and social systems. The outstanding contribution of nutrition cadres is their participation and creativity in improving nutritional status. In addition, social organizations, such as the integrated health center for children (Pos Pelayanan Terpadu), can be engaged in nutrition program planning. This center is supported by the House of Nutrition to assist in nutrition program planning and to provide social support to families, neighbors and communities as social capital. The study also found that cultural systems that show appreciation for well-nourished children are a better way to address the problem of balanced nutrition. Social systems such as teamwork and mutual cooperation can also be potential resources to support nutritional programs and overcome associated problems. The impact of development in urban areas, such as the introduction of more green areas which improve the perceived status of local people, as well as new health services facilitated by people and companies, can also be a resource to support nutrition programs. Local resources in urban fringe areas can be used in the planning of nutrition programs. We recommend expanding partnerships with all stakeholders and empowering the community by optimizing the roles of nutrition care centers for children in nutrition program planning.

Keywords: developing country, local resources, nutrition program, urban fringe

Procedia PDF Downloads 251
715 Cytokine Profiling in Cultured Endometrial Cells after Hormonal Treatment

Authors: Mark Gavriel, Ariel J. Jaffa, Dan Grisaru, David Elad

Abstract:

The human endometrium-myometrium interface (EMI) is the uterine inner barrier without a separating layer. It is composed of endometrial epithelial cells (EEC) and endometrial stromal cells (ESC) in the endometrium and myometrial smooth muscle cells (MSMC) in the myometrium. The EMI undergoes structural remodeling during the menstrual cycle, which is essential for human reproduction. Recently, we co-cultured a layer-by-layer in vitro model of EEC, ESC and MSMC on a synthetic membrane for mechanobiology experiments. We also treated the model with progesterone and β-estradiol in order to mimic the in vivo receptive uterus. In the present study, we analyzed the cytokine profile in a single EEC layer of the hormonally treated in vitro model of the EMI. The methodology of this research is based on simple tissue engineering. First, we cultured commercial EEC (RL95-2, ATCC® CRL-1671™) in a 24-well plate. Then, we applied a hormonal stimulation protocol with 17-β-estradiol and progesterone in time-dependent concentrations that mimic the human menstrual cycle. We collected cell supernatant samples from the control, pre-ovulation, ovulation and post-ovulation periods for analysis of the secreted proteins and cytokines. The cytokine profiling was performed using the Proteome Profiler Human XL Cytokine Array Kit (R&D Systems, Inc., USA), which can detect 105 human soluble cytokines. The relative quantification of all the cytokines will be analyzed using the xMAP (Luminex) platform. We conducted a fishing expedition with the four Proteome Profiler membranes. We processed the images, quantified the spot intensities and normalized these values by the negative control and reference spots on each membrane. Analyses of the relative quantities that changed by more than 5% relative to the control points of the kit clearly showed significant changes in the cytokine levels of the inflammation and angiogenesis pathways. Analysis of tissue-engineered models of the uterine wall will enable deeper investigation of molecular and biomechanical aspects of early reproductive stages (e.g., the window of implantation) or the development of pathologies.
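
The following is a minimal sketch, under stated assumptions, of the spot-quantification and 5%-change screening step described above; it is not the authors' image-processing pipeline, and the file and column names are hypothetical.

```python
# Hedged sketch: normalize array spot intensities to the membrane's negative-control and
# reference spots, then flag cytokines changing by more than 5% versus the control period.
# All file and column names are hypothetical.
import numpy as np
import pandas as pd

raw = pd.read_csv("membrane_spot_intensities.csv", index_col="cytokine")
neg = raw.loc["negative_control"].mean()   # background level on the membrane
ref = raw.loc["reference_spot"].mean()     # positive reference spots

norm = (raw - neg) / (ref - neg)           # scale every spot to the reference range
norm = norm.drop(index=["negative_control", "reference_spot"], errors="ignore")

control = norm["control"]                                  # untreated control column
periods = ["pre_ovulation", "ovulation", "post_ovulation"]
change = norm[periods].sub(control, axis=0).div(control.replace(0, np.nan), axis=0)

hits = change[(change.abs() > 0.05).any(axis=1)]           # >5% relative change in any period
print(hits.round(3))
```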

Keywords: tissue-engineering, hormonal stimuli, reproduction, multi-layer uterine model, progesterone, β-estradiol, receptive uterine model, fertility

Procedia PDF Downloads 132
714 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. So far, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are initial approaches to combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like Scrum on a project management level. However, the question of which development scopes are best realized with empirical-adaptive versus deterministic-normative approaches has received little attention. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company’s technological capability, the prototype manufacturability and the potential solution space, as well as external factors like market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First, each internal and external factor is rated in terms of its importance for the overall development task. Second, each requirement is evaluated for every internal and external factor with respect to its suitability for empirical-adaptive development. Finally, the totals for the internal and external factors are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a final step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
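
To make the three derivation steps and the subsequent clustering concrete, here is a minimal NumPy sketch; it is not the authors' implementation, and the factor weights, ratings and two-cluster split are invented for illustration.

```python
# Hedged sketch: compute a per-requirement Agile-Indicator as importance-weighted sums of
# internal and external factor ratings, then group requirements with fuzzy c-means (FCM).
# All data and weights below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# ratings[i, j]: suitability of requirement i for empirical-adaptive development with
# respect to factor j (technology ability, prototype manufacturability, solution space,
# market accuracy, relevance, volatility), on an illustrative 1..5 scale
ratings = rng.integers(1, 6, size=(12, 6)).astype(float)
importance = np.array([0.20, 0.15, 0.15, 0.20, 0.15, 0.15])   # step 1: factor importance
internal, external = slice(0, 3), slice(3, 6)

agile_indicator = (
    ratings[:, internal] @ importance[internal]     # steps 2-3: weighted internal sum ...
    + ratings[:, external] @ importance[external]   # ... plus weighted external sum
)

def fuzzy_c_means(x, c=2, m=2.0, iters=100, tol=1e-6):
    """Plain NumPy fuzzy c-means on a one-dimensional feature (the Agile-Indicator)."""
    x = x.reshape(-1, 1)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]   # (c, 1) cluster centers
        d = np.abs(x - centers.T) + 1e-12                # (n, c) distances
        inv = d ** (-2.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)     # standard FCM membership update
        if np.abs(new_u - u).max() < tol:
            return centers.ravel(), new_u
        u = new_u
    return centers.ravel(), u

centers, membership = fuzzy_c_means(agile_indicator, c=2)
print("cluster centers:", centers.round(2))   # e.g. empirical-adaptive vs. deterministic-normative scopes
print("memberships:\n", membership.round(2))
```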

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 246
713 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of New Media, and particularly Social Media, through fixed or mobile devices has changed in a staggering way our perception of what is "intimate" and "safe" and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as from the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending mechanisms of personal information management in interpersonal communication, which can be applied to both online (Computer-Mediated) and offline (Face-To-Face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries spanning almost a decade. The main conclusions include: (1) There is a clear and evidenced shift in users’ perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras. The role of Social Media in this shift was catalytic. (2) Basic Web 2.0 applications dramatically changed the nature of the Internet itself, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and Social Media brought about a significant change in the concept of the "audience" we address in interpersonal communication. The previous "general and unknown audience" of personal home pages was converted into an "individual & personal" audience chosen by the user under various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed in a fundamental way. (5) The different features of the mediated environment of online communication and the critical changes that have occurred since the advent of Web 2.0 lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 372
712 Locus of Control and Self-Esteem as Predictors of Maternal and Child Healthcare Services Utilization in Nigeria

Authors: Josephine Aikpitanyi, Friday Okonofua, Lorretta Ntoimo, Sandy Tubeuf

Abstract:

Every day, 800 women die from conditions related to pregnancy and childbirth, resulting in an estimated 300,000 maternal deaths worldwide per year. Over 99 percent of all maternal deaths occur in developing countries, with more than half of them occurring in sub-Saharan Africa. Nigeria, the most populous nation in sub-Saharan Africa, bears a significant burden of poor maternal and child health outcomes, with a maternal mortality rate of 917 per 100,000 live births and a child mortality rate of 117 per 1,000 live births. While several studies have documented that financial barriers disproportionately discourage poor women from seeking needed maternal and child healthcare, other studies have indicated otherwise. Evidence shows that there are instances where health facilities with skilled healthcare providers exist, and yet maternal and child health outcomes remain abysmally poor, indicating the presence of non-cognitive and behavioural factors that may affect the utilization of healthcare services. This study investigated the influence of locus of control and self-esteem on the utilization of maternal and child healthcare services in Nigeria. Specifically, it explored the differences in utilization of antenatal care, skilled birth care, postnatal care, and child vaccination by women with an internal versus an external locus of control and by women with high versus low self-esteem. We collected information on the non-cognitive traits of 1,411 randomly selected women, along with information on utilization of the various indicators of maternal and child healthcare. Estimating logistic regression models for various components of healthcare services utilization, we found that women’s internal locus of control was a significant predictor of utilization of antenatal care, skilled birth care, and completion of child vaccination. We also found that having high self-esteem was a significant predictor of utilization of antenatal care, postnatal care, and completion of child vaccination after adjusting for other control variables. By improving our understanding of non-cognitive traits as possible barriers to maternal and child healthcare utilization, our findings offer important insights for enhancing participant engagement in intervention programs that are initiated to improve maternal and child health outcomes in low- and middle-income countries.
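
A minimal sketch of the kind of logistic regression described in the abstract is shown below; it is not the authors' specification, and the dataset and variable names (used_anc, internal_locus, high_self_esteem, education, wealth_q) are hypothetical stand-ins for the survey measures.

```python
# Hedged sketch: logistic regression of antenatal care use on locus of control and
# self-esteem, adjusting for illustrative controls. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_women.csv")  # hypothetical extract, one row per respondent

model = smf.logit(
    "used_anc ~ internal_locus + high_self_esteem + age + C(education) + C(wealth_q)",
    data=df,
).fit()
print(model.summary())
print(np.exp(model.params).round(2))  # odds ratios are easier to interpret than log-odds
```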

Keywords: behavioural economics, health-seeking behaviour, locus of control and self-esteem, maternal and child healthcare, non-cognitive traits, and healthcare utilization

Procedia PDF Downloads 165
711 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Measured data for precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations were validated in MATLAB and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical hydrodynamic model can evaluate the effects of wind stress exerted on the lake surface, which affects the lake water level. The model can also evaluate the effects of expected climate change, as manifested in changes to rainfall over the catchment area of Lake Victoria in the future.
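
As a hedged illustration of the simple linear outflow control law and the near-balance of rain and evaporation described above (not the COMSOL shallow-water model itself), the sketch below integrates a lumped daily water balance for the lake; every rate and coefficient is a placeholder.

```python
# Lumped water-balance sketch: dV/dt = (rain - evaporation) * A + inflow - outflow,
# with the outflow given by a linear control law on the mean water level.
# All numerical values are illustrative placeholders, not calibrated model inputs.
import numpy as np

A = 68_800e6          # lake surface area [m^2] (68,800 km^2, from the abstract)
h0 = 0.0              # reference mean water level [m]
a = 2_000.0           # hypothetical outflow coefficient [m^3/s per m of level]
dt = 86_400.0         # daily time step [s]
days = 50 * 365       # fifty-year horizon, as in the simulation described above

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.85e-8, days)   # hypothetical rainfall rate [m/s]
evap = np.full(days, 5.4e-8)           # hypothetical evaporation rate [m/s]
q_in = np.full(days, 1_000.0)          # hypothetical total river inflow [m^3/s]

h = np.zeros(days)                     # water level anomaly relative to h0 [m]
for t in range(1, days):
    q_out = max(0.0, a * (h[t - 1] - h0))            # linear control law on mean level
    dV = (rain[t] - evap[t]) * A + q_in[t] - q_out   # net volume rate [m^3/s]
    h[t] = h[t - 1] + dV * dt / A

print(f"final level anomaly: {h[-1]:.2f} m")
print(f"mean outflow: {a * np.clip(h - h0, 0.0, None).mean():.0f} m^3/s")
```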

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 227
710 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources

Authors: Alana E. Jansen

Abstract:

With such a large proportion of organisations relying on team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, there have been conflicting findings as to the effects, particularly when examined alongside other team factors. To broaden the theoretical perspectives on this topic and potentially explain some of the inconsistencies in research findings left open by various other models of team effectiveness and high-performing teams, the present study uses the Job Demands-Resources model, typically applied to burnout and engagement, as a framework to examine how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. This study used a virtual interview design in which participants were asked to both rate and describe their experiences in one high-performing and one low-performing team across several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed, which combined Likert-style ratings with exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and those from different specialties, roles, educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analysis are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 completed interviews. Analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity. Preliminary results suggest that high-performing and low-performing teams differ in their perceptions of the type and range of both demands and resources. The current research is likely to offer contributions to both theory and practice. The preliminary findings suggest there is a range of demands and resources that vary between high- and low-performing teams, factors which may play an important role in team effectiveness research going forward. The findings may assist in explaining some of the more complex interactions between factors experienced in the team environment, making further progress towards understanding why only some teams achieve high-performance status.

Keywords: diversity, high-performing teams, job demands and resources, team effectiveness

Procedia PDF Downloads 187
709 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA

Authors: Marek Dosbaba

Abstract:

Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using an SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or for each segment. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also provides a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
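
To illustrate how a tabular particle-by-particle export of the kind described above might feed a dashboard or geometallurgical model, here is a minimal pandas sketch; the file name and column names are hypothetical and do not reflect the actual TIMA export schema.

```python
# Hedged sketch: post-processing a hypothetical particle-by-particle CSV export.
# Column names (mineral, area_um2, equivalent_diameter_um) are invented for illustration.
import pandas as pd

grains = pd.read_csv("tima_particle_export.csv")

# user-defined property example: modal mineralogy by area
modal = (
    grains.groupby("mineral")["area_um2"].sum()
    .pipe(lambda s: 100.0 * s / s.sum())
    .rename("area_percent")
    .sort_values(ascending=False)
)
print(modal.round(2))

# filtering example for a geometallurgical model input: coarse chalcopyrite grains
coarse_cpy = grains[(grains["mineral"] == "chalcopyrite") & (grains["equivalent_diameter_um"] > 50)]
print(f"{len(coarse_cpy)} chalcopyrite grains above 50 um equivalent diameter")
```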

Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data

Procedia PDF Downloads 110