Search results for: community based development
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 40156

2536 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and by bending moments that depend on the operating conditions and maneuvers of the helicopter. To ensure that maximum stress levels do not exceed the fatigue limit of the material and to prevent damage, a numerical analysis approach can be applied through the Finite Element Method. Therefore, in this paper, fatigue analysis of a horizontal tail model is studied numerically to predict high-cycle and low-cycle fatigue life under the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical elements in the main load-carrying structural components of the model with rivet holes are identified as a post-processing step, since critical regions with high stress values serve as the input for the fatigue life calculation. Once the maximum stress at the critical element and its mean and alternating components are obtained, they are compared with the endurance limit using the Soderberg approach. The constant-life straight line provides the limit for combinations of mean and alternating stresses. A life calculation based on the S-N (stress versus number of cycles) curve is also applied with fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results establish the adequacy of the model's design with respect to fatigue strength and the number of cycles the model can withstand at the calculated stress. The effect of correctly identifying the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. Where a low life is predicted, alternative design solutions are developed, and flight hours can be estimated for fatigue-safe operation of the model.
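As a hedged illustration of the two criteria named above, the sketch below combines a Soderberg safety-factor check with a Basquin-form S-N life estimate. All stress values and material constants are invented placeholders, not data from the paper.

```python
import math

# Illustrative sketch (not the paper's actual model or data): Soderberg
# mean-stress check plus a Basquin-form S-N life estimate for one stress
# cycle at a rivet hole. All stresses and material constants are invented.

def soderberg_safety_factor(sigma_a, sigma_m, endurance_limit, yield_strength):
    """Soderberg line: sigma_a/Se + sigma_m/Sy = 1/n; returns the factor n."""
    return 1.0 / (sigma_a / endurance_limit + sigma_m / yield_strength)

def basquin_life(sigma_a, sigma_f_prime, b):
    """Cycles to failure from Basquin's relation sigma_a = sigma_f' * (2N)^b."""
    return 0.5 * (sigma_a / sigma_f_prime) ** (1.0 / b)

# Placeholder values loosely typical of an aluminium alloy (MPa)
n = soderberg_safety_factor(sigma_a=120.0, sigma_m=60.0,
                            endurance_limit=170.0, yield_strength=450.0)
N = basquin_life(sigma_a=120.0, sigma_f_prime=900.0, b=-0.11)
print(f"Soderberg safety factor: {n:.2f}, estimated life: {N:.2e} cycles")
```

A factor n above 1 means the mean/alternating stress combination lies inside the Soderberg constant-life line; the S-N estimate then gives the cycle count for the fully reversed case.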

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 145
2535 Correlation between the Ratios of House Dust Mite-Specific IgE/Total IgE and Asthma Control Test Score as a Biomarker of Immunotherapy Response Effectiveness in Pediatric Allergic Asthma Patients

Authors: Bela Siska Afrida, Wisnu Barlianto, Desy Wulandari, Ery Olivianto

Abstract:

Background: Allergic asthma, caused by IgE-mediated allergic reactions, remains a global health issue with high morbidity and mortality rates. Immunotherapy is the only etiology-based approach to treating asthma, but no standard biomarkers have been established to evaluate the therapy's effectiveness. This study aims to determine the correlation between the ratio of serum HDM-specific IgE to total IgE and the Asthma Control Test (ACT) score as a biomarker of the response to immunotherapy in pediatric allergic asthma patients. Patients and Methods: This retrospective cohort study involved 26 pediatric allergic asthma patients who underwent HDM-specific subcutaneous immunotherapy for 14 weeks at the Pediatric Allergy Immunology Outpatient Clinic at Saiful Anwar General Hospital, Malang. Serum levels of HDM-specific IgE and total IgE were measured before and after immunotherapy using the chemiluminescence immunoassay and enzyme-linked immunosorbent assay (ELISA) methods. Changes in asthma control were assessed using the ACT score. The Wilcoxon signed-rank test and Spearman correlation test were used for data analysis. Results: There were 14 boys and 12 girls with a mean age of 6.48 ± 2.54 years. Serum HDM-specific IgE levels decreased significantly from before immunotherapy [9.88 ± 5.74 kUA/L] to 14 weeks after immunotherapy [4.51 ± 3.98 kUA/L], p < 0.001. Serum total IgE levels decreased significantly from before immunotherapy [207.6 ± 120.8 IU/mL] to 14 weeks after immunotherapy [109.83 ± 189.39 IU/mL], p < 0.001. The ratio of serum HDM-specific IgE to total IgE decreased significantly from before immunotherapy [0.063 ± 0.05] to 14 weeks after immunotherapy [0.041 ± 0.039], p = 0.012. There was also a significant increase in ACT scores from before to after immunotherapy (15.5 ± 1.79 and 20.96 ± 2.049, respectively; p < 0.001).
The correlation test showed a weak negative correlation between the HDM-specific IgE/total IgE ratio and the ACT score (p = 0.034, r = -0.29). Conclusion: This study showed that a decrease in HDM-specific IgE levels, total IgE levels, and the HDM-specific IgE/total IgE ratio, together with an increase in ACT score, was observed after 14 weeks of HDM-specific subcutaneous immunotherapy. The weak negative correlation between the HDM-specific IgE/total IgE ratio and the ACT score suggests that this ratio can serve as a potential biomarker of the effectiveness of immunotherapy in pediatric allergic asthma patients.
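The correlation step described above can be illustrated with a stdlib-only Spearman rank correlation; the data points below are invented placeholders, not patient data.

```python
# Illustrative sketch (invented data, not the study's patients): Spearman
# rank correlation of the kind used above to relate the IgE ratio to the
# ACT score, computed by hand with average ranks for ties.

def ranks(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

ige_ratio = [0.06, 0.05, 0.03, 0.04, 0.08, 0.02]  # placeholder ratios
act_score = [18, 20, 23, 22, 19, 24]              # placeholder ACT scores
print(f"rho = {spearman(ige_ratio, act_score):.2f}")
```

A negative rho, as in the study, indicates that higher ratios tend to accompany lower ACT scores.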

Keywords: HDM-specific IgE/total IgE ratio, ACT score, immunotherapy, allergic asthma

Procedia PDF Downloads 70
2534 Spatial Distribution of Virus-Transmitting Aphids of Plants in Al Bahah Province, Saudi Arabia

Authors: Sabir Hussain, Muhammad Naeem, Yousif Aldryhim, Susan E. Halbert, Qingjun Wu

Abstract:

Plant viruses annually cause severe economic losses in crop production, and globally different aphid species are responsible for transmitting such viruses. Aphids are also serious pests of trees and agricultural crops. Al Bahah Province, Kingdom of Saudi Arabia (KSA), hosts many native and introduced plant species and has a temperate climate that provides ample habitats for aphids. In this study, we surveyed virus-transmitting aphids in the province to map their spatial distributions and hot spot areas for targeted control strategies. During our fifteen-month survey in Al Bahah Province, three hundred and seventy samples of aphids were collected using both beating sheets and yellow water pan traps. Fifty-four aphid species representing 30 genera in four families were recorded from the province. Alarmingly, 35 of the recorded aphid species are virus-transmitting. The most common virus-transmitting aphid species, based on the number of collected samples, were Macrosiphum euphorbiae (Thomas, 1878), Brachycaudus rumexicolens (Patch, 1917), Uroleucon sonchi (Linnaeus, 1767), Brachycaudus helichrysi (Kaltenbach, 1843), and Myzus persicae (Sulzer, 1776), with 66, 24, 23, 22, and 20 samples, respectively. The widest ranges of plant hosts were found for M. euphorbiae (39 plant species), B. helichrysi (12 plant species), M. persicae (12 plant species), B. rumexicolens (10 plant species), and U. sonchi (9 plant species). Based on aphid abundance, the hot spot areas were in the cities of Al-Baha, Al Mekhwah, and Biljarashi. This study indicates that Al Bahah Province has relatively rich aphid diversity, owing to its relatively high plant diversity under favorable climatic conditions.
ArcGIS tools can help biologists implement targeted control strategies against these pests within integrated pest management, ultimately saving money and time.

Keywords: Al Bahah province, aphid-virus interaction, biodiversity, geographic information system

Procedia PDF Downloads 184
2533 Influence of Spelling Errors on English Language Performance among Learners with Dysgraphia in Public Primary Schools in Embu County, Kenya

Authors: Madrine King'endo

Abstract:

This study dealt with the influence of spelling errors on English language performance among learners with dysgraphia in public primary schools in West Embu, Embu County, Kenya. The study aimed to investigate the influence of spelling errors on English language performance among class three pupils with dysgraphia in public primary schools. The objectives of the study were to identify the spelling errors that learners with dysgraphia make when writing English words and to classify those errors. Further, the study sought to establish how the spelling errors affect language performance among the participants and to suggest remediation strategies that teachers could use to address the errors. The study could provide stakeholders with relevant information on writing skills that could help in developing a responsive curriculum to accommodate the teaching and learning needs of learners with dysgraphia, and help ensure that training in teacher training colleges is tailored to the writing needs of pupils with dysgraphia. The study was carried out in Embu County because the researcher found no study in the related literature concerning the influence of spelling errors on English language performance among learners with dysgraphia in public primary schools in the area. Moreover, besides being populated enough to provide the study sample, the area was fairly cosmopolitan, allowing generalization of the study findings. The study assumed that the sampled schools would have class three pupils with dysgraphia who exhibited written spelling errors. The study was guided by two spelling approaches, the connectionist simulation of the spelling process and the orthographic autonomy hypothesis, with a view to explaining how participants with learning disabilities spell written words.
Data were collected through interviews, pupils' exercise books and progress records, and a spelling test constructed by the researcher based on the spelling scope set for class three pupils by the Ministry of Education in the primary education syllabus. The study relied on random sampling techniques to identify general and specific participants. Since the study used schoolchildren as participants, voluntary consent was sought from the pupils themselves, their teachers, and the school head teachers, who act as their caretakers in a school setting.

Keywords: dysgraphia, writing, language, performance

Procedia PDF Downloads 154
2532 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait

Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh

Abstract:

In order to meet the increasing demand for housing for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project among the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care, and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken on the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate demand in the residential neighborhoods of the city. This paper aims to offer a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. It presents the transportation demand modeling results used to inform the planning of a multi-modal transportation system for Al Mutlaa City. The paper also presents the household survey data collection effort, undertaken using GPS devices (a first in Kuwait) and notebook-computer-based digital survey forms, to interview a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base-year model calibration and validation process.
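Trip distribution in strategic demand models of this kind is commonly estimated with a doubly constrained gravity model. The sketch below uses invented zone data, not the Kuwait model's, to show the Furness balancing idea.

```python
import math

# Illustrative sketch (invented zone data, not the Kuwait model): a doubly
# constrained gravity model distributing trips T_ij proportional to
# P_i * A_j * exp(-beta * c_ij), balanced with Furness iterations so that
# row sums match productions and column sums match attractions.

def gravity_model(productions, attractions, impedance, beta=0.1, iters=50):
    n = len(productions)
    trips = [[productions[i] * attractions[j] * math.exp(-beta * impedance[i][j])
              for j in range(n)] for i in range(n)]
    for _ in range(iters):
        for i in range(n):                      # scale rows to productions
            s = sum(trips[i])
            if s > 0:
                trips[i] = [t * productions[i] / s for t in trips[i]]
        for j in range(n):                      # scale columns to attractions
            s = sum(trips[i][j] for i in range(n))
            if s > 0:
                for i in range(n):
                    trips[i][j] *= attractions[j] / s
    return trips

P = [1000, 500]          # trip productions per zone
A = [800, 700]           # trip attractions per zone (same total as P)
C = [[5, 20], [20, 5]]   # generalized travel cost between zones (minutes)
T = gravity_model(P, A, C)
```

The calibrated distribution coefficients mentioned in the abstract play the role of `beta` here, shaping how strongly travel cost suppresses inter-zone trips.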

Keywords: innovative methods in transportation data collection, integrated public transportation system, traffic forecasts, transportation modeling, travel behavior

Procedia PDF Downloads 222
2531 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the Fourth Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not merely support disaster work; it is also the foundation of smart disaster management, and it draws on historical disaster information through artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence through progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured report form, existing as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so unstructured disaster data must be managed systematically. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, and scanned images of reports into electronic documents. The converted disaster data is then organized under a disaster code system as disaster information and stored in a disaster database system. Gathering and creating disaster information from unstructured data via OCR is an important element of smart disaster management. In this work, the recognition rate for Korean characters was improved to over 90% using an upgraded OCR pipeline. The recognition rate depends on the fonts, the character size, and any special symbols; we improved it through a machine learning algorithm.
The converted structured data is managed in a standardized disaster information form connected to the disaster code system. The disaster code system ensures that the structured information can be stored and retrieved across the entire disaster cycle, including historical disaster progress, damages, response, and recovery. The expected outcome of this research is its application to smart disaster management and decision making by combining artificial intelligence technologies with historical big data.
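Once OCR has produced text, organizing it under a disaster code system can be sketched as keyword matching. The codes and keywords below are illustrative placeholders, not the authors' actual coding scheme.

```python
import re

# Illustrative sketch (invented codes and keywords, not the authors' scheme):
# mapping OCR'd report text into a disaster code system by keyword matching.

DISASTER_CODES = {
    "FLOOD-01": ["flood", "inundation", "river overflow"],
    "STORM-02": ["typhoon", "storm", "heavy rain"],
    "QUAKE-03": ["earthquake", "seismic", "tremor"],
}

def classify_report(ocr_text):
    """Return the disaster codes whose keywords appear in the report text."""
    text = ocr_text.lower()
    hits = [code for code, words in DISASTER_CODES.items()
            if any(re.search(r"\b" + re.escape(w) + r"\b", text) for w in words)]
    return hits or ["UNCLASSIFIED"]

report = "Heavy rain caused river overflow and inundation in the district."
print(classify_report(report))
```

In a production pipeline this rule-based step would be replaced by the trained classifier the abstract alludes to, but the input/output contract (OCR text in, disaster codes out) is the same.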

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 129
2530 Subtitling in the Classroom: Combining Language Mediation, ICT and Audiovisual Material

Authors: Rossella Resi

Abstract:

This paper describes a project carried out in an Italian school with pupils learning English, combining three didactic tools attested to be relevant for the success of a young learner's language curriculum: the use of technology, intralingual and interlingual mediation (according to the CEFR), and the cultural dimension. The aim of this project was to test a technological, hands-on translation activity like subtitling in a formal teaching context and to exploit its potential as a motivational tool for developing listening, writing, translation, and cross-cultural skills among language learners. The activities involved the use of the professional subtitling software Aegisub and culture-specific films. The workshop was optional, so motivation was based entirely on the pleasure of engaging with a realistic subtitling program and on the challenge of meeting the constraints that a real-life work situation might involve. Twelve pupils between 16 and 18 years of age attended the afternoon workshop. The workshop was organized in three parts: (i) an introduction in which the learners were introduced to the concept and constraints of subtitling and given a few basic rules on spotting and segmentation; during this session, learners also had time to familiarize themselves with the main software features. (ii) The second part involved three subtitling activities, in plenary or in groups. In the first activity, the learners experienced the technical dimensions of subtitling: they were provided with a short video segment together with its transcription to be segmented and time-spotted. The second activity also involved oral comprehension: learners had to understand and transcribe a video segment before subtitling it. The third activity embedded a translation task based on a provided transcription, including segmentation and spotting of subtitles. (iii) The workshop ended with a small final project.
At this point, learners were able to complete a short subtitling assignment (transcription, translation, segmenting, and spotting) on their own with a similar video interview. The results of these assignments exceeded expectations, since the learners were highly motivated by the authentic and original nature of the task. The subtitled videos were evaluated and watched in the regular classroom together with other students who had not taken part in the workshop.
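Spotting, as described above, assigns each subtitle cue an in and out time. The sketch below shows how such cues map onto the SRT format that subtitling tools like Aegisub can export; the timings and cue text are invented for the example.

```python
# Illustrative sketch (invented cues): formatting spotted subtitle cues as
# SubRip (SRT) text, the timestamp format being HH:MM:SS,mmm.

def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def make_srt(cues):
    """cues: list of (start_s, end_s, text) -> SRT-formatted string."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

print(make_srt([(0.0, 2.4, "Hello!"), (2.6, 5.0, "Welcome to the workshop.")]))
```

Segmentation decisions (how much text per cue, where to break lines) happen upstream of this formatting step.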

Keywords: ICT, L2, language learning, language mediation, subtitling

Procedia PDF Downloads 416
2529 Preferred Leadership Behaviour of Coaches by Athletes in Individual and Team Sports in Nigeria

Authors: Ali Isa Danlami

Abstract:

This study examined the coaching leadership behaviours preferred by athletes in individual and team sports in Nigeria that may lead to increased satisfaction and performance. Six leadership behaviours were identified: democratic, training and instruction, situational consideration, autocratic, social support, and positive feedback. The focus of this study was how these six leadership behaviours relate to athletes' preferences for coaches and, in turn, to increased performance. The population of the study comprised male and female athletes of state sports councils in Nigeria. An ex-post facto research design was employed. Stratified and purposive sampling techniques were used to select two states from each of the six geo-political zones of the country: North Central (FCT, Nasarawa), North East (Bauchi, Gombe), North West (Kaduna, Sokoto), South East (Anambra, Imo), South West (Ogun, Ondo), and South South (Delta and Rivers). A modified questionnaire was used to collect data, and the data collected were subjected to a reliability test using the Statistical Package for the Social Sciences (SPSS). A two-sample Z-test procedure was used to test for significant differences because of the large number of subjects in the different groups. All hypotheses were tested at the 0.05 alpha level. The findings of the study concluded that athletes in team and individual sports generally preferred coaches who were more disposed towards training and instruction, social support, positive feedback, situational consideration, and democratic behaviours. It was also found that athletes in team sports have a higher preference for coaches with democratic behaviour. The results revealed that athletes in team and individual sports did not prefer coaches disposed towards autocratic behaviour.
Based on this, the following recommendations were made: democratic behaviour by coaches should be encouraged in team and individual sports; coaches should not engage in autocratic behaviours when coaching; and the preferred behaviours should be adopted by coaches to increase athletes' satisfaction and enhance performance.
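The two-sample Z-test used above can be sketched as follows; the group means, standard deviations, and sizes are invented placeholders, not the study's data.

```python
import math

# Illustrative sketch (invented summary statistics, not the study's data):
# a two-sample Z-test for the difference between two group means, suitable
# for the large samples described above.

def two_sample_z(mean1, sd1, n1, mean2, sd2, n2):
    """Z statistic for H0: mean1 == mean2 with large independent samples."""
    standard_error = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / standard_error

# e.g. mean preference score for democratic behaviour, team vs individual
z = two_sample_z(mean1=4.2, sd1=0.6, n1=180, mean2=3.9, sd2=0.7, n2=160)
significant = abs(z) > 1.96   # two-tailed critical value at alpha = 0.05
print(f"z = {z:.2f}, significant at 0.05: {significant}")
```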

Keywords: leadership behaviour, preference, athletes, individual, team, coaches

Procedia PDF Downloads 131
2528 An Association Model to Correlate the Experimentally Determined Mixture Solubilities of Methyl 10-Undecenoate with Methyl Ricinoleate in Supercritical Carbon Dioxide

Authors: V. Mani Rathnam, Giridhar Madras

Abstract:

Fossil fuels are depleting rapidly as the demand for energy and its allied chemicals continuously increases in the modern world. Therefore, sustainable renewable energy sources based on non-edible oils are being explored as a viable option, as they do not compete with food commodities. Oils such as castor oil are rich in fatty acids and thus can be used for the synthesis of biodiesel, bio-lubricants, and many other fine industrial chemicals. Several processes are available for the synthesis of different chemicals obtained from castor oil. One such process is the transesterification of castor oil, which results in a mixture of fatty acid methyl esters. The main products of this reaction are methyl ricinoleate and methyl 10-undecenoate. To separate these compounds, supercritical carbon dioxide (SCCO₂) was used as a green solvent, chosen for its easy availability, non-toxicity, non-flammability, and low cost. The preliminary requirement in designing any separation process is solubility or phase equilibrium data. Therefore, the solubility of a mixture of methyl ricinoleate with methyl 10-undecenoate in SCCO₂ was determined in the present study. The temperature and pressure ranges selected for the investigation were T = 313 K to 333 K and P = 10 MPa to 18 MPa. It was observed that the solubility (mol·mol⁻¹) of methyl 10-undecenoate varied from 2.44 x 10⁻³ to 8.42 x 10⁻³, whereas that of methyl ricinoleate varied from 0.203 x 10⁻³ to 6.28 x 10⁻³ within the chosen operating conditions. These solubilities followed retrograde behavior (characterized by a decrease in solubility with increasing temperature) throughout the range of investigated operating conditions. An association theory model, coupled with regular solution theory for the activity coefficients, was developed in the present study.
The deviation of this model from the experimental data is quantified by the average absolute relative deviation (AARD). The AARD% values are 4.69 for methyl 10-undecenoate and 8.08 for methyl ricinoleate in their mixture. A maximum solubility enhancement of 32% was observed for methyl ricinoleate in the mixture, and the highest selectivity of SCCO₂, 12, was observed for methyl 10-undecenoate in the mixture.
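The AARD metric used here is straightforward to compute; in the sketch below the solubility values are illustrative placeholders, not the experimental data set.

```python
# Illustrative sketch (placeholder values, not the experimental data set):
# the average absolute relative deviation used above to quantify model fit,
# AARD% = (100 / N) * sum(|y_exp - y_calc| / y_exp).

def aard_percent(y_exp, y_calc):
    assert len(y_exp) == len(y_calc) and all(e > 0 for e in y_exp)
    return 100.0 / len(y_exp) * sum(abs(e - c) / e for e, c in zip(y_exp, y_calc))

experimental = [2.44e-3, 4.10e-3, 8.42e-3]  # measured solubilities (mol/mol)
calculated   = [2.55e-3, 3.90e-3, 8.60e-3]  # association-model predictions
print(f"AARD = {aard_percent(experimental, calculated):.2f}%")
```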

Keywords: association theory, liquid mixtures, solubilities, supercritical carbon dioxide

Procedia PDF Downloads 134
2527 Purification and Characterization of a Novel Extracellular Chitinase from Bacillus licheniformis LHH100

Authors: Laribi-Habchi Hasiba, Bouanane-Darenfed Amel, Drouiche Nadjib, Pausse André, Mameri Nabil

Abstract:

Chitin, a linear 1,4-linked N-acetyl-D-glucosamine (GlcNAc) polysaccharide, is the major structural component of fungal cell walls, insect exoskeletons, and the shells of crustaceans. It is one of the most abundant naturally occurring polysaccharides and has attracted tremendous attention in the fields of agriculture, pharmacology, and biotechnology. Each year, a vast amount of chitin waste is released from the aquatic food industry, where crustaceans (prawn, crab, shrimp, and lobster) constitute one of the main agricultural products, creating a serious environmental problem. This linear polymer can be hydrolyzed by bases, acids, or enzymes such as chitinases. In this context, an extracellular chitinase (ChiA-65) was produced and purified from the newly isolated strain LHH100. Pure protein was obtained after heat treatment and ammonium sulphate precipitation followed by Sephacryl S-200 chromatography. Based on matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF/MS) analysis, the purified enzyme is a monomer with a molecular mass of 65,195.13 Da. The sequence of the 27 N-terminal residues of mature ChiA-65 showed high homology with family-18 chitinases. Optimal activity was achieved at pH 4 and 75 °C. Among the inhibitors and metals tested, p-chloromercuribenzoic acid, N-ethylmaleimide, Hg²⁺, and Hg⁺ completely inhibited enzyme activity. Chitinase activity was high on colloidal chitin, glycol chitin, glycol chitosan, chitotriose, and chitooligosaccharide. Chitinase activity towards synthetic substrates of the form p-NP-(GlcNAc)n (n = 2-4) decreased in the order p-NP-(GlcNAc)₂ > p-NP-(GlcNAc)₄ > p-NP-(GlcNAc)₃. Our results suggest that ChiA-65 preferentially hydrolyzed the second glycosidic link from the non-reducing end of (GlcNAc)n. ChiA-65 obeyed Michaelis-Menten kinetics, with Km and kcat values of 0.385 mg colloidal chitin/mL and 5000 s⁻¹, respectively.
ChiA-65 exhibits remarkable biochemical properties, suggesting that this enzyme is suitable for the bioconversion of chitin waste.
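The reported kinetic constants can be plugged directly into the Michaelis-Menten rate law; in the sketch below the enzyme concentration and substrate levels are illustrative assumptions, not measurements from the paper.

```python
# Illustrative sketch using the constants reported above (Km = 0.385 mg
# colloidal chitin/mL, kcat = 5000 s^-1); the enzyme concentration and
# substrate levels are invented for the example.

def michaelis_menten_rate(kcat, enzyme_conc, substrate, km):
    """v = kcat * [E] * [S] / (Km + [S])."""
    return kcat * enzyme_conc * substrate / (km + substrate)

KM, KCAT, E0 = 0.385, 5000.0, 1e-9
for s in (0.1, KM, 2.0):                     # substrate in mg/mL
    v = michaelis_menten_rate(KCAT, E0, s, KM)
    print(f"[S] = {s:.3f} mg/mL -> v = {v:.3e}")
# at [S] = Km the rate is exactly half of Vmax = kcat * [E]
```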

Keywords: Bacillus licheniformis LHH100, characterization, extracellular chitinase, purification

Procedia PDF Downloads 437
2526 New Drug Discoveries and Packaging Challenges

Authors: Anupam Chanda

Abstract:

Packaging presently plays a significant role in drug discovery. The process of selecting materials and the type of packaging also offers the packaging scientist an opportunity to consider biological delivery choices. Most injectable protein products are supplied in some sort of glass vial, prefilled syringe, or cartridge. For products with high pH, there is a risk of delamination from the inner surface of the glass vial. With protein-based drugs, the biggest issue is the effect of packaging derivatives on the protein's three-dimensional and surface structure: effects relating to denaturation or aggregation of the protein due to oxidation or to interactions with contaminants or impurities in the preparation. The potential for these effects needs to be carefully considered in choosing the container and the container closure system, to avoid putting patients in jeopardy. Causes of delamination: formulations with a high pH, including phosphate and citrate buffers, increase the risk of glass delamination; high alkali content in the glass can accelerate erosion; high temperature during the vial-forming process increases the risk of glass delamination; terminal sterilization (irradiation at 20-40 kGy for 150 min) is also a risk factor for specific products (veterinary parenteral administration) and can cause delamination; and high product-storage temperatures and long exposure times can increase the rate and severity of glass delamination. How to prevent delamination: treating the surface of the glass vials with materials such as ammonium sulfate, or siliconization, can reduce the rate of glass erosion; alternative sterilization methods should be considered in rare cases; the glass specification should be correct to ensure its suitability for the pH of the product; and cyclic olefin copolymer (COC) or cyclic olefin polymer (COP) can be used. Solutions to protein adsorption: Option 1, coat with linear methoxylated polyglycerol or hyperbranched methoxylated polyglycerol;
Option 2, the hyperbranched non-methoxylated coating performed best; Option 3, coat with hyperbranched polyglycerol; Option 4, correct selection of the sterilization method for the glass vial or syringe.

Keywords: delamination of glass, protein adsorption on the glass surface, extractable and leachable solutions, injectable designs for new drugs

Procedia PDF Downloads 94
2525 Analyzing the Place of Technology in Communication: Case Study of Kenya during COVID-19

Authors: Josephine K. Mule, Levi Obonyo

Abstract:

Technology has changed human life over time. The COVID-19 pandemic altered the work set-up, the school system, the shopping experience, church attendance, and even the way athletes train in Kenya. Although the use of technology to communicate and maintain interactions has been rising for the last 30 years, uptake during the COVID-19 pandemic was unprecedented. Traditionally, 'paid' work has been considered to take place outside the home, but COVID-19 resulted in what is now being called "the world's largest work-from-home experiment," with up to 43 percent of employees working remotely at least some of the time. This study was conducted on 90 respondents drawn from remote work set-ups, school systems, merchants and customers of online shopping, church leaders and congregants, and athletes and their coaches. Data were collected through questionnaires and interviews conducted online, covering the first three months after the first case of coronavirus was reported in Kenya. The study found that the use of technology is at the center of working remotely, with work interactions carried on various online platforms including Zoom, Microsoft Teams, and Google Meet. The school system has also integrated technology, with students defending their theses and dissertations online and university graduations being conducted virtually. Kenya is known for its long-distance runners; owing to the directives to reduce interactions, coaches have taken to guiding their athletes' training over social media applications such as WhatsApp. More local stores now offer an online shopping option to their customers. Churches have also felt the brunt of the situation, especially because of the restrictions on crowds, with online services becoming more popular in 2020 than ever before. Artists have innovatively started online musical concerts.
The findings indicate that one evident outcome of the COVID-19 period in Kenyan society is a population that uses technology more to communicate and get work done. Vices that have thrived in this season of increased technology use include the spreading of rumors on social media and cyberbullying. The place of technology seems to have been cemented by demand during this period.

Keywords: communication, coronavirus, COVID-19, Kenya, technology

Procedia PDF Downloads 139
2524 Towards Learning Query Expansion

Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier

Abstract:

The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to an initial query. Interestingly, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic algorithm based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. The results were better in terms of both MAP and NDCG.
The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to combine the good features of both. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
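As a simplified illustration of the association-rule idea behind the expansion-term selection above, candidate terms can be scored by the confidence of the rule "query term implies candidate term". This is a minimal sketch on a hypothetical toy corpus, not the authors' SDA 95 pipeline or their genetic-algorithm/classifier stage:

```python
from collections import Counter

# Toy corpus standing in for a document collection (hypothetical data);
# each document is reduced to its set of index terms.
docs = [
    {"car", "engine", "fuel"},
    {"car", "engine", "tyre"},
    {"engine", "fuel", "oil"},
    {"car", "road", "tyre"},
]

def expansion_candidates(query_term, docs, min_conf=0.5):
    """Score candidate expansion terms by association-rule confidence:
    conf(q -> t) = support({q, t}) / support({q})."""
    support_q = sum(query_term in d for d in docs)
    pair_counts = Counter(
        t for d in docs if query_term in d for t in d if t != query_term
    )
    return {t: c / support_q for t, c in pair_counts.items()
            if c / support_q >= min_conf}

print(expansion_candidates("car", docs))
```

In a full system, the rules surviving this confidence filter would form the generic basis from which the learning algorithm selects the final expansion terms.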

Keywords: supervised learning, classification, query expansion, association rules

Procedia PDF Downloads 325
2523 Effects of Particle Size Distribution of Binders on the Performance of Slag-Limestone Ternary Cement

Authors: Zhuomin Zou, Thijs Van Landeghem, Elke Gruyaert

Abstract:

Using supplementary cementitious materials, such as blast-furnace slag and limestone, to replace cement clinker is a promising method to reduce the carbon emissions from cement production. To efficiently use slag and limestone, it is necessary to carefully select the particle size distribution (PSD) of the binders. This study investigated the effects of the PSD of binders on the performance of slag-limestone ternary cement. The Portland cement (PC) was prepared by grinding 95% clinker + 5% gypsum. Based on the PSD parameters of the binders, three types of ternary cements with a similar overall PSD were designed: NO.1 fine slag, medium PC, and coarse limestone; NO.2 fine limestone, medium PC, and coarse slag; NO.3 fine PC, medium slag, and coarse limestone. The binder contents in the ternary cements were (a) 50% PC, 40% slag, and 10% limestone (the high cement group) or (b) 35% PC, 55% slag, and 10% limestone (the low cement group). Pure PC and a binary cement with 50% slag and 50% PC, prepared with the same binders as the ternary cements, were considered as reference cements. All these cements were used to investigate the mortar performance in terms of workability, strength at 2, 7, 28, and 90 days, carbonation resistance, and non-steady-state chloride migration resistance at 28 and 56 days. Results show that blending medium PC with fine slag could exhibit performance comparable to blending fine PC with medium/coarse slag in binary cement. For the three ternary cements in the high cement group, the ternary cement with fine limestone (NO.2) shows the lowest strength, carbonation, and chloride migration performance. The ternary cements with fine slag (NO.1) and with fine PC (NO.3) show the highest flexural strength at early and late ages, respectively. In addition, compared with the ternary cement with fine PC (NO.3), the ternary cement with fine slag (NO.1) has a similar carbonation resistance and a better chloride migration resistance.
For the low cement group, the three ternary cements have similar flexural and compressive strengths before 7 days. After 28 days, the ternary cement with fine limestone (NO.2) shows the highest flexural strength, while fine PC (NO.3) has the highest compressive strength. In addition, the ternary cement with fine slag (NO.1) shows a better chloride migration resistance but a lower carbonation resistance compared with the other two ternary cements. Moreover, the durability performance of the ternary cement with fine PC (NO.3) is better than that of fine limestone (NO.2).

Keywords: limestone, particle size distribution, slag, ternary cement

Procedia PDF Downloads 126
2522 Exponential Stabilization of a Flexible Structure via a Delayed Boundary Control

Authors: N. Smaoui, B. Chentouf

Abstract:

The boundary stabilization problem of the rotating disk-beam system is a topic of ongoing research interest. This system involves a flexible beam attached to the center of a disk, and its control and stabilization have been extensively studied. This research focuses on the case where the center of mass is fixed in an inertial frame and the rotation of the center is non-uniform. The system is represented by a set of nonlinear coupled partial differential equations and ordinary differential equations. The boundary stabilization problem of this system via a delayed boundary control is considered. We assume that the boundary control is either a force type control or a moment type control and is subject to a constant time delay. The aim of this research is threefold: first, we demonstrate that the rotating disk-beam system is well-posed in an appropriate functional space; then, we establish the exponential stability property of the system; finally, we provide numerical simulations that illustrate the theoretical findings. Semigroup theory is used to establish the well-posedness of the system; the resolvent method, together with a variation of constants formula, is employed to prove the exponential stability property; and the finite element method is used to demonstrate the theoretical results through numerical simulations. The findings have potential implications for the design and implementation of control strategies in similar systems.
In conclusion, this research demonstrates that the rotating disk-beam system can be stabilized using a boundary control with a time delay. The well-posedness and exponential stability properties are established through theoretical analysis and further supported by numerical simulations. The research contributes to the understanding and practical application of control strategies for flexible structures, providing insights into the stability of rotating disk-beam systems.

Keywords: rotating disk-beam, delayed force control, delayed moment control, torque control, exponential stability

Procedia PDF Downloads 75
2521 'Coping with Workplace Violence' Workshop: A Commendable Addition to the Curriculum for BA in Nursing

Authors: Ilana Margalith, Adaya Meirowitz, Sigalit Cohavi

Abstract:

Violence against health professionals by patients and their families has recently become a disturbing phenomenon worldwide, exacting psychological as well as economic tolls. Health workplaces in Israel (e.g., hospitals and HMO clinics) provide workshops for their employees, supplying them with coping strategies. However, these workshops do not focus on nursing students, who are also subjected to this violence; their learning environment is no longer as protective as it used to be. Furthermore, coping with violence was not part of the curriculum for Israeli nursing students. Thus, based on human aggression theories, which depict the pivotal role of the professional's correct response in preventing the onset of an aggressive response or the escalation of violence, a workshop was developed for undergraduate nursing students at the Clalit Nursing Academy, Rabin Campus (Dina), Israel. The workshop aimed at reducing students' anxiety vis-à-vis the aggressive patient or family, in addition to strengthening their ability to cope with such situations. The students practiced interpersonal skills, especially those relevant to the early detection of potential violence, as well as a ‘correct response’ to the violence, thus developing the necessary steps to be implemented when encountering violence in the workplace. In order to assess the efficiency of the workshop, the participants filled out a questionnaire comprising knowledge and self-efficacy scales. Moreover, the replies of the 23 participants in this workshop were compared with those of 24 students who attended a standard course on interpersonal communication. Students' self-efficacy and knowledge were measured in both groups before and after the course. A statistically significant interaction was found between group (workshop/standard course) and time (before/after) as to the influence on students' self-efficacy (p=0.004) and knowledge (p=0.007).
Nursing students who participated in this ‘coping with workplace violence’ workshop gained knowledge, confidence, and a sense of self-efficacy with regard to workplace violence. Early detection of signs of imminent violence amongst patients or families and the prevention of its escalation, as well as the ability to manage the threatening situation when it occurs, are acquired skills. Encouraging nursing students to learn and practice these skills may enhance their ability to cope with these unfortunate occurrences.

Keywords: early detection of violence, nursing students, patient aggression, self-efficacy, workplace violence

Procedia PDF Downloads 138
2520 Optimization of Waste Plastic to Fuel Oil Plants' Deployment Using Mixed Integer Programming

Authors: David Muyise

Abstract:

Mixed Integer Programming (MIP) is an approach that involves the optimization of a range of decision variables in order to minimize or maximize a particular objective function. The main objective of this study was to apply the MIP approach to optimize the deployment of waste plastic to fuel oil processing plants in Uganda. The processing plants are meant to reduce plastic pollution by pyrolyzing the waste plastic into a cleaner fuel that can be used to power diesel/paraffin engines, so as (1) to reduce the negative environmental impacts associated with plastic pollution and (2) to narrow the energy gap by utilizing the fuel oil. A programming model was established and tested in two case study applications: small-scale applications in rural towns and large-scale deployment across major cities in the country. In order to design the supply chain, optimal decisions on the types of waste plastic to be processed, the size, location, and number of plants, and downstream fuel applications were made concurrently, based on the payback period, investor requirements for capital cost, and the production cost of fuel and electricity. The model comprises qualitative data gathered from waste plastic pickers at landfills and potential investors, and quantitative data obtained from primary research. The study found that a distributed system is suitable for small rural towns, whereas a decentralized system is only suitable for big cities. The small towns of Kalagi, Mukono, Ishaka, and Jinja were found to be the ideal locations for the deployment of distributed processing systems, whereas the cities of Kampala, Mbarara, and Gulu were found to be the ideal locations to initially utilize the decentralized pyrolysis technology system. We conclude that the model findings will be most useful to investors, engineers, plant developers, and municipalities interested in waste plastic to fuel processing in Uganda and elsewhere in developing economies.
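The binary plant-siting decisions at the heart of such a model can be sketched by brute-force enumeration over a tiny hypothetical instance. The site names follow the paper, but the capacities and capital costs below are purely illustrative assumptions; a real MIP solver would replace the enumeration:

```python
from itertools import product

# Hypothetical candidate sites: (capital cost in USD, capacity in tonnes/day).
# These numbers are illustrative, not data from the study.
sites = {
    "Kalagi": (40_000, 2.0),
    "Mukono": (55_000, 3.0),
    "Jinja":  (70_000, 4.0),
}
demand = 5.0  # tonnes/day of waste plastic to be processed

def cheapest_deployment(sites, demand):
    """Enumerate all open/close combinations (the binary variables of the
    MIP) and keep the cheapest one whose total capacity meets demand."""
    best = None
    names = list(sites)
    for opened in product([0, 1], repeat=len(names)):
        chosen = [n for n, o in zip(names, opened) if o]
        cap = sum(sites[n][1] for n in chosen)
        cost = sum(sites[n][0] for n in chosen)
        if cap >= demand and (best is None or cost < best[0]):
            best = (cost, chosen)
    return best

print(cheapest_deployment(sites, demand))
```

Enumeration is only viable for a handful of sites; the exponential growth in combinations is exactly why the study's larger deployment problem calls for a MIP formulation and solver.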

Keywords: mixed integer programming, fuel oil plants, optimisation of waste plastics, plastic pollution, pyrolyzing

Procedia PDF Downloads 129
2519 Factors Associated with Recurrence and Long-Term Survival in Younger and Postmenopausal Women with Breast Cancer

Authors: Sopit Tubtimhin, Chaliya Wamaloon, Anchalee Supattagorn

Abstract:

Background and Significance: Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among women. This study aims to determine factors potentially predicting recurrence and long-term survival after the first recurrence in surgically treated patients, comparing postmenopausal and younger women. Methods and Analysis: A retrospective cohort study was performed on 498 Thai women with invasive breast cancer who had undergone mastectomy and been followed up at Ubon Ratchathani Cancer Hospital, Thailand. Data were collected through a systematic chart audit of medical records and pathology reports between January 1, 2002, and December 31, 2011. The last follow-up time point for surviving patients was December 31, 2016. A Cox regression model was used to calculate hazard ratios for recurrence and death. Findings: The median age at diagnosis was 49 (SD ± 9.66); 47% were postmenopausal women (≥ 51 years and no menstrual flow for a minimum of 12 months) and 53% were younger women (< 51 years and still menstruating). Median time from diagnosis to the last follow-up or death was 10.81 [95% CI = 9.53-12.07] years in younger cases and 8.20 [95% CI = 6.57-9.82] years in postmenopausal cases. The recurrence-free survival (RFS) estimates for younger women at 1, 5, and 10 years (95.0%, 64.0%, and 58.93%, respectively) appeared slightly better than the 92.7%, 58.1%, and 53.1% for postmenopausal women [HRadj = 1.25, 95% CI = 0.95-1.64]. Overall survival (OS) for younger women at 1, 5, and 10 years was 97.7%, 72.7%, and 52.7%, respectively; for postmenopausal patients, OS at 1, 5, and 10 years was 95.7%, 70.0%, and 44.5%, respectively; there were no significant differences in survival [HRadj = 1.23, 95% CI = 0.94-1.64].
Multivariate analysis identified five factors negatively impacting survival: triple negative [HR = 2.76, 95% CI = 1.47-5.19], Her2-enriched [HR = 2.59, 95% CI = 1.37-4.91], luminal B [HR = 2.29, 95% CI = 1.35-3.89], non-free margin [HR = 1.98, 95% CI = 1.00-3.96], and receiving only adjuvant chemotherapy [HR = 3.75, 95% CI = 2.00-7.04]. Statistically significant risk factors for overall cancer recurrence were Her2-enriched [HR = 5.20, 95% CI = 2.75-9.80], triple negative [HR = 3.87, 95% CI = 1.98-7.59], luminal B [HR = 2.59, 95% CI = 1.48-4.54], and receiving only adjuvant chemotherapy [HR = 2.59, 95% CI = 1.48-5.66]. Discussion and Implications: Outcomes from this study show that postmenopausal status is associated with an increased risk of recurrence and mortality. These results provide useful information for planning the screening and treatment of early-stage breast cancer.
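Survival estimates of the kind quoted above are typically obtained with the Kaplan-Meier method. A minimal sketch of that estimator, assuming distinct event times and toy follow-up data (not the study's dataset or its Cox regression), looks like:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. `times` are follow-up durations
    (e.g. years); `events` are 1 for an observed event (recurrence/death)
    and 0 for a censored observation. Assumes distinct event times."""
    order = sorted(zip(times, events))
    n_at_risk = len(order)
    s = 1.0
    curve = []  # (time, survival probability) at each event time
    for t, e in order:
        if e:
            s *= (n_at_risk - 1) / n_at_risk  # step down at each event
            curve.append((t, s))
        n_at_risk -= 1  # both events and censorings leave the risk set
    return curve

# Toy example: 4 patients, events at t=1 and t=3, censored at t=2 and t=4.
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0]))
```

Real analyses (tied event times, confidence intervals, covariate-adjusted hazard ratios) would use a dedicated package such as `lifelines` in Python or `survival` in R rather than this sketch.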

Keywords: breast cancer, menopause status, recurrence-free survival, overall survival

Procedia PDF Downloads 163
2518 Estimation of Dynamic Characteristics of a Middle Rise Steel Reinforced Concrete Building Using Long-Term Earthquake Observation Records

Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu

Abstract:

In the earthquake-resistant design of buildings, evaluation of vibration characteristics is important. In recent years, with the increase in super high-rise buildings, evaluating the response of not only the first mode but also higher modes has become important. Knowledge of vibration characteristics in buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, using earthquake observation records of an SRC building and applying a frequency filter to an ARX model, the characteristics of the first and second modes were studied. First, we studied the change of the eigenfrequency and the damping ratio during the 3.11 earthquake. The eigenfrequency gradually decreases from the time of earthquake occurrence and becomes almost stable after about 150 seconds. At this time, the decreasing rates of the 1st and 2nd eigenfrequencies are both about 0.7. Although the damping ratio has a larger error than the eigenfrequency, both the 1st and 2nd damping ratios are 3 to 5%. Also, there is a strong correlation between the 1st and 2nd eigenfrequencies, with the regression line y = 3.17x; for the damping ratio, the regression line is y = 0.90x, so the 1st and 2nd damping ratios are approximately equal. Next, we studied the eigenfrequency and damping ratio using earthquakes from 1998 to 2014, connected in order of occurrence. The eigenfrequency slowly declined from immediately after completion and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigenfrequencies until about 7 years later are about 0.8. The 1st and 2nd damping ratios are both about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%.
For the eigenfrequency, there is a strong correlation between the 1st and 2nd, with the regression line y = 3.17x; for the damping ratio, the regression line is y = 1.01x, so the 1st and 2nd damping ratios are approximately equal. Based on the above results, the changes in eigenfrequency and damping ratio are summarized as follows. Over the long term, both the 1st and 2nd eigenfrequencies gradually declined from immediately after completion, tended to stabilize after a few years, and declined further after the 3.11 earthquake; there is a strong correlation between the 1st and 2nd, with the declining time and the decreasing rate of the same degree. Over the long term, both the 1st and 2nd damping ratios are about 1 to 6%; after the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%, and the two remain approximately equal.
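Once an ARX model has been fitted to the records, eigenfrequencies and damping ratios are commonly recovered from the model's discrete-time poles by mapping them back to the continuous-time plane. A minimal sketch of that conversion step (the pole value and sampling interval used in the test are illustrative assumptions, not values from the paper):

```python
import cmath
import math

def modal_params(z, dt):
    """Convert one discrete-time pole z of an identified ARX model
    (sampling interval dt, in seconds) into an eigenfrequency [Hz]
    and a damping ratio. Valid while the pole's continuous-time
    image lies within the principal branch of the logarithm."""
    s = cmath.log(z) / dt   # map the z-plane pole to the s-plane
    wn = abs(s)             # natural circular frequency [rad/s]
    zeta = -s.real / wn     # damping ratio
    return wn / (2 * math.pi), zeta
```

The reverse construction (building a pole from a known frequency and damping ratio, then recovering them) is a convenient self-check, and is what the test below does.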

Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records

Procedia PDF Downloads 217
2517 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer tested) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
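The k-mer representation underlying such classification models can be sketched as follows. This is a minimal example on a made-up DNA fragment, not the MTB genomes or the study's resampling pipeline:

```python
from collections import Counter

def kmer_profile(seq, k):
    """Count all overlapping k-mers in a DNA sequence and normalise the
    counts; the resulting frequencies can serve as a fixed-length
    feature vector for a downstream classifier."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

# "ATGATGCC" with k=3 yields 6 overlapping 3-mers, with "ATG" seen twice.
print(kmer_profile("ATGATGCC", 3))
```

Increasing k makes the profile sparser but more discriminative, which mirrors the trade-off between accuracy, computing resources, and explainability discussed in the abstract.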

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 167
2516 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer tested) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 159
2515 Constructivism and Situational Analysis as Background for Researching Complex Phenomena: Example of Inclusion

Authors: Radim Sip, Denisa Denglerova

Abstract:

It is impossible to capture complex phenomena, such as inclusion, with reductionism. The most common form of reductionism is the objectivist approach, where processes and relationships are reduced to entities and clearly outlined phases, with a consequent search for relationships between them. Constructivism as a paradigm and situational analysis as a methodological research portfolio represent a way to avoid the dominant objectivist approach. They work with a situation, i.e., with the essential blending of actors and their environment. Primary transactions take place between actors and their surroundings. Researchers create constructs based on their need to solve a problem. Concepts therefore do not describe reality, but rather a complex of real needs in relation to the available options for how such needs can be met. Examining a complex problem requires corresponding methodological tools and an overall research design. Using original research on inclusion in the Czech Republic as an example, this contribution demonstrates that inclusion is not a substance easily described, but rather a relationship field that changes its forms in response to its actors’ behaviour and current circumstances. Inclusion consists of a dynamic relationship between an ideal, real circumstances, and ways to achieve that ideal under the given circumstances. Such achievement takes many shapes and thus cannot be captured by a description of objects; it can be expressed in relationships in a situation defined by time and space. Situational analysis offers tools to examine such phenomena. It understands a situation as a complex of dynamically changing aspects and prefers relationships and positions in the given situation over a clear and final definition of actors, entities, etc. Situational analysis assumes the creation of constructs as a tool for solving the problem at hand.
It emphasizes the meanings that arise in the process of coordinating human actions, and the discourses through which these meanings are negotiated. Finally, it offers “cartographic tools” (situational maps, social worlds/arenas maps, positional maps) that are able to capture complexity in other than linear-analytical ways. This approach allows inclusion to be described as a complex of phenomena taking place with a certain historical preference, a complex that can be overlooked if analyzed with a more traditional approach.

Keywords: constructivism, situational analysis, objective realism, reductionism, inclusion

Procedia PDF Downloads 149
2514 A Telecoupling Lens to Study Global Sustainability Entanglements along Supply Chains: The Case of Dutch-Kenyan Rose Trade

Authors: Klara Strecker

Abstract:

During times of globalization, socioeconomic systems have become connected across the world through global supply chains. As a result, consumption and production locations have increasingly become spatially decoupled. This decoupling leads to complex entanglements of systems and sustainability challenges across distances: entanglements which can be conceptualized as telecouplings. Through telecouplings, people and environments across the world have become closely connected, bringing challenges as well as opportunities. Some argue that telecoupling dynamics started taking shape during times of colonization, when resources were first traded across the world. An example of such a telecoupling is that of the rose. Every third rose sold in Europe is grown in Kenya and enters the European market through the Dutch flower auction system. Many Kenyan farms are Dutch-owned, closely entangling Kenya and the Netherlands through the trade of roses. Furthermore, the globalization of the flower industry and the resulting shift of production away from the Netherlands and towards Kenya have led to significant changes in the Dutch horticulture sector. However, the sustainability effects of this rose telecoupling are limited neither to the horticulture sector nor to the Netherlands and Kenya. Alongside the flow of roses between these countries come complex financial, knowledge-based, and regulatory flows. The rose telecoupling also creates spillover effects in other countries, such as Ethiopia, and other industries, such as Kenyan tourism. Telecoupling dynamics therefore create complex entanglements that cut across sectors, environments, communities, and countries, making it challenging to govern and manage telecouplings and their sustainability implications effectively. Indeed, sustainability can no longer be studied in spatial and temporal isolation.
This paper aims to map the rose telecoupling’s complex environmental and social interactions to identify points of tension that can guide sustainability-targeted interventions. Mapping these interactions will provide a more holistic understanding of the sustainability challenges involved in the Dutch-Kenyan rose trade. This interdisciplinary telecoupling approach reframes and integrates interdisciplinary knowledge about the rose trade between the Netherlands, Kenya, and beyond.

Keywords: Dutch-Kenyan rose trade, globalization, socio-ecological system, sustainability, telecoupling

Procedia PDF Downloads 104
2513 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key processes in construction projects. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early in the design phase, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the “similar service description of study and supervision of architectural works” published by the Vice Presidency of Strategic Planning & Supervision of I.R. Iran as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled with the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning”, and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear goals of the client”, “time pressure from the client”, and “lack of knowledge of architects about the requirements of end-users”.
In detecting errors in the case study, the lack of standards and design criteria, and the lack of coordination among them, was a barrier. Nevertheless, “lack of coordination between the architectural design and the electrical and mechanical facilities”, “violation of standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 130
2512 Understanding the Effects of Lamina Stacking Sequence on Structural Response of Composite Laminates

Authors: Awlad Hossain

Abstract:

Structural weight reduction with improved functionality is one of the targeted desires of engineers, which drives materials and structures to be lighter. One way to achieve this objective is through the replacement of metallic structures with composites. The main advantages of composite materials are to be lightweight and to offer high specific strength and stiffness. Composite materials can be classified in various ways based on the fiber types and fiber orientations. Fiber reinforced composite laminates are prepared by stacking single sheet of continuous fibers impregnated with resin in different orientation to get the desired strength and stiffness. This research aims to understand the effects of Lamina Stacking Sequence (LSS) on the structural response of a symmetric composite laminate, defined by [0/60/-60]s. The Lamina Stacking Sequence (LSS) represents how the layers are stacked together in a composite laminate. The [0/60/-60]s laminate represents a composite plate consists of 6 layers of fibers, which are stacked at 0, 60, -60, -60, 60 and 0 degree orientations. This laminate is also called symmetric (defined by subscript s) as it consists of same material and having identical fiber orientations above and below the mid-plane. Therefore, the [0/60/-60]s, [0/-60/60]s, [60/-60/0]s, [-60/60/0]s, [60/0/-60]s, and [-60/0/60]s represent the same laminate but with different LSS. In this research, the effects of LSS on laminate in-plane and bending moduli was investigated first. The laminate moduli dictate the in-plane and bending deformations upon loading. This research also provided all the setup and techniques for measuring the in-plane and bending moduli, as well as how the stress distribution was assessed. Then, the laminate was subjected to in-plane force load and bending moment. The strain and stress distribution at each ply for different LSS was investigated using the concepts of Macro-Mechanics. 
Finally, several numerical simulations were conducted using the Finite Element Analysis (FEA) software ANSYS to investigate the effects of LSS on deformations and stress distribution. The FEA results were also compared to the Macro-Mechanics solutions obtained with MATLAB. The outcome of this research helps composite users determine the optimum LSS required to minimize the overall deformation and stresses. It is beneficial to predict the structural response of composite laminates analytically and/or numerically before in-house fabrication.
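The key property of the LSS permutations discussed above can be sketched with Classical Laminate Theory: the in-plane stiffness matrix A depends only on which angles are present, while the bending stiffness matrix D depends on where each ply sits through the thickness. The material constants and ply thickness below are illustrative CFRP-like values, not figures from the paper.

```python
# Sketch of Classical Laminate Theory (CLT): A and D matrices of a laminate.
# Material constants are assumed, illustrative values only.
import numpy as np

def qbar(E1, E2, G12, nu12, theta_deg):
    """Transformed reduced stiffness matrix of one ply at angle theta."""
    nu21 = nu12 * E2 / E1
    d = 1 - nu12 * nu21
    Q11, Q22, Q12, Q66 = E1 / d, E2 / d, nu12 * E2 / d, G12
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    Qb = np.empty((3, 3))
    Qb[0, 0] = Q11*c**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*s**4
    Qb[1, 1] = Q11*s**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*c**4
    Qb[0, 1] = Qb[1, 0] = (Q11 + Q22 - 4*Q66)*s**2*c**2 + Q12*(s**4 + c**4)
    Qb[2, 2] = (Q11 + Q22 - 2*Q12 - 2*Q66)*s**2*c**2 + Q66*(s**4 + c**4)
    Qb[0, 2] = Qb[2, 0] = (Q11 - Q12 - 2*Q66)*s*c**3 + (Q12 - Q22 + 2*Q66)*s**3*c
    Qb[1, 2] = Qb[2, 1] = (Q11 - Q12 - 2*Q66)*s**3*c + (Q12 - Q22 + 2*Q66)*s*c**3
    return Qb

def laminate_AD(angles, t_ply, E1, E2, G12, nu12):
    """In-plane (A) and bending (D) stiffness matrices for equal-thickness plies."""
    n = len(angles)
    z = np.linspace(-n*t_ply/2, n*t_ply/2, n + 1)  # ply interface coordinates
    A, D = np.zeros((3, 3)), np.zeros((3, 3))
    for k, th in enumerate(angles):
        Qb = qbar(E1, E2, G12, nu12, th)
        A += Qb * (z[k+1] - z[k])
        D += Qb * (z[k+1]**3 - z[k]**3) / 3
    return A, D

mat = dict(E1=140e9, E2=10e9, G12=5e9, nu12=0.3)  # assumed CFRP-like values
A1, D1 = laminate_AD([0, 60, -60, -60, 60, 0], 0.125e-3, **mat)   # [0/60/-60]s
A2, D2 = laminate_AD([60, 0, -60, -60, 0, 60], 0.125e-3, **mat)   # [60/0/-60]s
print(np.allclose(A1, A2))  # in-plane moduli identical for any LSS of these plies
print(np.allclose(D1, D2))  # bending moduli change with ply position
```

Under these assumptions the first check prints True and the second False, which is exactly why the permuted layups share in-plane moduli but deform differently in bending.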

Keywords: composite, lamina, laminate, lamina stacking sequence, laminate moduli, laminate strength

Procedia PDF Downloads 10
2511 Pharyngealization Spread in Ibbi Dialect of Yemeni Arabic: An Acoustic Study

Authors: Fadhl Qutaish

Abstract:

This paper examines pharyngealization spread in one of the Yemeni Arabic dialects, namely Ibbi Arabic (IA). It investigates how pharyngealized sounds spread their acoustic features onto the neighboring vowels and change their default features. This feature has been investigated quite well in MSA but still has to be studied in depth in the different dialects of Arabic, which would bring about a clearer picture of the similarities and differences among these dialects and help in mapping them based on the way this feature is utilized. Though the studies are numerous, none of them has illustrated how far into a multi-syllabic word the spread can reach and whether it proceeds in a steady or gradient manner. This study tries to fill this gap and give a satisfactory explanation of pharyngealization spread in the Ibbi dialect. It is the first step towards a larger investigation of the different dialects of Yemeni Arabic in the future. The data recorded are represented in minimal pairs in which the trigger (the pharyngealized or non-pharyngealized sound) is in the initial or final position of monosyllabic and multisyllabic words. A set of 24 words was divided into four groups and repeated three times by three subjects, yielding 216 tokens that were tested and analyzed. The subjects are three male speakers aged between 28 and 31 with no history of neurological, speaking, or hearing problems. All of them are bilingual speakers of Arabic and English and native speakers of the Ibbi dialect. Recordings were done in a sound-proof room, and the Praat software was used for the analysis and coding of the trajectories of F1 and F2 for the low vowel /a/, to see the effect of pharyngealization on the formant trajectory within the same syllable and in other syllables of the same word by comparing the F1 and F2 formants to the non-pharyngealized environment. The results show that pharyngealization spread is gradient (progressively and regressively).
The spread is reflected in the gradual raising of F1 as we move closer towards the trigger and the gradual lowering of F2 as well. The F1 mean values in tri-syllabic words with a word-initial trigger show a raise of 37.9 Hz in the first syllable, 26.8 Hz in the second syllable, and 14.2 Hz in the third syllable. F2 mean values undergo a lowering of 239 Hz in the first syllable, 211.7 Hz in the second syllable, and 176.5 Hz in the third syllable. This gradual decrease in the difference of F2 values between the non-pharyngealized and pharyngealized contexts illustrates that the spread is gradient. A similar result was found when the trigger is word-final, which shows that the spread is gradient (progressively and regressively).
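The gradience claim above rests on the formant shifts decaying monotonically with distance from the trigger. A minimal sketch using the mean shifts reported for tri-syllabic words with a word-initial trigger:

```python
# Gradience check on the reported mean formant shifts (Hz), syllables 1-3
# counted outward from the pharyngealized trigger.
f1_raise = [37.9, 26.8, 14.2]    # F1 raising per syllable
f2_lower = [239.0, 211.7, 176.5]  # F2 lowering per syllable

def is_gradient(shifts):
    """True if the effect decays monotonically with distance from the trigger."""
    return all(a > b for a, b in zip(shifts, shifts[1:]))

print(is_gradient(f1_raise), is_gradient(f2_lower))  # True True
```

Both series decay strictly, consistent with a gradient rather than steady (categorical) spread.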

Keywords: pharyngealization, Yemeni Arabic, Ibbi dialect, pharyngealization spread

Procedia PDF Downloads 222
2510 Three Issues for Integrating Artificial Intelligence into Legal Reasoning

Authors: Fausto Morais

Abstract:

Artificial intelligence has been widely used in law. Programs are able to classify suits, identify decision-making patterns, predict outcomes, and formalize legal arguments. In Brazil, the artificial intelligence system Victor has been classifying cases according to the Supreme Court's standards. When those programs perform such tasks, they simulate a kind of legal decision and legal argument, raising doubts about how artificial intelligence can be integrated into legal reasoning. Taking this into account, the following three issues are identified: the problem of hypernormatization, the argument of legal anthropocentrism, and artificial legal principles. Hypernormatization can be seen in the Brazilian legal context in the Supreme Court's usage of the Victor program. This program has generated efficiency and consistency. On the other hand, there is a real risk of over-standardizing factual and normative legal features. Legal clerks and programmers should therefore work together to develop an adequate way to model legal language into computational code. If this is possible, intelligent programs may enact legal decisions in easy cases automatically, and in this picture, the legal anthropocentrism argument takes place. This argument holds that only human beings should enact legal decisions, because human beings have a conscience, free will, and self-unity. In spite of that, it is possible to argue against the anthropocentrism argument and to show how intelligent programs may work around human shortcomings such as misleading cognition, emotions, and lack of memory. In this way, intelligent machines could pass legal decisions automatically by classification, as Victor does in Brazil, because they are bound by legal patterns and should not deviate from them. Notwithstanding, artificial intelligence programs can be helpful beyond easy cases.
In hard cases, they are able to identify legal standards and legal arguments by using machine learning. For that, a dataset of legal decisions regarding a particular matter must be available, which is a reality in the Brazilian Judiciary. Through such a procedure, artificial intelligence programs can support a human decision in hard cases, providing legal standards and arguments based on empirical evidence. Those legal features claim an argumentative weight in legal reasoning and should serve as references for judges when they must decide whether to maintain or overcome a legal standard.
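The kind of classification described above — matching a new case to an existing court standard — can be sketched in miniature as a text-retrieval step. The standard labels, summaries, and query text below are entirely hypothetical; Victor's actual pipeline is not described in the abstract, and this is only a stand-in using bag-of-words cosine similarity.

```python
# Hypothetical sketch: retrieve the closest precedent standard for a new
# case by bag-of-words cosine similarity (stdlib only). Labels and texts
# are invented for illustration, not taken from the Victor system.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

standards = {  # hypothetical standard ids and one-line summaries
    "standard-A": "general repercussion federal tax dispute",
    "standard-B": "labor contract severance dismissal claim",
}

def closest_standard(case_text: str) -> str:
    q = Counter(case_text.lower().split())
    return max(standards, key=lambda k: cosine(q, Counter(standards[k].split())))

print(closest_standard("federal tax dispute with general repercussion"))
# → "standard-A"
```

A production system would replace this with a trained classifier over a labeled decision corpus, but the retrieval step illustrates how a binding pattern can anchor automatic classification.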

Keywords: artificial intelligence, artificial legal principles, hypernormatization, legal anthropocentrism argument, legal reasoning

Procedia PDF Downloads 145
2509 Incidence and Risk Factors of Central Venous Associated Infections in a Tunisian Medical Intensive Care Unit

Authors: Ammar Asma, Bouafia Nabiha, Ghammam Rim, Ezzi Olfa, Ben Cheikh Asma, Mahjoub Mohamed, Helali Radhia, Sma Nesrine, Chouchène Imed, Boussarsar Hamadi, Njah Mansour

Abstract:

Background: Central venous catheter associated infections (CVC-AI) are among the serious hospital-acquired infections. The aims of this study are to determine the incidence of CVC-AI and their risk factors among patients followed in a Tunisian medical intensive care unit (ICU). Materials/Methods: A prospective cohort study conducted between September 15th, 2015 and November 15th, 2016 in an 8-bed medical ICU, including all patients admitted for more than 48 h. CVC-AI were defined according to the CDC Atlanta criteria. Enrollment was based on clinical and laboratory diagnosis of CVC-AI. For all subjects, age, sex, underlying diseases, SAPS II score, ICU length of stay, and exposure to CVC (number of CVCs placed, site of insertion, and duration of catheterization) were recorded. Risk factors were analyzed by conditional stepwise logistic regression. A p-value of < 0.05 was considered significant. Results: Among 192 eligible patients, 144 (75%) had a central venous catheter. Twenty-eight patients (19.4%) developed CVC-AI, with an incidence density of 20.02/1000 CVC-days. Among these infections, 60.7% (n=17) were systemic CVC-AI (with negative blood culture) and 35.7% (n=10) were bloodstream CVC-AI. The mean SAPS II of patients with CVC-AI was 32.76 ± 14.48; their mean Charlson index was 1.77 ± 1.55; their mean duration of catheterization was 15.46 ± 10.81 days, and the mean duration of one central line was 5.8 ± 3.72 days. Gram-negative bacteria were identified in 53.5% of CVC-AI (n=15), dominated by multi-drug resistant Acinetobacter baumannii (n=7). Staphylococci were isolated in 3 CVC-AI. Fourteen (50%) patients with CVC-AI died. Univariate analysis identified male sex (p=0.034), referral from another hospital department (p=0.03), tobacco use (p=0.006), duration of sedation (p=0.003), and duration of catheterization (p<0.001) as possible risk factors of CVC-AI.
Multivariate analysis showed that the independent factors of CVC-AI were male sex (OR=5.73, 95% CI [2; 16.46], p=0.001), Ramsay score (OR=1.57, 95% CI [1.036; 2.38], p=0.033), and duration of catheterization (OR=1.093, 95% CI [1.035; 1.15], p=0.001). Conclusion: In a monocentric cohort, CVC-AI had a high incidence density and were associated with a poor outcome. Identifying the risk factors is necessary to find solutions for this major health problem.
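The incidence density reported above follows the standard device-associated formula, infections per 1000 device-days. A short sketch shows how the study's 28 infections and 20.02/1000 CVC-days figure fit together (the catheter-days denominator is back-calculated, not stated in the abstract):

```python
# Incidence density for device-associated infections: infections per
# 1000 device-days.
def incidence_density(n_infections: int, device_days: float) -> float:
    return 1000.0 * n_infections / device_days

# Back-calculate the (unstated) denominator implied by the reported
# density of 20.02/1000 CVC-days for 28 infections:
implied_days = 1000.0 * 28 / 20.02
print(round(implied_days))                            # ≈ 1399 catheter-days
print(round(incidence_density(28, implied_days), 2))  # 20.02
```

This per-1000-device-days normalization is what allows comparison across ICUs with different catheter utilization.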

Keywords: central venous catheter associated infection, intensive care unit, prospective cohort studies, risk factors

Procedia PDF Downloads 361
2508 Simplifying Writing Composition to Assist Students in Rural Areas: An Experimental Study for the Comparison of Guided and Unguided Instruction

Authors: Neha Toppo

Abstract:

Methods and strategies of teaching instruction highly influence students' learning. In second language teaching, a number of methods have been suggested by different scholars and researchers over time. The present article deals with the role of teaching instruction in developing students' ability in written composition. It focuses on secondary-level students in rural areas, whose exposure to the English language is limited and who face challenges even in simple compositions. Students up to high school struggle to write a formal letter, application, essay, paragraph, etc. They face problems in note making and in writing examination answers in their own words, and they depend fully on rote learning. It becomes difficult for them to give language to their own ideas. Teaching writing composition deserves special attention, as writing is an integral part of language learning, and students at this level are expected to have sound compositional ability, which is useful in numerous domains. An effective method of instruction could help students learn self-expression, correct selection of vocabulary and grammar, contextual writing, and the composition of formal and informal writing. This is not limited to school but continues to be important in various fields outside school, such as newspapers and magazines, official work, legislative work, material writing, academic writing, and personal writing. The study is based on the experimental method and hypothesizes that guided instruction will be more effective in teaching writing composition than the usual instruction, in which students are left to compose on their own without any help. In the test, students of one section are asked to write an essay on a given topic without guidance, and students of another section are asked to write the same but with the assistance of guided instruction, in which they are provided with a few vocabulary items and sentence structures.
This process is repeated in a few more schools to obtain generalizable data. The study shows the difference in students' performance under the two types of instruction, guided and unguided. It concludes with the finding that the writing skill of the students is quite poor, but with the help of guided instruction they perform better. The students need better teaching instruction to develop their writing skills.

Keywords: composition, essay, guided instruction, writing skill

Procedia PDF Downloads 279
2507 Evaluating the Performance of Passive Direct Methanol Fuel Cell under Varying Operating and Structural Conditions

Authors: Rahul Saraswat

Abstract:

More recently, a focus has been given to replacing machined stainless steel metal flow-fields with inexpensive wiremesh current collectors. The flow-fields are based on simple woven wiremesh screens of various stainless steels, which are sandwiched between thin metal plates of the same material to create a bipolar plate/flow-field configuration for use in a stack. Major advantages of using stainless steel wire screens include the elimination of expensive raw materials as well as machining and/or other special fabrication costs. The objective of the project is to improve the performance of the passive direct methanol fuel cell without increasing the cost of the cell, and to make it as compact and light as possible. From the literature survey, it was found that very little has been done in this direction, and the following methodology was used: 1) the passive DMFC can be made more compact, lighter, and less costly by changing the material used in its construction; 2) controlling the fuel diffusion rate through the cell improves the performance of the cell. A passive liquid-feed direct methanol fuel cell (DMFC) was fabricated using a given MEA (Membrane Electrode Assembly) and tested for different current collector structures. Mesh current collectors of different mesh densities, along with different support structures, were used, and the performance was found to be better. Methanol concentration was also varied. Optimization of mesh size, support structure, and fuel concentration was achieved. A cost analysis was also performed. From the performance analysis of the DMFC, we can conclude the following: the area-specific resistance (ASR) of wiremesh current collectors is lower than the ASR of stainless steel current collectors, and the power produced by wiremesh current collectors is always higher than that produced by stainless steel current collectors. Low or moderate methanol concentrations should be used for better and more stable DMFC performance.
Wiremesh is a good substitute for machined stainless steel in the current collector plates of a passive DMFC because of its lower cost (by about 27%), flexibility, and light weight.
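The two figures of merit used in the comparison above can be sketched briefly. The ASR definition is standard (cell resistance normalized by active area); the example costs are invented to illustrate the ~27% saving, since the abstract does not give absolute prices.

```python
# Two figures of merit from the comparison: area-specific resistance (ASR)
# and relative cost saving. Cost numbers below are assumed, for illustration.
def asr(resistance_ohm: float, area_cm2: float) -> float:
    """Area-specific resistance in ohm*cm^2 (resistance x active area)."""
    return resistance_ohm * area_cm2

def cost_saving(cost_steel: float, cost_mesh: float) -> float:
    """Fractional cost reduction of wiremesh vs. machined stainless steel."""
    return (cost_steel - cost_mesh) / cost_steel

# A hypothetical mesh collector costing 73 units against a machined plate
# costing 100 units reproduces the ~27% saving reported:
print(round(cost_saving(100.0, 73.0) * 100))  # 27
```

Lower ASR at the same active area means less ohmic loss, which is consistent with the higher power observed for the wiremesh collectors.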

Keywords: direct methanol fuel cell, membrane electrode assembly, mesh, mesh size, methanol concentration and support structure

Procedia PDF Downloads 68