Search results for: simple EEG
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3000

420 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study

Authors: Anna Ching-Yu Wong

Abstract:

Introduction: Textbooks account for a sizable portion of the overall cost of higher education for students. It is broadly agreed that open education resources (OER) reduce textbook costs and provide students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much. This study presents an approach for calculating OER savings by using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester – Fall 2022. The purpose of collecting these data is to understand how much could potentially be saved by using OER materials and to have a record for future studies. Literature Review: In past years, researchers have identified that the rising cost of textbooks disproportionately harms students in higher education institutions and have estimated the average cost of a textbook. For example, Nyamweya (2018) found, using a simple formula, that students save on average $116.94 per course when OER is adopted in place of traditional commercial textbooks. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded that $100 was a reasonable per-course savings estimate. Ruth (2018) calculated the average per-course cost of a textbook to be $79.37. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: Identify non-OER courses from the UcanWeb Class Schedule. Step two: View textbook lists for the classes (campus bookstore prices). 
Step three: Calculate the average textbook price by averaging the new book and used book prices. Step four: Multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings for adopting an OER course varies depending on the size of the college as well as the number of enrolled students.
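The four-step estimate above can be sketched in code. The course figures below are illustrative placeholders, not data from the SUNY Canton study:

```python
def course_savings(new_price, used_price, enrollment):
    """Step three: average the new and used book prices;
    step four: multiply by the number of students in the course."""
    avg_price = (new_price + used_price) / 2
    return avg_price * enrollment

# Hypothetical non-OER courses (campus bookstore prices in USD)
non_oer_courses = [
    {"new": 150.00, "used": 110.00, "students": 30},
    {"new": 200.00, "used": 140.00, "students": 25},
]

total = sum(course_savings(c["new"], c["used"], c["students"])
            for c in non_oer_courses)
print(f"Potential one-semester savings: ${total:,.2f}")
```

Summing this quantity over all 233 non-OER courses would reproduce the study's semester-level estimate.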

Keywords: textbook savings, open textbooks, textbook costs assessment, open access

Procedia PDF Downloads 44
419 Advanced Separation Process of Hazardous Plastics and Metals from End-Of-Life Vehicles Shredder Residue by Nanoparticle Froth Flotation

Authors: Srinivasa Reddy Mallampati, Min Hee Park, Soo Mim Cho, Sung Hyeon Yoon

Abstract:

One of the issues in End-of-Life Vehicles (ELVs) recycling promotion is technology for the appropriate treatment of automotive shredder residue (ASR). Owing to its high heterogeneity and variable composition (plastics (23–41%), rubber/elastomers (9–21%), metals (6–13%), glass (10–20%), dust (soil/sand), etc.), ASR can be classified as 'hazardous waste' on the basis of the presence of heavy metals (HMs), PCBs, BFRs, mineral oils, etc. Considering their relevant concentrations, these metals and plastics should be properly recovered for recycling purposes before ASR residues are disposed of. Brominated flame retardant additives in ABS/HIPS and PVC may generate dioxins and furans at elevated temperatures. Moreover, these BFR additives present in plastic materials may leach into the environment during landfilling operations. Thermal processing of ASR removes some of the organic material but concentrates the heavy metals and POPs present in the ASR residues. In the present study, Fe/Ca/CaO nanoparticle-assisted ozone treatment has been found to selectively hydrophilize the surfaces of ABS/HIPS and PVC plastics, enhancing their wettability and thereby promoting their separation from ASR plastics by means of froth flotation. The water contact angles of ABS, HIPS, and PVC in ASR decreased by about 18.7°, 18.3°, and 17.9°, respectively. Under froth flotation at 50 rpm, about 99.5% of the ABS and 99.5% of the HIPS in the ASR samples sank, resulting in purities of 98% and 99%, respectively. Furthermore, at 150 rpm, 100% of the PVC separated into the settled fraction with 98% purity. Total recovery of non-ABS/HIPS and PVC plastics reached nearly 100% in the floating fraction. This process improved the quality of recycled ASR plastics by removing surface contaminants or impurities. Further, a hybrid ball-milling and Fe/Ca/CaO nanoparticle froth flotation process was established for the recovery of HMs from ASR. 
After ball-milling with Fe/Ca/CaO nanoparticle additives, the flotation efficiency increased to about 55 wt%, and the HM recovery also increased to about 90% for the 0.25 mm size fractions of ASR. Coating with Fe/Ca/CaO nanoparticles combined with subsequent microbubble froth flotation allowed the air bubbles to attach firmly to the HMs. SEM–EDS maps showed that the amounts of HMs were significant on the surface of the floating ASR fraction. This result, along with the low HM concentration in the settled fraction, was confirmed by elemental spectra and semi-quantitative SEM–EDS analysis. The developed hybrid process for preferential separation of hazardous plastics and metals from ASR is a simple, highly efficient, and sustainable procedure.

Keywords: end of life vehicles shredder residue, hazardous plastics, nanoparticle froth flotation, separation process

Procedia PDF Downloads 255
418 Removal of Chromium by UF5kDa Membrane: Its Characterization, Optimization of Parameters, and Evaluation of Coefficients

Authors: Bharti Verma, Chandrajit Balomajumder

Abstract:

Water pollution has escalated owing to industrialization and the indiscriminate discharge of toxic heavy metal ions from the semiconductor, electroplating, metallurgical, mining, chemical manufacturing, and tannery industries, among others. The semiconductor industry uses various kinds of chemicals in wafer preparation. Fluoride, toxic solvents, heavy metals, dyes and salts, suspended solids, and chelating agents may be found in the wastewater effluent of the semiconductor manufacturing industry. In chrome plating within the electroplating industry, the effluent likewise contains large amounts of chromium. Since Cr(VI) is highly toxic, exposure to it poses an acute health risk, and chronic exposure can even lead to mutagenesis and carcinogenesis. On the contrary, Cr(III), which is naturally occurring, is much less toxic than Cr(VI). The discharge limits of hexavalent and trivalent chromium are 0.05 mg/L and 5 mg/L, respectively. There are numerous methods for heavy metal removal, such as adsorption, chemical precipitation, membrane filtration, ion exchange, and electrochemical methods. The present study focuses on the removal of chromium ions by a flat-sheet UF5kDa membrane. The ultrafiltration membrane process operates at pore sizes below those of microfiltration, so the separation achieved may be influenced by both sieving and the Donnan effect. Ultrafiltration is a promising method for the rejection of heavy metals like chromium, fluoride, cadmium, nickel, and arsenic from effluent water. The benefits of the ultrafiltration process are that the operation is quite simple, the removal efficiency is high compared to some other removal methods, and it is reliable. Polyamide membranes have been selected for the present study on the rejection of Cr(VI) from feed solution. 
The objective of the current work is to examine the rejection of Cr(VI) from aqueous feed solutions by flat-sheet UF5kDa membranes under different parameters such as pressure, feed concentration, and pH of the feed. The experiments revealed that the removal efficiency of Cr(VI) increases with increasing pressure. The effects of the pH of the feed solution and of the initial chromium dosage in the feed solution have also been studied. The membrane has been characterized by FTIR, SEM, and AFM before and after the runs. The mass transfer coefficients have been estimated, and the membrane transport parameters have been calculated and found to be in good correlation with the applied model.
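A standard figure of merit in such membrane studies is the observed rejection coefficient; a minimal sketch follows, with illustrative concentrations rather than values from this work:

```python
def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf (often reported as a percentage)."""
    return 1.0 - c_permeate / c_feed

# Hypothetical Cr(VI) concentrations in mg/L
r = observed_rejection(c_permeate=0.4, c_feed=10.0)
print(f"Cr(VI) rejection: {r:.1%}")
```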

Keywords: heavy metal removal, membrane process, waste water treatment, ultrafiltration

Procedia PDF Downloads 110
417 Knowledge, Attitude and Practices of Contraception among the Married Women of Reproductive Age Group in Selected Wards of Dharan Sub-Metropolitan City

Authors: Pratima Thapa

Abstract:

Background: It is critical to understand that awareness of family planning and proper utilization of contraceptives is an important indicator for reducing maternal and neonatal mortality and morbidity. It also plays an important role in promoting the reproductive health of women in an underdeveloped country like ours. Objective: To assess the knowledge, attitude, and practices of contraception among married women of reproductive age in selected wards of Dharan Sub-Metropolitan City. Materials and methods: A cross-sectional descriptive study was conducted among 209 married women of reproductive age. Simple random sampling was used to select the wards, population-proportionate sampling to determine the sample numbers from each ward, and purposive sampling to select each participant. A semi-structured questionnaire was used to collect data. Descriptive and inferential statistics were used to interpret the data, considering a p-value of 0.05. Results: The mean ± SD age of the respondents was 30.01 ± 8.12 years. The majority (92.3%) had heard of contraception. The most widely known method was injectable Depo (92.7%). Mass media (85.8%) was the major source of information. The mean percentage knowledge score was 45.23%; less than half (45%) had adequate knowledge. The majority (90.4%) had a positive attitude. Only 64.6% were currently using contraceptives. Misbeliefs and fear of side effects were the main reasons for not using contraceptives. Education, occupation, and total family income were associated with knowledge regarding contraceptives. Binary logistic regression showed significant correlates of attitude with distance to the nearest health facility (OR=7.97, p<0.01), education (OR=0.24, p<0.05), and age group (OR=0.03, p<0.01). 
Regarding practice, the likelihood of being a current contraceptive user increased significantly with being literate (OR=5.97, p<0.01), having a nuclear family (OR=4.96, p<0.01), living within a 30-minute walk of the nearest health facility (OR=3.34, p<0.05), women's participation in decision-making regarding household and fertility choices (OR=5.23, p<0.01), and the husband's support for using contraceptives (OR=9.05, p<0.01). Significant and positive correlations between knowledge–attitude, knowledge–practice, and attitude–practice were observed. Conclusion: The results of the study indicate a need for more awareness programs in order to strengthen the knowledge and practice of contraception. The positive correlations suggest that better knowledge can lead to a positive attitude and hence good practice. Further, projects aiming to provide better counselling about contraceptives, their side effects, and the positive effects that outweigh the negative aspects should be implemented appropriately.
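The odds ratios quoted above come from logistic regression; the underlying quantity can be illustrated with a simple 2×2 table (the counts below are hypothetical, not the study's data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    return (a * d) / (b * c)

# Hypothetical counts: current contraceptive use by literacy status
or_literacy = odds_ratio(a=80, b=20, c=40, d=60)
print(f"OR = {or_literacy:.2f}")
```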

Keywords: attitude, contraceptives, knowledge, practice

Procedia PDF Downloads 226
416 Awarding Copyright Protection to Artificial Intelligence Technology for its Original Works: The New Way Forward

Authors: Vibhuti Amarnath Madhu Agrawal

Abstract:

Artificial Intelligence (AI) and Intellectual Property are two emerging concepts that are growing at a fast pace and have the potential to make a huge impact on the economy in the coming years. In simple words, AI is work done by a machine without any human intervention. It is coded software embedded in a machine which, over a period of time, develops its own intelligence and begins to make its own decisions and judgments by studying patterns of how people think, react to situations, and perform tasks, among others. Intellectual Property, especially Copyright Law, on the other hand, protects the rights of individuals and companies in content creation, which primarily deals with the application of intellect, originality, and the expression of the same in some tangible form. According to some reports shared by the media lately, ChatGPT, an AI-powered chatbot, has been involved in the creation of a wide variety of original content, including but not limited to essays, emails, plays, and poetry. Besides, there have been instances wherein AI technology has given creative inputs for background, lighting, and costumes, among others, for films. Copyright Law offers protection to all of these different kinds of content and much more. Considering the two key parameters of Copyright – application of intellect and originality – the question therefore arises: will awarding Copyright protection for content in whose creation no person has directly invested his or her intellect go against the basic spirit of Copyright laws? This study aims to analyze the current scenario and provide answers to the following questions: a. If the content generated by AI technology satisfies the basic criteria of originality and expression in a tangible form, why should such content be denied protection in the name of its creator, i.e., the specific AI tool/technology? b. 
Considering the increasing role and development of AI technology in our lives, should it be given the status of a 'Legal Person' in law? c. If yes, what should be the modalities of awarding protection to the works of such a Legal Person and of managing the same? Considering current trends and the pace at which AI is advancing, the day is not far off when AI will start functioning autonomously in the creation of new works. Current data and opinions on this issue globally are divided and lack uniformity. In order to fill the existing gaps, data obtained from the Copyright offices of the world's top economies have been analyzed. The role and functioning of various Copyright Societies in these countries have been studied in detail. This paper provides a roadmap that can be adopted to satisfy the various objectives, constraints, and dynamic conditions related to AI technology and its protection under Copyright Law.

Keywords: artificial intelligence technology, copyright law, copyright societies, intellectual property

Procedia PDF Downloads 44
415 Finite Element Modelling of Mechanical Connector in Steel Helical Piles

Authors: Ramon Omar Rosales-Espinoza

Abstract:

Pile-to-pile mechanical connections are used if the depth of the soil layers with sufficient bearing strength exceeds the original (“leading”) pile length, with the additional pile segment being termed “extension” pile. Mechanical connectors permit a safe transmission of forces from leading to extension pile while meeting strength and serviceability requirements. Common types of connectors consist of an assembly of sleeve-type external couplers, bolts, pins, and other mechanical interlock devices that ensure the transmission of compressive, tensile, torsional and bending stresses between leading and extension pile segments. While welded connections allow for a relatively simple structural design, mechanical connections are advantageous over welded connections because they lead to shorter installation times and significant cost reductions since specialized workmanship and inspection activities are not required. However, common practices followed to design mechanical connectors neglect important aspects of the assembly response, such as stress concentration around pin/bolt holes, torsional stresses from the installation process, and interaction between the forces at the installation (torsion), service (compression/tension-bending), and removal stages (torsion). This translates into potentially unsatisfactory designs in terms of the ultimate and service limit states, exhibiting either reduced strength or excessive deformations. In this study, the experimental response under compressive forces of a type of mechanical connector is presented, in terms of strength, deformation and failure modes. The tests revealed that the type of connector used can safely transmit forces from pile to pile. Using the results from the compressive tests, an analysis model was developed using the finite element (FE) method to study the interaction of forces under installation and service stages of a typical mechanical connector. 
The response of the analysis model is used to identify potential areas for design optimization, including size, the gap between leading and extension piles, the number of pins/bolts, hole sizes, and material properties. The results show that the design of mechanical connectors should take into account the interaction of the forces present at every stage of their life cycle, and that the torsional stresses occurring during installation are critical for the safety of the assembly.

Keywords: piles, FEA, steel, mechanical connector

Procedia PDF Downloads 239
414 Development of Technologies for the Treatment of Nutritional Problems in Primary Care

Authors: Marta Fernández Batalla, José María Santamaría García, Maria Lourdes Jiménez Rodríguez, Roberto Barchino Plata, Adriana Cercas Duque, Enrique Monsalvo San Macario

Abstract:

Background: Primary care nursing is taking on more autonomy in clinical decisions. One of the most frequent therapeutic needs relates to problems in maintaining a sufficient supply of food. Nursing diagnoses related to feeding are addressed by family and community nurses as the first responsible professionals. Objectives and interventions are set according to each patient. To improve goal setting and the treatment of these care problems, a technological tool was developed to help nurses. Objective: To evaluate the computational tool developed to support clinical decisions on feeding problems. Material and methods: A cross-sectional descriptive study was carried out at the Meco Health Center, Madrid, Spain. The study population consisted of four specialist primary care nurses. These nurses tested the tool on 30 people with a 'need for nutritional therapy'. Subsequently, the usability of the tool and the satisfaction of the professionals were assessed. Results: A simple and convenient computational tool was designed. It has three main input fields: age, height, and sex. The tool returns the following information: BMI (Body Mass Index) and the calories consumed by the person. The next step is the caloric calculation depending on activity. It is possible to propose a target BMI or weight to achieve; from this, the amount of calories to be consumed is proposed. After using the tool, it was determined that the tool calculated BMI and calories correctly in 100% of clinical cases. Satisfaction with the nutritional assessment was 'satisfactory' or 'very satisfactory', linked to the speed of the calculations. As a point for improvement, 'stress factor' options linked to weekly physical activity were suggested. Conclusion: Based on the results, it is clear that computational decision-support tools are useful in the clinic. Nurses are not only consumers of computational tools, but can develop their own tools. 
These technological solutions improve the effectiveness of nutrition assessment and intervention. We are currently working on improvements such as the calculation of protein percentages as a function of stress parameters.
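The tool's core calculations can be sketched as follows. The abstract does not state which energy equation the tool implements, so the Mifflin–St Jeor equation is assumed here purely for illustration:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index = weight / height^2, in kg/m^2."""
    return weight_kg / height_m ** 2

def bmr_mifflin(weight_kg, height_cm, age_years, sex):
    """Mifflin-St Jeor resting energy expenditure in kcal/day
    (assumed equation; the paper's tool may use a different one)."""
    s = 5 if sex == "male" else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_years + s

b = bmi(70, 1.75)                              # ~22.9 kg/m^2
kcal = bmr_mifflin(70, 175, 30, "male") * 1.4  # 1.4 = moderate activity factor
```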

Keywords: feeding behavior health, nutrition therapy, primary care nursing, technology assessment

Procedia PDF Downloads 199
413 Epidemiology of Low Back Pain among Nurses Working in Public Hospitals of Addis Ababa, Ethiopia

Authors: Mengestie Mulugeta Belay, Serebe Abay Gebrie, Biruk Lambbiso Wamisho, Amare Worku

Abstract:

Background: Low back pain (LBP) related to the nursing profession is a very common public health problem throughout the world. Various risk factors have been implicated in its etiology, and LBP is assumed to be of multifactorial origin, as individual, work-related, and psychosocial factors can all contribute to its development. Objectives: To determine the prevalence and identify risk factors of LBP among nurses working in Addis Ababa City public hospitals, Ethiopia, in the year 2015. Settings: Addis Ababa University's Black-Lion ('Tikur Anbessa') Hospital (BLH) is the country's highest tertiary-level referral and teaching hospital. The three departments in connection with this study – Radiology, Pathology, and Orthopedics – run undergraduate and residency programs and receive referred patients from all over the country. Methods: A cross-sectional study with internal comparison was conducted throughout the period October–December 2015. The sample was chosen by a simple random sampling technique, taking the lists of nurses from human resource departments as the sampling frame. A well-structured, pre-tested, and self-administered questionnaire was used to collect quantifiable information. The questionnaire covered socio-demographic characteristics, back pain features, consequences of back pain, and work-related and psychosocial factors. The collected data were entered into EpiInfo version 3.5.4 and analyzed with SPSS. A probability level of 0.05 or less and a 95% confidence level were used to indicate statistical significance. Ethical clearance was obtained from all respective administrative bodies, hospitals, and study participants. Results: The study included 395 nurses, giving a response rate of 91.9%. The mean age was 30.6 (±8.4) years. The majority of the respondents were female (285, 72.2%). Nearly half of the participants (n=181, 45.8%; 95% CI: 40.8%–50.6%) complained of low back pain. 
There were statistically significant associations between low back pain and working shift, physical activities at work, sleep disturbance, and feeling little pleasure in doing things. Conclusion: A high prevalence of low back pain was found among nurses working in Addis Ababa public hospitals. Recognition and preventive measures, such as providing rest periods, should be adopted to reduce the risk of low back pain among nurses working in public hospitals.

Keywords: low back pain, risk factors, nurses, public hospitals

Procedia PDF Downloads 270
412 The Influence of Morphology and Interface Treatment on Organic 6,13-bis (triisopropylsilylethynyl)-Pentacene Field-Effect Transistors

Authors: Daniel Bülz, Franziska Lüttich, Sreetama Banerjee, Georgeta Salvan, Dietrich R. T. Zahn

Abstract:

For the development of electronics, organic semiconductors are of great interest due to their adjustable optical and electrical properties. They are especially interesting for spintronic applications because of their weak spin scattering, which leads to longer spin lifetimes compared to inorganic semiconductors. It has been shown that some organic materials change their resistance if an external magnetic field is applied. Pentacene is one of the materials which exhibit the so-called photoinduced magnetoresistance, which results in a modulation of the photocurrent when the external magnetic field is varied. The soluble derivative of pentacene, 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene), exhibits the same negative magnetoresistance. Aiming for simpler fabrication processes, in this work we compare TIPS-pentacene organic field-effect transistors (OFETs) made from solution with those fabricated by thermal evaporation. Because of the different processing, the TIPS-pentacene thin films exhibit different morphologies in terms of crystal size and homogeneity of the substrate coverage. On the other hand, the interface treatment is known to have a strong influence on the threshold voltage, eliminating trap states of the silicon oxide at the gate electrode and thereby changing the electrical switching response of the transistors. Therefore, we investigate the influence of interface treatment using octadecyltrichlorosilane (OTS) or a simple cleaning procedure with acetone, ethanol, and deionized water. The transistors consist of prestructured OFET substrates including gate, source, and drain electrodes, on top of which TIPS-pentacene dissolved in a mixture of tetralin and toluene is deposited by drop-, spray-, or spin-coating. Thereafter, the samples are kept for one hour at a temperature of 60 °C. 
For transistor fabrication by thermal evaporation, the prestructured OFET substrates are also kept at a temperature of 60 °C during deposition, at a rate of 0.3 nm/min and a pressure below 10⁻⁶ mbar. The OFETs are characterized by means of optical microscopy in order to determine the overall quality of the samples, i.e., crystal size and coverage of the channel region. The output and transfer characteristics are measured in the dark and under illumination provided by a white-light LED in the spectral range from 450 nm to 650 nm with a power density of (8±2) mW/cm².
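Transfer characteristics like those measured here are commonly analyzed in the saturation regime, where Id = (W/2L)·μ·Ci·(Vg − Vt)², so √Id is linear in Vg. A generic analysis sketch follows; all device numbers and data points below are illustrative, not measurements from this work:

```python
import numpy as np

# Illustrative device geometry and gate capacitance (assumed values)
W, L = 1e-3, 20e-6     # channel width and length in m
Ci = 1.15e-4           # gate dielectric capacitance per area in F/m^2

# Synthetic saturation-regime transfer curve (p-type device: Vg negative)
vg = np.array([-10.0, -15.0, -20.0, -25.0, -30.0])          # gate voltage, V
sqrt_id = np.array([2e-4, 7e-4, 1.2e-3, 1.7e-3, 2.2e-3])    # sqrt(drain current), A^0.5

# Linear fit of sqrt(Id) vs Vg gives mobility (slope) and threshold voltage
slope, intercept = np.polyfit(vg, sqrt_id, 1)
mu = 2 * L / (W * Ci) * slope ** 2   # field-effect mobility, m^2/(V*s)
vt = -intercept / slope              # threshold voltage, V
```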

Keywords: organic field effect transistors, solution processed, surface treatment, TIPS-pentacene

Procedia PDF Downloads 419
411 Study into the Interactions of Primary Limbal Epithelial Stem Cells and HTCEPI Using Tissue Engineered Cornea

Authors: Masoud Sakhinia, Sajjad Ahmad

Abstract:

Introduction: Though knowledge of the compositional makeup and structure of the limbal niche has progressed exponentially during the past decade, much is yet to be understood. Identifying the precise profile and role of the stromal makeup which spans the ocular surface may inform researchers of the optimal conditions needed to effectively expand LESCs in vitro while preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue-engineered in vitro using both limbal and corneal fibroblasts embedded within a tissue-engineered 3D collagen matrix. The effect of these two different fibroblast types on LESCs and the hTCEpi corneal epithelial cell line was subsequently determined using phase-contrast microscopy, histological analysis, and PCR for specific stem cell markers. The study aimed to develop an in vitro model which could be used to determine whether limbal, as opposed to corneal, fibroblasts maintained the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive and required further quantitative analysis before remarks could be made on cell proliferation within the varying stroma. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR results for epithelial cell markers of cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, while also exhibiting a relatively low expression level of p63, a marker for undifferentiated LESCs. 
Conclusion: We have shown the potential for the construction of a tissue-engineered human cornea using a 3D collagen matrix and described some preliminary results on the effects of varying stroma, consisting of limbal and corneal fibroblasts respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. As no definitive marker exists to conclusively demonstrate the presence of LESCs, the combination of positive and negative stem cell markers in our study was inconclusive. Though less translational to the human corneal model, the use of conditioned medium from limbal and corneal fibroblasts may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.

Keywords: cornea, Limbal Stem Cells, tissue engineering, PCR

Procedia PDF Downloads 251
410 Methodologies for Deriving Semantic Technical Information Using an Unstructured Patent Text Data

Authors: Jaehyung An, Sungjoo Lee

Abstract:

Patent documents constitute an up-to-date and reliable source of knowledge reflecting technological advances, so patent analysis has been widely used for the identification of technological trends and the formulation of technology strategies. However, identifying technological information from patent data entails limitations such as high cost, complexity, and inconsistency, because it relies on expert knowledge. To overcome these limitations, researchers have applied quantitative analysis based on keyword techniques. Using this method, one can capture the technological implications of patent documents or extract keywords that indicate their important content. However, it uses only simple counting of keyword frequency, so it cannot take into account semantic relationships between keywords or semantic information such as how the technologies are used in their technology area and how they affect other technologies. To automatically analyze unstructured technological information in patents and extract semantic information, the text should be transformed into an abstracted form that includes the technological key concepts. The specific sentence structure 'SAO' (subject, action, object) has recently emerged as a representation of such 'key concepts' and can be extracted by NLP (natural language processing). An SAO structure can be organized in a problem–solution format if the action–object (AO) pair states the problem and the subject (S) forms the solution. In this paper, we propose a new methodology that can extract SAO structures through technical-element extraction rules. Although sentence structures in patent text have a unique format, prior studies have depended on general NLP tools designed for common documents such as newspapers, research papers, and Twitter mentions, so they cannot take into account the specific sentence structures of patent documents. 
To overcome this limitation, we identified the unique form of patent sentences and defined the SAO structures in the patent text data. There are four types of technical elements: technology adoption purpose, application area, tool for technology, and technical components. These four types of sentence structures in patents each have their own specific word structure, determined by the location or sequence of the parts of speech in each sentence. Finally, we developed algorithms for extracting SAOs; this result offers insights into the technology innovation process by providing different perspectives on technology.
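As a toy illustration of the SAO idea (not the authors' patent-specific extraction rules, which rely on part-of-speech sequences), a minimal rule-based extractor for simple active-voice sentences might look like:

```python
import re

# Tiny verb list standing in for the action vocabulary a real
# patent-oriented extractor would derive from part-of-speech patterns.
SAO_PATTERN = re.compile(
    r"^(?P<s>[\w\s]+?)\s+(?P<a>comprises|includes|controls|reduces)\s+(?P<o>[\w\s]+)\.?$",
    re.IGNORECASE,
)

def extract_sao(sentence):
    """Return a (subject, action, object) triple, or None if no match."""
    m = SAO_PATTERN.match(sentence.strip())
    return (m.group("s"), m.group("a"), m.group("o")) if m else None

sao = extract_sao("The control unit reduces power consumption.")
```

A production system would replace the regular expression with a dependency parse so that passive voice, coordination, and nested clauses are handled.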

Keywords: NLP, patent analysis, SAO, semantic-analysis

Procedia PDF Downloads 241
409 Flexible, Hydrophobic and Mechanical Strong Poly(Vinylidene Fluoride): Carbon Nanotube Composite Films for Strain-Sensing Applications

Authors: Sudheer Kumar Gundati, Umasankar Patro

Abstract:

Carbon nanotube (CNT)–polymer composites have been extensively studied due to their exceptional electrical and mechanical properties. In the present study, poly(vinylidene fluoride) (PVDF)–multi-walled CNT composites were prepared by a melt-blending technique using pristine CNTs (ufCNTs) and dilute nitric acid-treated CNTs (fCNTs). Due to this dilute acid treatment, the fCNTs showed significantly improved dispersion while retaining their electrical properties. The fCNTs showed an electrical percolation threshold (PT) of 0.15 wt% in the PVDF matrix, as against 0.35 wt% for ufCNTs. The composites were made into films of thickness ~0.3 mm by compression molding, and the resulting composite films were subjected to various property evaluations. It was found that the water contact angle (WCA) of the films increased with the CNT weight content, and the composite film surface became hydrophobic (e.g., WCA ~104° for 4 wt% ufCNT and 111.5° for 0.5 wt% fCNT composites), while the neat PVDF film showed hydrophilic behavior (WCA ~68°). Significant enhancements in the mechanical properties were observed upon CNT incorporation, with a progressive increase in tensile strength and modulus with increasing CNT weight fraction. The composite films were then tested for strain-sensing applications. For this, a simple and non-destructive method was developed to demonstrate the strain-sensing properties of the composite films: the change in electrical resistance was measured with a digital multimeter while applying bending strain by oscillation. It was found that under dynamic bending strain there is a systematic change in resistance, and the films showed piezoresistive behavior. Due to the high flexibility of these composite films, the change in resistance was reversible and only marginally affected when a large number of tests were performed on a single specimen. 
It is interesting to note that composites with CNT contents near the percolation threshold (PT), irrespective of CNT type, showed better strain-sensing properties than composites with CNT contents well above the PT. On account of this excellent combination of properties, the composite films hold great promise as strain sensors for structural health monitoring.
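As a numerical companion to the percolation behaviour reported above: classical percolation theory describes composite conductivity by the power law sigma = sigma0 * (p - pc)^t above the threshold. The sketch below (Python/NumPy) fits the critical exponent t from synthetic conductivity data; only the 0.15 wt% threshold is taken from the abstract, and all conductivity values are hypothetical.

```python
import numpy as np

def percolation_sigma(p, sigma0, pc, t):
    """Classical percolation power law: sigma = sigma0 * (p - pc)**t above pc."""
    p = np.asarray(p, dtype=float)
    return np.where(p > pc, sigma0 * np.clip(p - pc, 0.0, None) ** t, 0.0)

# Hypothetical conductivities (S/m) at fCNT loadings (wt%) above the
# reported 0.15 wt% threshold; illustrative numbers, not measured data.
pc = 0.15
loadings = np.array([0.2, 0.3, 0.5, 1.0, 2.0])
sigma = percolation_sigma(loadings, sigma0=1e-2, pc=pc, t=2.0)

# Linearise log(sigma) = log(sigma0) + t*log(p - pc) and recover the
# critical exponent t by least squares, as is commonly done experimentally.
t_fit, log_sigma0 = np.polyfit(np.log(loadings - pc), np.log(sigma), 1)
```

In practice pc itself is also unknown and is usually scanned or fitted jointly with t; here it is fixed to keep the illustration minimal.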

Keywords: carbon nanotubes, electrical percolation threshold, mechanical properties, poly(vinylidene fluoride), strain-sensor, water contact angle

Procedia PDF Downloads 216
408 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) and would support better human interaction with smart technologies. An FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies human expressions into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such systems require intensive research to address issues with human diversity, unique individual expressions, and the variation of facial features with age. These issues generally limit the accuracy with which an FER system can detect human emotions. Early FER systems used simple supervised classification algorithms such as K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffered from low accuracy because they could not extract the significant features of many human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, such as convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively.
To keep resource requirements low, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to apply advanced optimization techniques to the Xception model and improve the inference speed of the FER system. In comparison with VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the benefits of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
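The two optimization steps named above, magnitude pruning and quantization, can be sketched independently of any specific framework. The snippet below is a minimal NumPy illustration (the layer size and the 50% sparsity target are assumptions, not values from the study): it zeroes the smallest-magnitude weights of a stand-in layer and stores the survivors as int8.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in for one layer's weights

# Magnitude pruning: zero out the 50% of weights smallest in absolute value.
sparsity = 0.5
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)
achieved_sparsity = float((W_pruned == 0).mean())

# Symmetric int8 quantization: store weights in 8 bits, dequantize on use.
scale = float(np.abs(W_pruned).max()) / 127.0
W_int8 = np.clip(np.round(W_pruned / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale
quant_error = float(np.abs(W_dequant - W_pruned).max())  # bounded by ~scale/2
```

Framework tooling (e.g., TensorFlow's Model Optimization Toolkit) wraps the same two ideas with training-aware schedules, but the arithmetic above is the core of both steps.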

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 78
407 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models

Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble

Abstract:

Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, while external metabolite fluxes are represented by a dynamic system with Michaelis-Menten or Hill-type kinetics. The productivity optimization is achieved via dynamic programming, and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
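To illustrate why a dynamic, multi-stage strategy can outperform a static one, the toy simulation below models a batch with a growth stage followed by a production stage and scans the switch time for maximum volumetric productivity. All kinetic parameters are invented, and the kinetics are far simpler than the dynamic flux balance formulation used in the study.

```python
import numpy as np

def batch_productivity(switch_time, t_end=20.0, dt=0.01, mu=0.5, q_p=1.0, x0=0.1):
    """Two-stage batch (toy kinetics): exponential growth at rate mu until
    switch_time, then all flux goes to product at specific rate q_p.
    Returns overall volumetric productivity: product formed per total time."""
    x, p = x0, 0.0
    for t in np.arange(0.0, t_end, dt):
        if t < switch_time:
            x += mu * x * dt   # growth stage: build catalyst (biomass)
        else:
            p += q_p * x * dt  # production stage: biomass makes product
    return p / t_end

# Scan the switch time: productivity peaks at an intermediate value,
# illustrating why a dynamic two-stage strategy can beat a static one.
switches = np.linspace(0.0, 20.0, 41)
prods = [batch_productivity(s) for s in switches]
best = float(switches[int(np.argmax(prods))])
```

Switching too early leaves too little biomass; switching too late leaves no time to produce. The full method replaces these toy rates with genome-scale flux balance solutions at each stage.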

Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate

Procedia PDF Downloads 189
406 Reconstructing the Segmental System of Proto-Graeco-Phrygian: A Bottom-Up Approach

Authors: Aljoša Šorgo

Abstract:

Recent scholarship on Phrygian has begun to examine more closely the long-held belief that Greek and Phrygian are two very closely related languages. It is now clear that Graeco-Phrygian can be firmly postulated as a subclade of the Indo-European languages. The present paper focuses on the reconstruction of the phonological and phonetic segments of Proto-Graeco-Phrygian (= PGPh.) by providing the relevant correspondence sets and reconstructing the classes of segments. The PGPh. basic vowel system consisted of ten phonemic oral vowels: */a e i o u ā ē ī ō ū/. The correspondences of the vowels are clear and leave little open to ambiguity. There were four resonants and two semi-vowels in PGPh.: */r l m n i̯ u̯/, which could appear in both a consonantal and a syllabic function, with the distribution between the two still being phonotactically predictable. Of note is the fact that the segments *m and *n seem to have merged when their phonotactic position would see them used in a syllabic function. Whether the segment resulting from this merger was a nasalized vowel (most likely *[ã]) or a syllabic nasal *[N̥] (underspecified for place of articulation) cannot be determined at this stage. There were three fricatives in PGPh.: */s h ç/. *s and *h are easily identifiable. The existence of *ç, which may seem unexpected, is postulated on the basis of the correspondence Gr. ὅς ~ Phr. yos/ιος. It is of note that Bozzone had previously proposed the existence of *ç ( < PIE *h₁i̯-) in an early stage of Greek even without taking Phrygian data into account. Finally, the system of stops in PGPh. distinguished four places of articulation (labial, dental, velar, and labiovelar) and three phonation types. The question of which three phonation types were actually present in PGPh. is of great importance for the ongoing debate on the realization of the three series in PIE. Since the matter is still very much in dispute, we ought to, at this stage, endeavour to reconstruct the PGPh.
system without recourse to the other IE languages. The three series of correspondences are: 1. Gr. T (= tenuis) ~ Phr. T; 2. Gr. D (= media) ~ Phr. T; 3. Gr. TA (= tenuis aspirata) ~ Phr. M. The first series must clearly be reconstructed as composed of voiceless stops. The second and third series are more problematic. With a bottom-up approach, neither the second nor the third series of correspondences is compatible with simple modal voicing, and the reflexes differ greatly in voice onset time. Rather, the defining feature distinguishing the two series was [±spread glottis], with ancillary vibration of the vocal cords. In PGPh. the second series was undergoing further spreading of the glottis. As the two languages split, this process continued but was affected by dissimilar changes in VOT, which was ultimately phonemicized in both languages as the defining feature distinguishing their series of stops.

Keywords: bottom-up reconstruction, Proto-Graeco-Phrygian, spread glottis, syllabic resonant

Procedia PDF Downloads 14
405 Plasma Arc Burner for Pulverized Coal Combustion

Authors: Gela Gelashvili, David Gelenidze, Sulkhan Nanobashvili, Irakli Nanobashvili, George Tavkhelidze, Tsiuri Sitchinava

Abstract:

Development of a new, highly efficient plasma arc combustion system for pulverized coal is presented. As is well known, coal is one of the main energy carriers by means of which electric and heat energy is produced in thermal power stations. The quality of extracted coal is decreasing rapidly; therefore, difficulties associated with its firing and complete combustion arise, and thermo-chemical preparation of the pulverized coal becomes necessary. Usually, other organic fuels (mazut fuel oil or natural gas) are added to low-quality coal for this purpose, with the fraction of additional organic fuel typically in the 35-40% range. This dramatically decreases the economic efficiency of such systems, while emissions of noxious substances into the environment increase. Because of all this, plasma combustion systems for pulverized coal are being intensively developed worldwide. These systems are equipped with non-transferred plasma arc torches. They allow practically complete combustion of pulverized coal (without organic additives) in boilers and increase energy and cost efficiency, while emissions of noxious substances decrease dramatically. However, non-transferred plasma torches have numerous drawbacks, e.g., complicated construction, low service life (especially at high power), instability of the plasma arc and, most importantly, up to 30% energy loss due to anode cooling. For these reasons, new plasma technologies free of these shortcomings are being intensively developed. In our proposed system, the pulverized coal-air mixture passes through the plasma arc area, where the arc burns between two carbon electrodes directly in the pulverized coal muffle burner.
Consumption of the carbon electrodes is low, and no cooling system is needed. The main advantage of this method, however, is that the radiation of the plasma arc impacts the coal-air mixture directly, accelerating the thermo-chemical preparation of the coal for burning. To ensure the stability of the plasma arc in such difficult conditions, we have developed a power source that holds the current fixed: fluctuations in arc resistance are automatically compensated by changes in voltage, and the plasma arc length can be regulated over a wide range. Our combustion system, in which the plasma arc acts directly on the pulverized coal-air mixture, is simple. It should allow a significant improvement in the combustion of pulverized coal (especially low-quality coal) and in its economic efficiency. Preliminary experiments demonstrated the successful functioning of the system.
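The constant-current behaviour of the power source can be illustrated in a few lines. In this idealized sketch the supply voltage tracks the fluctuating arc resistance so that the delivered current stays at the set-point; the 200 A set-point and the resistance values are hypothetical, not figures from the study.

```python
import numpy as np

I_SET = 200.0  # target arc current in amperes (illustrative, not from the study)

def compensating_voltage(arc_resistance):
    """Idealized constant-current supply: output voltage tracks the arc
    resistance so that the delivered current V/R stays at I_SET."""
    return I_SET * np.asarray(arc_resistance, dtype=float)

# Simulated fluctuating arc resistance (ohms): nominal 0.5 ohm plus noise.
rng = np.random.default_rng(1)
R = 0.5 + 0.1 * rng.standard_normal(100)
V = compensating_voltage(R)
I = V / R  # delivered current at each instant stays at the set-point
```

A real supply achieves this with closed-loop feedback rather than direct knowledge of R, but the steady-state relationship is the one shown.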

Keywords: coal combustion, plasma arc, plasma torches, pulverized coal

Procedia PDF Downloads 138
404 Investigations into the in situ Enterococcus faecalis Biofilm Removal Efficacies of Passive and Active Sodium Hypochlorite Irrigant Delivered into the Lateral Canal of a Simulated Root Canal Model

Authors: Saifalarab A. Mohmmed, Morgana E. Vianna, Jonathan C. Knowles

Abstract:

The issue of apical periodontitis has received considerable critical attention. Bacteria integrate into communities, attach to surfaces and consequently form biofilm. The biofilm structure provides bacteria with a range of protective mechanisms against antimicrobial agents and enhances pathogenicity (e.g. apical periodontitis). Sodium hypochlorite (NaOCl) has become the irrigant of choice for eliminating bacteria from the root canal system on account of its antimicrobial properties. The aim of the study was to investigate the effect of different agitation techniques on the efficacy of 2.5% NaOCl in eliminating biofilm from the surface of the lateral canal, using residual biofilm and biofilm removal rate as outcome measures. The effect of canal complexity (lateral canal) on the efficacy of the irrigation procedure was also assessed. Forty root canal models (n = 10 per group) were manufactured using 3D printing and resin materials. Each model consisted of two halves of an 18 mm long root canal with apical size 30 and taper 0.06, and a lateral canal 3 mm long and 0.3 mm in diameter, located 3 mm from the apical terminus. E. faecalis biofilms were grown on the apical 3 mm and the lateral canal of the models for 10 days in Brain Heart Infusion broth. Biofilms were stained using crystal violet for visualisation. The model halves were reassembled, attached to an apparatus and tested under a fluorescence microscope. A syringe-and-needle irrigation protocol was performed using 9 mL of 2.5% NaOCl irrigant for 60 seconds. The irrigant was either left stagnant in the canal or activated for 30 seconds using manual (gutta-percha), sonic or ultrasonic methods. Images were then captured every second using an external camera. The percentages of residual biofilm were measured using image analysis software. The data were analysed using generalised linear mixed models.
The greatest removal was associated with the ultrasonic group (66.76%), followed by the sonic (45.49%), manual (43.97%) and passive irrigation (control, 38.67%) groups. No marked reduction in the efficiency of NaOCl at removing biofilm was found between the simple and the complex anatomy models (p = 0.098). The removal efficacy of NaOCl on the biofilm was limited to the first 1 mm of the lateral canal. Agitation of NaOCl results in better penetration of the irrigant into the lateral canals, and ultrasonic agitation improved the removal of bacterial biofilm.
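The residual-biofilm measurement step reduces to a thresholding computation over the captured frames. The snippet below is a toy stand-in for the image analysis software used in the study; the frames and the staining threshold are invented.

```python
import numpy as np

def residual_biofilm_percent(before, after, stain_threshold=128):
    """Residual biofilm from two grayscale frames of the crystal-violet-stained
    canal wall: pixels darker than the threshold count as stained biofilm.
    Returns stained-area(after) / stained-area(before) * 100."""
    stained_before = int((np.asarray(before) < stain_threshold).sum())
    stained_after = int((np.asarray(after) < stain_threshold).sum())
    return 100.0 * stained_after / max(stained_before, 1)

# Synthetic 10 x 10 frames: all 100 pixels stained initially, 40 remain.
before = np.zeros((10, 10), dtype=np.uint8)     # dark everywhere (stained)
after = np.full((10, 10), 255, dtype=np.uint8)  # bright (cleared)
after[:4, :] = 0                                # 40 pixels still stained
residual = residual_biofilm_percent(before, after)  # 40.0 for this toy frame
```

Applied to every frame of the one-per-second capture sequence, the same computation yields the removal-rate curve for each agitation group.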

Keywords: 3D printing, biofilm, root canal irrigation, sodium hypochlorite

Procedia PDF Downloads 205
403 Commercial Winding for Superconducting Cables and Magnets

Authors: Glenn Auld Knierim

Abstract:

Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today’s HTS materials are mature and commercially promising but require manufacturing attention. In particular, given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to managing the stress that can crack the fragile ceramic superconductor (SC) layer and destroy its SC properties. Damage potential is highest during peak operation, where winding stress magnifies operational stress. Another challenge is that operational parameters such as magnetic field alignment affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which minimizes reactance as power utilities require. A fully transposed SC cable, in theory, has no transmission length limits for AC and variable transient operation, owing to zero resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
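Full transposition itself is easy to illustrate in code. In the toy schedule below (strand and segment counts are arbitrary), strand positions are rotated cyclically along the cable so that each strand occupies every radial position equally often, which is the property that minimizes net reactance.

```python
def transposition_schedule(n_strands, n_segments):
    """Toy full-transposition layout: at each segment along the cable the
    strand occupying position j is (j + segment) mod n_strands, so every
    strand cycles through every position over one transposition pitch."""
    return [[(pos + seg) % n_strands for pos in range(n_strands)]
            for seg in range(n_segments)]

layout = transposition_schedule(n_strands=4, n_segments=4)
# Positions visited by each strand over one full pitch: all of them, once.
visits = {s: [segment.index(s) for segment in layout] for s in range(4)}
```

A real winding plan must also respect bend radii and the strain limits of the ceramic SC layer; this sketch captures only the positional bookkeeping.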

Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable

Procedia PDF Downloads 116
402 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model

Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho

Abstract:

Rainfall is a critical component of climate, governing vegetation growth and production and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northernmost section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and the spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with established Gaussian process, Kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models on the Narok data set.
The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
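A stripped-down version of the spatial prediction task can be sketched with conjugate Bayesian linear regression on the named predictors. The snippet below uses entirely synthetic data and a simple fixed-noise prior, far simpler than the study's hierarchical model, but it shows how a posterior mean and a predictive standard error for a new grid cell are obtained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized predictors for 200 gauge-months: intercept,
# elevation, distance to Lake Victoria, minimum temperature (synthetic).
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
true_beta = np.array([80.0, 15.0, -10.0, 5.0])  # mm of monthly rainfall
y = X @ true_beta + rng.normal(scale=20.0, size=n)

# Conjugate Bayesian linear regression: N(0, tau^2 I) prior on beta,
# known noise sd sigma; the posterior mean is a ridge-like estimator.
sigma, tau = 20.0, 100.0
A = X.T @ X / sigma**2 + np.eye(4) / tau**2
post_mean = np.linalg.solve(A, X.T @ y / sigma**2)
post_cov = np.linalg.inv(A)

# Prediction with uncertainty for a new grid cell.
x_new = np.array([1.0, 0.5, -1.0, 0.2])
pred = float(x_new @ post_mean)
pred_sd = float(np.sqrt(x_new @ post_cov @ x_new + sigma**2))
```

The hierarchical model of the study additionally shares strength across gauges and months and uses a non-stationary spatial covariance; the prediction-with-uncertainty pattern, however, is the same.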

Keywords: non-stationary covariance function, Gaussian process, ungulate biomass, MCMC, Maasai Mara ecosystem

Procedia PDF Downloads 257
401 The Phenomenology in the Music of Debussy through Inspiration of Western and Oriental Culture

Authors: Yu-Shun Elisa Pong

Abstract:

Music aesthetics related to phenomenology is rarely discussed and still in the ascendant, even though multi-dimensional discourses of philosophy emerged as an important trend in the 20th century. In the present study, a basic theory of phenomenology from Edmund Husserl (1859-1938) is presented and discussed, followed by an introduction to the concepts of intentionality, eidetic reduction, horizon, world, and inter-subjectivity. Further, the phenomenology of music and of art in general is brought into focus through Roman Ingarden’s The Work of Music and the Problem of Its Identity (1933) and Mikel Dufrenne’s The Phenomenology of Aesthetic Experience (1953). Finally, Debussy’s music is analyzed and discussed from the perspective of phenomenology. Phenomenology is not so much a methodology or an analytic as a common conviction: to describe, in as much detail as possible, the varieties of human experience relative to the intended object. Such an idea had been practiced in various guises for centuries, but only in the early 20th century was phenomenology refined through the works of Husserl, Heidegger, Sartre, Merleau-Ponty and others. Debussy was born in an age when Western society began to accept multicultural baptism. With his unusual sensitivity to oriental culture, Debussy presented considerable inspiration, absorption, and echo of it in his works. Indeed, his relationship with nature is not far from echoing that between the ancient Chinese literati and nature. Although he was not the first composer to associate music with humanity and nature, the unique quality and impact of his works made him a significant figure in music aesthetics. Debussy’s music sought to develop a quality analogous to nature and, more importantly, to build on vivid life experience and artistic transformation to achieve the realm of pure art.
The idea that life experience, whether clear or vague, simple or complex, comes before the artwork, and that it was later presented abstractly in his late works, remains an interesting subject worth further discussion. Debussy’s music has existed for close to a century or more and has received musicologists’ attention as much as other important works in the history of Western music. Among the pluralistic discussions of Debussy’s art and ideas, phenomenological aesthetics has opened new ideas and viewing angles from which to revisit his great works, and has even lent some previous arguments legitimacy. Overall, this article provides new insight into Debussy’s music through phenomenological exploration, and it is believed that phenomenology will be an important pathway in research on music aesthetics.

Keywords: Debussy's music, music esthetics, oriental culture, phenomenology

Procedia PDF Downloads 235
400 Assessment of Bisphenol A and 17 α-Ethinyl Estradiol Bioavailability in Soils Treated with Biosolids

Authors: I. Ahumada, L. Ascar, C. Pedraza, J. Montecino

Abstract:

It has been found that the addition of biosolids to soil is beneficial to soil health, enriching the soil with essential nutrient elements. Although this sludge has properties that improve the physical features and productivity of agricultural and forest soils and aid the recovery of degraded soils, it also contains trace elements, trace organic compounds and pathogens that can damage the environment. Application of these biosolids to land without full treatment, together with the use of treated wastewater, can transfer these compounds into terrestrial and aquatic environments, giving rise to potential accumulation in plants. The general aim of this study was to evaluate the bioavailability of bisphenol A (BPA) and 17 α-ethinyl estradiol (EE2) in a soil-biosolid system using wheat (Triticum aestivum) plant assays, and to determine whether a predictive extraction method using a solution of hydroxypropyl-β-cyclodextrin (HPCD) is a reliable surrogate for this bioassay. Two soils were obtained from the central region of Chile (Lo Prado and Chicauma). Biosolids were obtained from a regional wastewater treatment plant. The soils were amended with biosolids at 90 Mg ha-1. Soils treated with biosolids and spiked with 10 mg kg-1 of EE2, and with 15 mg kg-1 and 30 mg kg-1 of BPA, were also included. The BPA and EE2 concentrations were determined in biosolids, soils and plant samples through ultrasound-assisted extraction, solid phase extraction (SPE) and gas chromatography coupled to mass spectrometry (GC/MS). The bioavailable fraction found in each of the soils cultivated with wheat plants was compared with results obtained through the cyclodextrin biosimulator method. The total concentrations found in biosolids from a treatment plant were 0.150 ± 0.064 mg kg-1 of EE2 and 12.8 ± 2.9 mg kg-1 of BPA. BPA and EE2 bioavailability is affected by the organic matter content and the physical and chemical properties of the soil.
The bioavailability response of both compounds in the two soils varied with the EE2 and BPA concentration. In the case of EE2, wheat plants contained higher concentrations in the roots than in the shoots, and the concentration of EE2 increased with increasing biosolid rate. For BPA, on the other hand, a higher concentration was found in the shoots than in the roots. The predictive capability of the HPCD extraction was assessed using a simple linear correlation test for both compounds in wheat plants. The correlation coefficient between the EE2 obtained from the HPCD extraction and that obtained from the wheat plants was r = 0.99 (p-value ≤ 0.05). In the case of BPA, by contrast, no correlation was found. Therefore, the methodology was validated with respect to wheat plant bioassays only in the EE2 case. Acknowledgments: The authors thank FONDECYT 1150502.
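The validation step, correlating HPCD-extracted concentrations with plant-measured ones, reduces to a Pearson correlation. A minimal sketch, where the concentration values are illustrative rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical EE2 concentrations (mg kg-1): HPCD extraction vs wheat plants.
hpcd = [0.8, 1.5, 2.9, 4.1, 5.2]
plant = [0.7, 1.6, 3.0, 4.0, 5.3]
r = pearson_r(hpcd, plant)  # close to 1 when HPCD tracks plant uptake
```

An r near 1 with a significant p-value, as reported for EE2, supports the surrogate; the absence of correlation, as for BPA, argues against it.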

Keywords: emerging compounds, bioavailability, biosolids, endocrine disruptors

Procedia PDF Downloads 114
399 Cancer Survivors’ Adherence to Healthy Lifestyle Behaviours: Meeting the World Cancer Research Fund/American Institute for Cancer Research Recommendations, a Systematic Review and Meta-Analysis

Authors: Daniel Nigusse Tollosa, Erica James, Alexis Hurre, Meredith Tavener

Abstract:

Introduction: Lifestyle behaviours such as a healthy diet, regular physical activity and maintaining a healthy weight are essential for cancer survivors to improve quality of life and longevity. However, no study has synthesized cancer survivors’ adherence to healthy lifestyle recommendations. The purpose of this review was to collate existing data on the prevalence of adherence to healthy behaviours and produce pooled estimates among adult cancer survivors. Method: Multiple databases (Embase, Medline, Scopus, Web of Science and Google Scholar) were searched for relevant articles published since 2007 reporting cancer survivors’ adherence to more than two lifestyle behaviours based on the WCRF/AICR recommendations. The pooled prevalence of adherence to single and multiple behaviours (operationalized as adherence to more than 75% (3/4) of the health behaviours included in a particular study) was calculated using a random effects model. Subgroup analysis of adherence to multiple behaviours was undertaken according to mean survival years and year of publication. Results: A total of 3322 articles were retrieved through our search strategies. Of these, 51 studies matched our inclusion criteria, presenting data from 2,620,586 adult cancer survivors. The highest prevalence of adherence was observed for smoking (pooled estimate: 87%, 95% CI: 85%, 88%) and alcohol intake (pooled estimate: 83%, 95% CI: 81%, 86%), and the lowest was for fiber intake (pooled estimate: 31%, 95% CI: 21%, 40%). Thirteen studies reported the proportion of cancer survivors adhering to multiple healthy behaviours (all used a simple summative index method), with the prevalence of adherence ranging from 7% to 40% (pooled estimate: 23%, 95% CI: 17% to 30%).
Subgroup analysis suggests that short-term survivors ( < 5 years survival time) had relatively better adherence to multiple behaviours (pooled estimate: 31%, 95% CI: 27%, 35%) than long-term ( > 5 years survival time) cancer survivors (pooled estimate: 25%, 95% CI: 14%, 36%). Pooling estimates according to year of publication (since 2007) also suggests an increasing trend in adherence to multiple behaviours over time. Conclusion: Overall, adherence to multiple lifestyle behaviours was poor, and it is a greater concern for long-term than for short-term cancer survivors. Cancer survivors need to comply with the healthy lifestyle recommendations related to physical activity, fruit and vegetable, fiber, red/processed meat and sodium intake.
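The random-effects pooling used above can be sketched with the DerSimonian-Laird estimator. The snippet below is a minimal illustration on hypothetical study proportions; real prevalence meta-analyses usually transform proportions (e.g., logit or double-arcsine) before pooling.

```python
import numpy as np

def pooled_prevalence(p, n):
    """Random-effects (DerSimonian-Laird) pooled prevalence from per-study
    proportions p and sample sizes n. Returns (pooled, 95% CI)."""
    p, n = np.asarray(p, float), np.asarray(n, float)
    v = p * (1 - p) / n                      # within-study variance
    w = 1 / v                                # fixed-effect weights
    p_fe = (w * p).sum() / w.sum()
    Q = (w * (p - p_fe) ** 2).sum()          # Cochran's heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (Q - (len(p) - 1)) / c)  # between-study variance
    w_re = 1 / (v + tau2)                    # random-effects weights
    pooled = (w_re * p).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical adherence proportions and sample sizes from five studies.
pooled, ci = pooled_prevalence([0.07, 0.15, 0.22, 0.30, 0.40],
                               [150, 300, 250, 500, 120])
```

When between-study heterogeneity (tau2) is large, as is typical for adherence studies, the random-effects interval is appropriately wider than a fixed-effect one.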

Keywords: adherence, lifestyle behaviours, cancer survivors, WCRF/AICR

Procedia PDF Downloads 157
398 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques, including deep neural networks (DNN), have been used to model language and dialogue for conversational agents that perform tasks such as giving technical support, as well as for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby a human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some additional information, e.g. the genre of the movie, and learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that a human movie titler may deploy but that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’, the original, true title is ‘The Driver’ and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted in which the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’ and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means m=3.11 and m=3.12, respectively.
There is room for improvement in these models, as they were rated significantly less ‘natural’ and ‘suitable’ than the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results are nonetheless encouraging given the comparison with a well-crafted, carefully vetted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups who have watched the movie; this process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audience’s attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
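The human evaluation described above aggregates per-subject Likert ratings and pairwise preferences. A minimal sketch of that aggregation, using invented ratings rather than the study's data (the study itself reports means of 3.11 vs 3.12 and a 58% overall preference for the human title):

```python
import numpy as np

# Hypothetical per-subject 5-point Likert ratings on one scale
# ('creativity'): each row is (DNN title, human title). Invented numbers.
ratings = np.array([[3, 4], [4, 3], [3, 3], [2, 4], [4, 3],
                    [3, 2], [3, 4], [4, 4], [2, 3], [3, 3]])

dnn_mean, human_mean = ratings.mean(axis=0)

# Preference rate: fraction of subjects scoring the human title strictly higher.
human_preferred = float((ratings[:, 1] > ratings[:, 0]).mean())
```

In the study a significance test over such paired ratings is what justifies claims like "significantly less natural"; the sketch shows only the descriptive statistics.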

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 301
397 Bartlett Factor Scores in Multiple Linear Regression Equation as a Tool for Estimating Economic Traits in Broilers

Authors: Oluwatosin M. A. Jesuyon

Abstract:

To eliminate the long-standing problems associated with the traditional index method for selecting multiple traits in broilers, the Bartlett factor regression equation is proposed as a simpler alternative selection tool. 100 day-old chicks each of the Arbor Acres (AA) and Annak (AN) broiler strains were obtained from two rival hatcheries in Ibadan, Nigeria. These were raised in a deep-litter system in a 56-day feeding trial at the University of Ibadan Teaching and Research Farm, located in South-west tropical Nigeria. Body weight and body dimensions were measured and recorded during the trial period. Eight (8) zoometric measurements, namely live weight (g), abdominal circumference, abdominal length, breast width, leg length, height, wing length and thigh circumference (all in cm), were recorded randomly from 20 birds within each strain, at a fixed time on the first day of each new week, with a 5-kg capacity Camry scale. These records were analyzed and compared using a completely randomized design (CRD) in the SPSS analytical software, with the means procedure and Factor Scores (FS) in a stepwise Multiple Linear Regression (MLR) procedure for the initial live-weight equations. Bartlett Factor Score (BFS) analysis extracted 2 factors for each strain, termed Body-length and Thigh-meatiness Factors for AA, and Breast-size and Height Factors for AN. These derived orthogonal factors assisted in deducing and comparing the trait combinations that best describe body conformation and meatiness in the experimental broilers. The BFS procedure yielded different body conformation traits for the two strains, indicating the different economic traits and advantages of the strains. These factors could be useful as selection criteria for improving desired economic traits. 
The final Bartlett factor regression equations for prediction of body weight were highly significant, with P < 0.0001, R2 of 0.92 and above, VIF of 1.00, and DW of 1.90 and 1.47 for Arbor Acres and Annak, respectively. These equations could be used as a simple and potent tool for selection during poultry flock improvement; they could also be used to estimate the selection index of flocks in order to discriminate between strains, and to evaluate consumer-preference traits in broilers.
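The factor-scores-then-regression pipeline described above can be sketched as follows. This uses a principal-factor simplification in place of SPSS's exact Bartlett scoring, and random synthetic data stand in for the real zoometric records; dimensions (20 birds, 8 measurements, 2 factors) follow the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 8 zoometric measurements of 20 birds.
X = rng.normal(size=(20, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=20)  # body weight

# Standardize, then extract 2 factors from the correlation matrix
# (a principal-factor simplification of Bartlett factor scoring).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
loadings = eigvec[:, -2:]       # top-2 eigenvectors as factor loadings
scores = Z @ loadings           # 2 factor scores per bird

# Regress body weight on the factor scores (with intercept).
A = np.column_stack([np.ones(len(scores)), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The fitted coefficients play the role of the final prediction equation; with real data, the reported R2 of 0.92 and above would correspond to `r2` here.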

Keywords: alternative selection tool, Bartlett factor regression model, consumer preference trait, linear and body measurements, live body weight

Procedia PDF Downloads 177
396 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution

Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques

Abstract:

The creation of vector contour slices (ROIs) on the body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. Radiotherapy planning is performed with complex software packages focused on the analysis of tumor regions, the protection of organs at risk (OARs) and the calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally lack the financial means to acquire radiotherapy planning workstations, resulting in waiting queues for the treatment of new patients. The SIPRAD project is composed of a set of integrated, interoperable software modules able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present a computationally feasible image processing technique able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique is applied to grayscale tomography slices and extended to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. In addition, when comparing SIPRAD-Body with existing solutions, a contour similarity of at least 78% is reached. Four criteria are used for this comparison: contour area, contour length, difference between the mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP). 
The exams came from patients with different ethnicities, ages, tumor severities and body regions. Even in services that already have workstations, SIPRAD can work alongside them on PCs, since the two systems communicate through the DICOM protocol, which improves workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible, given its degree of similarity, both for new radiotherapy planning services and for existing ones.
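Of the four comparison criteria named above, the Jaccard index is the most directly computable; a minimal sketch on binary contour masks follows (the example masks are illustrative rectangles, not actual patient contours).

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks are identical by convention
    return np.logical_and(a, b).sum() / union

auto = np.zeros((64, 64), dtype=np.uint8)
auto[16:48, 16:48] = 1     # automatically delineated contour (stand-in)
manual = np.zeros((64, 64), dtype=np.uint8)
manual[20:48, 16:48] = 1   # reference manual contour (stand-in)

similarity = jaccard_index(auto, manual)  # -> 0.875
```

A similarity of at least 0.78 under this metric corresponds to the 78% agreement the abstract reports against existing solutions.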

Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)

Procedia PDF Downloads 269
395 Finite Element Molecular Modeling: A Structural Method for Large Deformations

Authors: A. Rezaei, M. Huisman, W. Van Paepegem

Abstract:

Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets, among other applications. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics. For ease of reference, the approach is here called Structural Finite Element Molecular Modeling (SFEMM). The SFEMM method improves on the available structural approaches for large deformations, without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, the method uses nonlinear multipoint constraints to simulate the kinematics of atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond-stretch, bond-angle and bond-torsion behavior of carbon atoms. 
Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds a framework that offers more flexibility than conventional molecular finite element models, supporting structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with the finite element technique, and thereby towards concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
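The idea that bond potentials enter as constitutive material laws, vanishing at the equilibrium geometry, can be illustrated with simple harmonic terms. The force constants and equilibrium values below are placeholders for illustration, not the parameters used in the paper.

```python
import numpy as np

# Illustrative harmonic bond potentials of the kind a truss-based
# molecular FE model encodes in its material law (placeholder constants).
K_R = 652.0                   # bond-stretch force constant
K_THETA = 8.76                # bond-angle force constant
R0 = 1.421                    # equilibrium C-C bond length in graphene (Angstrom)
THETA0 = np.deg2rad(120.0)    # equilibrium bond angle in graphene

def bond_stretch_energy(r):
    """Harmonic bond-stretch energy, zero at the equilibrium length."""
    return 0.5 * K_R * (r - R0) ** 2

def bond_angle_energy(theta):
    """Harmonic bond-angle energy, zero at the equilibrium angle."""
    return 0.5 * K_THETA * (theta - THETA0) ** 2

# Both terms vanish at the equilibrium geometry, so the model carries
# no initial strains or stresses, as the text notes.
e0 = bond_stretch_energy(R0) + bond_angle_energy(THETA0)
e_stretched = bond_stretch_energy(R0 + 0.01)  # small stretch costs energy
```

In the actual SFEMM implementation these potentials would be expressed through truss-element material behavior and Fortran subroutines rather than Python functions; the energy landscape is the same in either case.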

Keywords: finite element, large deformation, molecular mechanics, structural method

Procedia PDF Downloads 128
394 Listening to Circles, Playing Lights: A Study of Cross-Modal Perception in Music

Authors: Roni Granot, Erica Polini

Abstract:

Music is often described in terms of non-auditory adjectives, such as a rising melody, a bright sound, or a zigzagged contour. Such cross-modal associations have been studied with simple isolated musical parameters, but only rarely in rich musical contexts. The current study probes cross-sensory associations with polarity-based dimensions by means of pairings of 10 adjectives: blunt-sharp, relaxed-tense, heavy-light, low (in space)-high, low (pitch)-high, big-small, hard-soft, active-passive, bright-dark, sad-happy. 30 participants (randomly assigned to one of two groups) were asked to rate each of 27 short saxophone improvisations on a 1-to-6 scale, where 1 and 6 correspond to the opposite poles of each dimension. The 27 improvisations included three exemplars for each of three dimensions (size, brightness, sharpness), played by three different players. Here we focus on whether improvisations were consistently rated as intended on the scale corresponding to their musical dimension (e.g. music improvised to represent a white circle rated as bright, in contrast with music improvised to represent a dark circle rated as dark). Overall, the average scores by dimension showed an upward trend on the equivalent verbal scale, with low ratings for small, bright and sharp musical improvisations and higher scores for large, dark and blunt improvisations. Friedman tests indicate a statistically significant difference for the brightness (χ2(2) = 19.704, p < .001) and sharpness (χ2(2) = 15.750, p < .001) dimensions, but not for size (χ2(2) = 1.444, p = .486). 
Post hoc analysis with Wilcoxon signed-rank tests within the brightness dimension showed significant differences among all possible pairings: between ‘bright’ and ‘dark’ (Z = -3.310, p = .001), between ‘bright’ and ‘medium’ (Z = -2.438, p = .015) and between ‘dark’ and ‘medium’ music (Z = -2.714, p = .007). Within the sharpness dimension, only the extreme contrasts differed: ‘sharp’ versus ‘blunt’ music (Z = -3.147, p = .002) and ‘sharp’ versus ‘medium’ music rated on the sharpness scale (Z = -3.054, p = .002), but not ‘medium’ versus ‘blunt’ music (Z = -.982, p = .326). In summary, our study suggests a privileged link between music and the perceptual and semantic domain of brightness. In contrast, size seems to be very difficult to convey in music, whereas sharpness seems to be mapped onto its two extremes (sharp vs. blunt) rather than continuously. This is nicely reflected in the musical literature, in titles and texts that stress the association between music and concepts of light or darkness rather than sharpness or size.
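As a sketch of the first-stage statistic, the Friedman chi-square reported above can be computed directly from within-subject ranks. The ratings below are random stand-in data with a deliberate brightness ordering, not the study's measurements.

```python
import numpy as np

def friedman_chi_square(data):
    """Friedman chi-square for n subjects (rows) x k conditions (columns).
    Assumes no ties within a row (a simplification of the full test)."""
    n, k = data.shape
    # Rank each subject's ratings across the k conditions (1..k).
    ranks = data.argsort(axis=1).argsort(axis=1) + 1
    col_rank_sums = ranks.sum(axis=0)
    return 12.0 / (n * k * (k + 1)) * (col_rank_sums ** 2).sum() - 3 * n * (k + 1)

# Stand-in ratings: 30 subjects rating 'bright', 'medium' and 'dark'
# improvisations on the 1-6 brightness scale, with an imposed trend.
rng = np.random.default_rng(1)
ratings = rng.normal(loc=[2.0, 3.5, 5.0], scale=0.5, size=(30, 3))
chi2 = friedman_chi_square(ratings)
```

With real data, `chi2` would correspond to the χ2(2) values quoted above; a perfectly consistent ordering across all n subjects yields the maximum value 2n for k = 3.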

Keywords: audiovisual, brightness, cross-modal perception, cross-sensory correspondences, size, visual angularity

Procedia PDF Downloads 183
393 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form a standardized basis for addressing these doubts and are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). With the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially when supporting decision and policy makers. LCA and LCC results are based on models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered, or only studied superficially, even though it cannot be neglected. In the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient: they allow a first rough view of the results but do not take effects such as error propagation into account, so LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and to provide better decision support. 
Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effects of different uncertainty analyses on the interpretation, in terms of resilience and robustness, are shown. The approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
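A minimal Monte Carlo propagation of parameter uncertainty to a lifecycle result might look as follows. The wall model (insulation thickness, material impact factor, service life) and all distributions are illustrative assumptions, not the study's actual LCA/LCC model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Uncertain input parameters (illustrative distributions and units).
thickness = rng.normal(0.16, 0.02, N)                 # insulation thickness [m]
impact_factor = rng.lognormal(np.log(45.0), 0.3, N)   # kg CO2-eq per m3
service_life = rng.uniform(30.0, 50.0, N)             # years

# Annualized impact per m2 of wall: material volume x factor / lifetime.
annual_impact = thickness * impact_factor / service_life

mean = annual_impact.mean()
p5, p95 = np.percentile(annual_impact, [5, 95])
spread = p95 / p5  # captures the distribution's width, not just two extremes
```

Unlike a single best-case/worst-case pair, the full distribution shows how the input uncertainties propagate and combine, which is the point the study makes about robust interpretation.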

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 259
392 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change the design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on the one hand, to develop and train machine learning algorithms to produce architectural information about small pavilions; on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; then, using it as source material, an isometric view is created; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views from historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. 
This research also challenges the idea that algorithmic design is tied to efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the ability to creatively understand and manipulate historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question we propose is whether an AI could be used not just to create an original and innovative group of simple buildings, but also to foster a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made or synthetic.

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 113
391 Nanoimprinted-Block Copolymer-Based Porous Nanocone Substrate for SERS Enhancement

Authors: Yunha Ryu, Kyoungsik Kim

Abstract:

Raman spectroscopy is one of the most powerful techniques for chemical detection, but the low sensitivity originating from the extremely small cross-section of Raman scattering limits its practical use. To overcome this problem, Surface Enhanced Raman Scattering (SERS) has been intensively studied for several decades. Because the SERS effect is mainly induced by the strong electromagnetic near-field enhancement resulting from the localized surface plasmon resonance of metallic nanostructures, it is important to design plasmonic structures with a high density of electromagnetic hot spots for the SERS substrate. One useful fabrication method is to use a porous nanomaterial as a template for the metallic structure. Internal pores on a scale of tens of nanometers can act as strong EM hot spots by confining the incident light. Also, porous structures can capture more target molecules than non-porous structures in the same detection spot, thanks to their large surface area. Herein we report a facile method for fabricating a porous SERS substrate by integrating solvent-assisted nanoimprint lithography with selective etching of a block copolymer. We obtained nanostructures with high porosity via simple selective etching of one microdomain of the diblock copolymer. Furthermore, we imprinted nanocone patterns into the spin-coated flat block copolymer film to make a three-dimensional SERS substrate with a high density of SERS hot spots as well as a large surface area. We used solvent-assisted nanoimprint lithography (SAIL) to reduce the fabrication time and cost of patterning the BCP film by taking advantage of a solvent that dissolves both the polystyrene and poly(methyl methacrylate) domains of the block copolymer; the block copolymer film was thus molded at low temperature and atmospheric pressure in a short time. After Ag deposition, we measured the Raman intensity of dye molecules adsorbed on the fabricated structure. 
Compared to the Raman signal of the Ag-coated solid nanocone, the porous nanocone showed a 10 times higher Raman intensity at the 1510 cm⁻¹ band. In conclusion, we fabricated porous metallic nanocone arrays with a high density of electromagnetic hot spots by templating a nanoimprinted diblock copolymer with selective etching, and demonstrated their capability as an effective SERS substrate.
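The reported 10-fold gain corresponds to a simple band-intensity ratio between the two substrates; a sketch with illustrative intensity values (not the measured counts) follows.

```python
def relative_enhancement(i_porous, i_solid):
    """Ratio of Raman intensity on the porous nanocone substrate to that
    on the solid nanocone reference, at the same band and analyte loading."""
    return i_porous / i_solid

# Illustrative intensities at the 1510 cm^-1 band (arbitrary units).
gain = relative_enhancement(i_porous=1.5e4, i_solid=1.5e3)  # -> 10.0
```

Comparing at the same band and analyte loading keeps the ratio a fair substrate-to-substrate comparison, which is the sense in which the abstract reports the 10x figure.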

Keywords: block copolymer, porous nanostructure, solvent-assisted nanoimprint, surface-enhanced Raman spectroscopy

Procedia PDF Downloads 593