Search results for: perceptual linear prediction (PLP’s)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5353

253 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography

Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld

Abstract:

With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on the measurement of the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions, known as the MOSkin, was developed by the Centre for Medical Radiation Physics at the University of Wollongong; it measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or of internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10.
Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in the axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results have shown that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin in mV/cGy was as follows: 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy respectively, which were statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
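The reported sensitivities make the conversion from a MOSkin reading to dose a simple division; the sketch below illustrates it using the sensitivities quoted in the abstract (mV/cGy), with an invented example voltage shift rather than a value from the study.

```python
# Hypothetical sketch: converting a MOSkin threshold-voltage shift to dose.
# Sensitivities are the values reported in the abstract; the 36.8 mV
# reading is invented for illustration.

SENSITIVITY_MV_PER_CGY = {
    "RQT 8": 9.208,
    "RQT 9": 7.691,
    "RQT 10": 6.723,
}

def dose_cgy(delta_v_mv: float, beam_quality: str) -> float:
    """Dose (cGy) from the measured threshold-voltage shift (mV)."""
    return delta_v_mv / SENSITIVITY_MV_PER_CGY[beam_quality]

# Example: a 36.8 mV shift under an RQT 8 beam
print(round(dose_cgy(36.8, "RQT 8"), 2))  # ≈ 4.0 cGy
```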

Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry

Procedia PDF Downloads 285
252 A Comparison of Three Different Modalities in Improving Oral Hygiene in Adult Orthodontic Patients: An Open-Label Randomized Controlled Trial

Authors: Umair Shoukat Ali, Rashna Hoshang Sukhia, Mubassar Fida

Abstract:

Introduction: The objective of the study was to compare outcomes in terms of Bleeding Index (BI), Gingival Index (GI), and Orthodontic Plaque Index (OPI) with video graphics and plaque disclosing tablets (PDT) versus verbal instructions in adult orthodontic patients undergoing fixed appliance treatment (FAT). Materials and Methods: Adult orthodontic patients who fulfilled the inclusion criteria were recruited from outpatient orthodontic clinics and randomly allocated to three groups, i.e., video, PDT, and verbal groups. We included patients of both genders undergoing FAT for six months, with all teeth bonded mesial to the first molars and no co-morbid conditions such as rheumatic fever or diabetes mellitus. Subjects who had gingivitis as assessed by BI, GI, and OPI were recruited. We excluded subjects having > 2 mm of clinical attachment loss, pregnant and lactating females, any history of periodontal therapy within the last six months, and any consumption of antibiotics or anti-inflammatory drugs within the last month. Pre- and post-interventional measurements were taken at two intervals only for BI, GI, and OPI. The primary outcome of this trial was to evaluate the mean change in the BI, GI, and OPI in the three study groups. A computer-generated randomization list was used to allocate subjects to one of the three study groups, using random permuted blocks of 6 and 9. No blinding of the investigator or the participants was performed. Results: A total of 99 subjects were assessed for eligibility, of which 96 were randomized, as three of the participants declined to take part in this trial. This resulted in an equal number of participants (32) analyzed in each of the three groups. The mean change in the oral hygiene indices scores was assessed, and we found no statistically significant difference among the three interventional groups.
Pre- and post-interventional results showed statistically significant improvement in the oral hygiene indices for the video and PDT groups. No statistically significant difference for age, gender, or education level on the oral hygiene indices was found. Simple linear regression showed that the video group produced a significantly higher mean OPI change as compared to the other groups. No harm was observed during the trial. Conclusions: Visual aids performed better as compared to the verbal group. Gender, age, and education level had no statistically significant impact on the oral hygiene indices. Longer follow-ups will be required to see the long-term effects of these interventions. Trial Registration: NCT04386421. Funding: Aga Khan University and Hospital (URC 183022).
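The allocation scheme described above (random permuted blocks of 6 and 9 across three groups) can be sketched in a few lines. This is an illustrative reconstruction, not the trial's actual code; the seed and group labels are assumptions.

```python
# Illustrative sketch of a computer-generated allocation list using random
# permuted blocks of 6 and 9, as described in the abstract.
import random

def permuted_block_list(n_subjects, groups=("video", "PDT", "verbal"),
                        block_sizes=(6, 9), seed=42):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        size = rng.choice(block_sizes)                 # block of 6 or 9
        block = list(groups) * (size // len(groups))   # balanced within block
        rng.shuffle(block)                             # permute the block
        allocation.extend(block)
    return allocation[:n_subjects]

alloc = permuted_block_list(96)
# Each group receives roughly 96 / 3 = 32 subjects
print({g: alloc.count(g) for g in ("video", "PDT", "verbal")})
```

Because every full block is balanced, group sizes can differ only by the contents of a truncated final block.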

Keywords: oral hygiene, orthodontic treatment, adults, randomized clinical trial

Procedia PDF Downloads 95
251 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches

Authors: Maria Ledinskaya

Abstract:

This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from the business, software development, and manufacturing fields to museum conservation, by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-sized conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns – an unresearched collection, unknown condition and materials, and an unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the point of view of Traditional, Agile, Lean, and Hybrid project management.
The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment, due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: project management, conservation, waterfall, agile, lean, hybrid

Procedia PDF Downloads 79
250 Safety Assessment of Traditional Ready-to-Eat Meat Products Vended at Retail Outlets in Kebbi and Sokoto States, Nigeria

Authors: M. I. Ribah, M. Jibir, Y. A. Bashar, S. S. Manga

Abstract:

Food safety is a significant and growing public health problem worldwide, and in Nigeria as a developing country, since food-borne diseases are important contributors to the huge burden of sickness and death of humans. In Nigeria, traditional ready-to-eat meat products (RTE-MPs) like balangu, tsire and guru, and dried meat products like kilishi, dambun nama and banda, are reported to be highly appreciated because of their eating qualities. The consumption of these products is considered safe due to the treatments usually involved in their production process. However, during processing and handling, the products can be contaminated by pathogens that cause food poisoning. Therefore, a hazard identification for pathogenic bacteria on some traditional RTE-MPs was conducted in Kebbi and Sokoto States, Nigeria. A total of 116 RTE-MPs samples (balangu, 38; kilishi, 39; tsire, 39) were obtained from retail outlets and analyzed using standard cultural microbiological procedures in general and selective enrichment media to isolate the target pathogens. A six-fold serial dilution was prepared, and colonies were counted using the pour-plate method. A volume of 10-12 ml of molten nutrient agar cooled to 42-45°C was poured into each pre-labeled Petri dish, and 1 ml from each of the 10², 10⁴ and 10⁶ dilutions of every sample was plated before colonies were counted. The isolated pathogens were identified and confirmed after a series of biochemical tests. Frequencies and percentages were used to describe the presence of pathogens. The General Linear Model was used to analyze data on pathogen presence according to RTE-MPs, and means were separated using the Tukey test at the 0.05 level. Of the 116 RTE-MPs samples collected, 35 (30.17%) were found to be contaminated with the tested pathogens.
Prevalence results showed that Escherichia coli, Salmonella and Staphylococcus aureus were present in the samples. The mean total bacterial count was 23.82×10⁶ cfu/g. The frequencies of the individual pathogens isolated were Staphylococcus aureus 18 (15.51%), Escherichia coli 12 (10.34%) and Salmonella 5 (4.31%). Also, among the RTE-MPs tested, the total bacterial counts were found to differ significantly (P < 0.05), with 1.81×10⁴, 2.41×10⁴ and 2.9×10⁴ cfu/g for tsire, kilishi, and balangu, respectively. The study concluded that the presence of pathogenic bacteria in balangu could pose grave health risks to consumers, and hence recommended good manufacturing practices in the production of balangu to improve the product’s safety.
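The counts reported above follow the standard pour-plate calculation: colonies counted times the dilution factor, divided by the volume plated. A minimal sketch, with an invented colony count rather than a value from the study:

```python
# Standard pour-plate calculation behind CFU/g figures.
# 1 ml was plated per dish, as described in the abstract; the colony
# count below is invented for illustration.

def cfu_per_gram(colonies: int, dilution_factor: int, volume_ml: float = 1.0) -> float:
    """Colony-forming units per gram from a plate count."""
    return colonies * dilution_factor / volume_ml

# e.g. 29 colonies counted on the 10^4-dilution plate of a sample
print(cfu_per_gram(29, 10**4))  # 290000.0, i.e. 2.9 x 10^5 cfu/g
```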

Keywords: ready-to-eat meat products, retail outlets, public health, safety assessment

Procedia PDF Downloads 105
249 A Strength Weaknesses Opportunities and Threats Analysis of Socialisation Externalisation Combination and Internalisation Modes in Knowledge Management Practice: A Systematic Review of Literature

Authors: Aderonke Olaitan Adesina

Abstract:

Background: The paradigm shift to knowledge, as the key to organizational innovation and competitive advantage, has made the management of knowledge resources in organizations a mandate. A key component of the knowledge management (KM) cycle is knowledge creation, which research shows to be the result of the interaction between explicit and tacit knowledge. An effective knowledge creation process requires the use of the right model. The SECI (Socialisation, Externalisation, Combination, and Internalisation) model, proposed in 1995, is widely attested as a preferred model for knowledge creation activities. The model has, however, been criticized by researchers, who raise concerns, especially about its sequential nature. Therefore, this paper reviews extant literature on the practical application of each mode of the SECI model, from 1995 to date, with a view to ascertaining its relevance in modern-day KM practice. The study will establish the trends of use, with regard to the location and industry of use, and the interconnectedness of the modes. The main research question is, for organizational knowledge creation activities, is the SECI model indeed linear and sequential? In other words, does the model need to be reviewed in today’s KM practice? The review will generate a compendium of the usage of the SECI modes and propose a framework of use, based on the strengths, weaknesses, opportunities and threats (SWOT) findings of the study. Method: This study will employ the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate the usage and SWOT factors of the modes, in order to ascertain the success, or otherwise, of the sequential application of the modes in practice from 1995 to 2019. To achieve this purpose, four databases will be explored to search for open access, peer-reviewed articles from 1995 to 2019. The year 1995 is chosen as the baseline because it was the year the first paper on the SECI model was published.
The study will appraise relevant peer-reviewed articles under the search terms SECI (or its synonym, knowledge creation theory), socialization, externalization, combination, and internalization in the title, abstract, or keyword list. This review will include only empirical studies of knowledge management initiatives in which the SECI model and its modes were used. Findings: It is expected that the study will highlight the practical relevance of each mode of the SECI model, the linearity (or otherwise) of the model, and the SWOT factors of each mode. Concluding Statement: Organisations can, from the analysis, determine the modes of emphasis for their knowledge creation activities. It is expected that the study will support decision making in the choice of the SECI model as a strategy for the management of organizational knowledge resources, and in appropriating the SECI model, or a remodeled version, as a theoretical framework in future KM research.

Keywords: combination, externalisation, internalisation, knowledge management, SECI model, socialisation

Procedia PDF Downloads 316
248 Linkages between Innovation Policies and SMEs' Innovation Activities: Empirical Evidence from 15 Transition Countries

Authors: Anita Richter

Abstract:

Innovation is one of the key foundations of competitive advantage, generating growth and welfare worldwide. Consequently, all firms should innovate to bring new ideas to the market. Innovation is a vital growth driver, particularly for transition countries moving towards knowledge-based, high-income economies. However, numerous barriers, such as financial, regulatory or infrastructural constraints, prevent new and small firms in transition countries in particular from innovating. Thus, SMEs’ innovation output may benefit substantially from government support. This research paper aims to assess the effect of government interventions on innovation activities in SMEs in emerging countries. Until now, academic research on innovation policies has focused on single-country and/or high-income-country assessments, and less on cross-country and/or low- and middle-income countries. The paper therefore seeks to close this research gap by providing empirical evidence from 8,500 firms in 15 transition countries (Eastern Europe, South Caucasus, South East Europe, Middle East and North Africa). Using firm-level data from the Business Environment and Enterprise Performance Survey of the World Bank and EBRD, and policy data from the SME Policy Index of the OECD, the paper investigates how government interventions affect SMEs’ likelihood of investing in any technological and non-technological innovation. Using standard linear regression, the impact of government interventions on SMEs’ innovation output and R&D activities is measured. The empirical analysis suggests that a firm’s decision to invest in innovative activities is sensitive to government interventions. A firm’s likelihood to invest in innovative activities increases by 3% to 8% if the innovation eco-system noticeably improves (measured by an increase of 1 level in the SME Policy Index). At the same time, a better eco-system encourages SMEs to invest more in R&D.
Government reforms establishing a dedicated policy framework (IP legislation), institutional infrastructure (science and technology parks, incubators) and financial support (public R&D grants, innovation vouchers) are particularly relevant to stimulate innovation performance in SMEs. Particular segments of the SME population, namely micro and manufacturing firms, are more likely to benefit from improved innovation framework conditions. The marginal effects are particularly strong on product innovation, process innovation, and marketing innovation, but less so on management innovation. In conclusion, government interventions supporting innovation will likely lead to higher innovation performance of SMEs. They increase productivity at both the firm and country level, which is a vital step in transitioning towards knowledge-based market economies.
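The kind of relationship the study estimates, a linear regression of innovation likelihood on the SME Policy Index, can be illustrated with an ordinary-least-squares slope. This is a toy sketch with invented data points, not the paper's dataset; it is chosen so the slope lands near the 3-8% range the abstract reports.

```python
# Minimal OLS slope for a one-predictor linear probability model:
# share of SMEs innovating vs. SME Policy Index level. Data invented.

def ols_slope(x, y):
    """Least-squares slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

policy_index = [1, 2, 3, 4, 5]                  # SME Policy Index level (assumed scale)
p_innovate = [0.20, 0.25, 0.31, 0.35, 0.40]     # share of SMEs innovating (invented)
print(round(ols_slope(policy_index, p_innovate), 3))  # 0.05, i.e. +5 pp per level
```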

Keywords: innovation, research and development, government interventions, economic development, small and medium-sized enterprises, transition countries

Procedia PDF Downloads 303
247 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Issues of obtaining unbiased data on the status of pipeline systems for oil and oil-product transportation become especially important when laying and operating pipelines under severe natural and climatic conditions. Particular attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of pipeline systems are regularly monitored, and changes in permafrost soil and hydrological conditions are accounted for. One of the main reasons for emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aerial visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. The unified environment of a geographic information system (GIS) is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient simulation of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences.
The specifics of such systems include multi-dimensional simulation of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting possibilities of using the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography, and data from in-line inspection and instrument measurements. The resulting 3D model forms the basis of an information system providing the means to store and process data of geotechnical observations with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating their consequences should they still occur.

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 166
246 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DESs) and ionic liquids (ILs), as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene, the application’s conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene’s 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical, thermal, and critical properties and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%), with a remarkable equilibrium time of only 2 minutes, and, most notably, demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated as stable over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%).
Moreover, the extraction performance of the solvents was then modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including their high efficiency in both extraction and regeneration processes, their stability and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. This exceptional efficacy of the newly developed neoteric solvents signifies a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
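The absolute average relative deviation (AARD) used above to compare the MNLR and ANN fits is a simple metric; a minimal sketch with invented measured/predicted pairs, not the study's data:

```python
# AARD % = (100 / N) * sum(|(predicted - measured) / measured|)
# Data pairs below are invented extraction efficiencies for illustration.

def aard_percent(measured, predicted):
    """Absolute average relative deviation, in percent."""
    n = len(measured)
    return 100.0 / n * sum(abs((p - m) / m) for m, p in zip(measured, predicted))

measured = [94.1, 95.0, 96.2, 97.1]    # extraction efficiencies, % (invented)
predicted = [93.0, 95.5, 96.0, 98.0]   # model predictions (invented)
print(round(aard_percent(measured, predicted), 2))  # 0.71
```

Lower values mean a tighter fit, which is why the ANN's sub-0.5% AARDs indicate better predictive accuracy than the MNLR's ~2-3%.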

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 39
245 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. Machine learning algorithms were applied to determine whether the person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare performance in user authentication: Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next, which makes it difficult to distinguish between a creator and an intruder for authentication.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication with a timer of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform other ML algorithms for gesture-based password authentication. Results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from the gesture-based passwords lead to less vulnerable user authentication.
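The feature pipeline described above, four features per entry, min-max normalized, then classified, can be sketched compactly. The sketch below uses a 1-nearest-neighbour rule as a simplified stand-in for the five algorithms compared in the study, and all feature values are invented.

```python
# Sketch of the paper's pipeline: [score, length, speed, size] features,
# min-max normalization, then a simple nearest-neighbour classifier.
# Feature values and labels are invented for illustration.

def min_max_normalize(rows):
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in rows]

def nearest_neighbor_label(train_x, train_y, query):
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(zip(train_x, train_y), key=lambda t: dist(t[0], query))[1]

# rows: [score, length, speed, size]
entries = [[80, 12, 1.5, 300], [82, 11, 1.4, 310],   # genuine user
           [40, 20, 3.0, 500], [45, 19, 2.8, 480]]   # imposter
labels = ["genuine", "genuine", "imposter", "imposter"]
norm = min_max_normalize(entries)
# Normalize the query together with the training rows so scales match
query = min_max_normalize(entries + [[81, 12, 1.5, 305]])[-1]
print(nearest_neighbor_label(norm, labels, query))  # genuine
```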

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 82
244 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious work, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in different ways, such as in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised way has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers like BERT and RoBERTa are adapted in an unsupervised manner by further pre-training on masked language modeling (MLM) tasks and are then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. Naturally, in-domain distances are small, and between-domain distances are expected to be large. Previous findings show that a pretrained masked language model (MLM) fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, and its out-of-domain performance on Gab data goes down to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce.
A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of the labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and with optimized outcomes obtained from different optimization techniques.
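As an illustration of the cross-domain similarity measures mentioned above, the sketch below computes the L2 distance, cosine distance, and a linear-kernel MMD between two sets of feature vectors. The random vectors are stand-ins for sentence embeddings of Twitter (source) and Gab (target) posts; all names, dimensions, and numbers are illustrative, not from the study.

```python
import numpy as np

def mean_embedding(X):
    # Average feature vector of a domain's posts
    return X.mean(axis=0)

def l2_distance(X, Y):
    return float(np.linalg.norm(mean_embedding(X) - mean_embedding(Y)))

def cosine_distance(X, Y):
    a, b = mean_embedding(X), mean_embedding(Y)
    return float(1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mmd_linear(X, Y):
    # Maximum Mean Discrepancy with a linear kernel: squared distance
    # between the two domain means in feature space
    d = mean_embedding(X) - mean_embedding(Y)
    return float(d @ d)

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 8))  # stand-in for Twitter embeddings
target = rng.normal(0.5, 1.0, size=(200, 8))  # stand-in for Gab embeddings

in_domain = l2_distance(source, source[:100])  # expected: small
cross_domain = l2_distance(source, target)     # expected: larger
```

In a real setup the embeddings would come from the fine-tuned transformer encoder, and the between-domain distance would guide how much target-domain data to mix into the MLM pretraining.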

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 83
243 Fuel Cells Not Only for Cars: Technological Development in Railways

Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz

Abstract:

Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel) consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine's generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates in both heavy line and shunting locomotives. The classic diesel drive is available for the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy; they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of a railway vehicle drive system that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle and is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it is oxidized. The products of this chemical reaction are electricity and water (in two forms: liquid and water vapor). The electricity is stored in batteries (so far, lithium-ion batteries are used) and is then used to drive the traction motors and supply the onboard equipment.
The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with unique electrodes; this research represents a trend connecting industry with science. The first goal is to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 (M.P.), and grant 0911/SBAD/2102 (B.K.).
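The electricity-from-hydrogen step described above can be illustrated with standard thermodynamic data for the overall reaction H2 + 1/2 O2 -> H2O. A brief sketch using textbook values (not measurements from this work):

```python
# Ideal hydrogen fuel-cell figures from standard thermodynamic data for
# H2 + 1/2 O2 -> H2O(l):  |dG| = 237.1 kJ/mol, |dH| = 285.8 kJ/mol (HHV).
F = 96485.0   # Faraday constant, C/mol
n = 2         # electrons transferred per H2 molecule
dG = 237.1e3  # J/mol, Gibbs free energy released
dH = 285.8e3  # J/mol, enthalpy released (higher heating value)

cell_voltage = dG / (n * F)   # reversible cell voltage, about 1.23 V
max_efficiency = dG / dH      # thermodynamic efficiency limit, about 83%
```

Real stacks operate well below both figures because of activation, ohmic, and mass-transport losses, which is why the battery buffer and converters in the drive chain matter.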

Keywords: railway, hydrogen, fuel cells, hybrid vehicles

Procedia PDF Downloads 161
242 Finite Element Modeling and Analysis of Reinforced Concrete Coupled Shear Walls Strengthened with Externally Bonded Carbon Fiber Reinforced Polymer Composites

Authors: Sara Honarparast, Omar Chaallal

Abstract:

Reinforced concrete (RC) coupled shear walls (CSWs) are very effective structural systems for resisting lateral loads due to wind and earthquakes and are particularly used in medium- to high-rise RC buildings. However, most existing old RC structures were designed for gravity loads, or for lateral loads well below those specified in current seismic international codes. These structures may behave in a non-ductile manner due to poorly designed joints, insufficient shear reinforcement, and inadequate anchorage length of the reinforcing bars. This has been the main impetus to investigate an appropriate strengthening method to address or attenuate the deficiencies of these structures. The objective of this paper is twofold: (i) to evaluate the seismic performance of existing reinforced concrete coupled shear walls under reversed cyclic loading; and (ii) to investigate the seismic performance of RC CSWs strengthened with externally bonded (EB) carbon fiber reinforced polymer (CFRP) sheets. To this end, two CSWs were considered: (a) the first, representative of old CSWs, was designed according to the 1941 National Building Code of Canada (NBCC, 1941) with conventionally reinforced coupling beams; and (b) the second, representative of new CSWs, was designed according to modern NBCC 2015 and CSA/A23.3 2014 requirements with diagonally reinforced coupling beams. Both CSWs were simulated using ANSYS software. The nonlinear behavior of concrete is modeled using multilinear isotropic hardening through a multilinear stress-strain curve. An elastic-perfectly plastic stress-strain curve is used to simulate the steel material. Bond stress-slip is modeled between the concrete and steel reinforcement in the conventional coupling beam, rather than assuming a perfect bond, to better represent the slip of the steel bars observed in the coupling beams of these CSWs.
The old-design CSW was strengthened using CFRP sheets bonded to the concrete substrate, and the interface was modeled using an adhesive layer. The behavior of the CFRP material is considered linear elastic up to failure. After applying the loading and boundary conditions, the specimens were analyzed under reversed cyclic loading. The comparison of the results obtained for the two unstrengthened CSWs and the one retrofitted with EB CFRP sheets reveals that the strengthening method improves the seismic performance in terms of strength, ductility, and energy dissipation capacity.
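The elastic-perfectly plastic steel law mentioned above is simple to state in code. A minimal sketch, with an assumed elastic modulus and yield strength typical of reinforcing steel (the study's actual material values are not given here):

```python
def steel_stress(strain, E=200e9, fy=400e6):
    """Elastic-perfectly plastic stress-strain law for reinforcing steel.

    Below the yield strain the response is linear (stress = E * strain);
    beyond it the stress plateaus at +/- fy. E and fy are assumed values
    in Pa (200 GPa modulus, 400 MPa yield strength).
    """
    yield_strain = fy / E
    if abs(strain) <= yield_strain:
        return E * strain
    return fy if strain > 0 else -fy
```

A finite element package like ANSYS encodes the same idea as a material table; the multilinear concrete model generalizes this to several linear segments.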

Keywords: carbon fiber reinforced polymer, coupled shear wall, coupling beam, finite element analysis, modern code, old code, strengthening

Procedia PDF Downloads 173
241 Familial Exome Sequencing to Decipher the Complex Genetic Basis of Holoprosencephaly

Authors: Artem Kim, Clara Savary, Christele Dubourg, Wilfrid Carre, Houda Hamdi-Roze, Valerie Dupé, Sylvie Odent, Marie De Tayrac, Veronique David

Abstract:

Holoprosencephaly (HPE) is a rare congenital brain malformation resulting from the incomplete separation of the two cerebral hemispheres. It is characterized by a wide phenotypic spectrum and a high degree of locus heterogeneity. Genetic defects in 16 genes have already been implicated in HPE, but they account for only 30% of cases, suggesting that a large part of the genetic factors remains to be discovered. HPE has recently been redefined as a complex multigenic disorder, requiring the joint effect of multiple mutational events in genes belonging to one or several developmental pathways. The onset of HPE may result from the accumulation of the effects of multiple rare variants in functionally related genes, each conferring a moderate increase in the risk of HPE onset. In order to decipher the genetic basis of HPE, unconventional patterns of inheritance involving multiple genetic factors need to be considered. The primary objective of this study was to uncover possible disease-causing combinations of multiple rare variants underlying HPE by performing trio-based whole exome sequencing (WES) of familial cases in which no molecular diagnosis could be established. Thirty-nine families were selected with no fully penetrant causal mutation in a known HPE gene, no chromosomal aberrations or copy number variants, and no implication of environmental factors. As the main challenge was to identify disease-related variants among the large number of non-pathogenic polymorphisms detected by the classical WES scheme, a novel variant prioritization approach was established. It combined WES filtering with complementary gene-level approaches: a transcriptome-driven strategy (RNA-Seq data) and a clinically driven strategy (public clinical data). Briefly, a filtering approach was performed to select variants compatible with disease segregation, population frequency, and pathogenicity prediction, in order to identify an exhaustive list of rare deleterious variants.
The exome search space was then reduced by restricting the analysis to candidate genes identified by either the transcriptome-driven strategy (genes sharing highly similar expression patterns with known HPE genes during cerebral development) or the clinically driven strategy (genes associated with phenotypes of interest overlapping with HPE). Deeper analyses of candidate variants were then performed on a family-by-family basis. These included the exploration of clinical information, expression studies, variant characteristics, recurrence of mutated genes, and available biological knowledge. A novel bioinformatics pipeline was designed. Applied to the 39 families, this final integrated workflow identified an average of 11 candidate variants per family. Most candidate variants were inherited from asymptomatic parents, suggesting a multigenic inheritance pattern requiring the association of multiple mutational events. The manual analysis highlighted 5 new strong HPE candidate genes showing recurrences in distinct families. Functional validations of these genes are foreseen.
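The filtering step described above, retaining variants compatible with segregation, population frequency, and pathogenicity prediction, can be sketched as a simple filter. The records, field names, and thresholds below are illustrative stand-ins, not the pipeline's actual schema or cutoffs:

```python
# Hypothetical variant records (gene symbols are real HPE-related genes, but
# the values are invented): "af" = population allele frequency, "cadd" = a
# pathogenicity score, "segregates" = compatible with disease segregation.
variants = [
    {"gene": "SHH",   "af": 0.00001, "cadd": 28.0, "segregates": True},
    {"gene": "GENE1", "af": 0.0500,  "cadd": 30.0, "segregates": True},
    {"gene": "ZIC2",  "af": 0.00005, "cadd": 12.0, "segregates": True},
    {"gene": "SIX3",  "af": 0.00002, "cadd": 25.0, "segregates": False},
]

def prioritize(variants, max_af=0.001, min_cadd=20.0):
    """Keep rare, predicted-deleterious variants compatible with segregation."""
    return [v for v in variants
            if v["af"] <= max_af and v["cadd"] >= min_cadd and v["segregates"]]

candidates = prioritize(variants)  # only the rare, deleterious, segregating hit
```

In the actual workflow, this list would then be intersected with the transcriptome-driven and clinically driven candidate gene sets before manual review.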

Keywords: complex genetic disorder, holoprosencephaly, multiple rare variants, whole exome sequencing

Procedia PDF Downloads 178
240 Collaboration between Grower and Research Organisations as a Mechanism to Improve Water Efficiency in Irrigated Agriculture

Authors: Sarah J. C. Slabbert

Abstract:

The uptake of research as part of the diffusion or adoption of innovation by practitioners, whether individuals or organisations, has been a popular topic in agricultural development studies for many decades. In the classical, linear model of innovation theory, the innovation originates from an expert source such as a state-supported research organisation or academic institution. The changing context of agriculture led to the development of the agricultural innovation systems model, which recognizes innovation as a complex interaction between individuals and organisations, which include private industry and collective action organisations. In terms of this model, an innovation can be developed and adopted without any input or intervention from a state or parastatal research organisation. This evolution in the diffusion of agricultural innovation has put forward new challenges for state or parastatal research organisations, which have to demonstrate the impact of their research to the legislature or a regulatory authority: Unless the organisation and the research it produces cross the knowledge paths of the intended audience, there will be no awareness, no uptake and certainly no impact. It is therefore critical for such a research organisation to base its communication strategy on a thorough understanding of the knowledge needs, information sources and knowledge networks of the intended target audience. In 2016, the South African Water Research Commission (WRC) commissioned a study to investigate the knowledge needs, information sources and knowledge networks of Water User Associations and commercial irrigators with the aim of improving uptake of its research on efficient water use in irrigation. The first phase of the study comprised face-to-face interviews with the CEOs and Board Chairs of four Water User Associations along the Orange River in South Africa, and 36 commercial irrigation farmers from the same four irrigation schemes. 
Intermediaries who act as knowledge conduits to the Water User Associations and the irrigators were identified and 20 of them were subsequently interviewed telephonically. The study found that irrigators interact regularly with grower organisations such as SATI (South African Table Grape Industry) and SAPPA (South African Pecan Nut Association) and that they perceive these organisations as credible, trustworthy and reliable, within their limitations. State and parastatal research institutions, on the other hand, are associated with a range of negative attributes. As a result, the awareness of, and interest in, the WRC and its research on water use efficiency in irrigated agriculture are low. The findings suggest that a communication strategy that involves collaboration with these grower organisations would empower the WRC to participate much more efficiently and with greater impact in agricultural innovation networks. The paper will elaborate on the findings and discuss partnering frameworks and opportunities to manage perceptions and uptake.

Keywords: agricultural innovation systems, communication strategy, diffusion of innovation, irrigated agriculture, knowledge paths, research organisations, target audiences, water use efficiency

Procedia PDF Downloads 92
239 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trends. This study aimed to identify the factors that effectively cause delay in construction projects in Iraq, affecting time, cost, and quality, and to find the best solutions to address those delays by setting parameters to restore balance. Thirty projects in different areas of construction were selected as a sample for this study. Design/methodology/approach: This study discusses reconstruction strategies and the delays in time and cost caused by different delay factors in selected projects in Iraq (Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Project participants provided data about the projects through a data collection instrument distributed as a survey. A mixed approach and methods were applied in this study. Mathematical data analysis was used to construct models that predict delays in the time and cost of projects before they start; artificial neural network (ANN) analysis was selected as the mathematical approach. These models are mainly intended to help decision makers in construction projects find solutions to delays before they cause any inefficiency in the project being implemented, and to remove obstacles thoroughly in order to develop this industry in Iraq. This approach was practiced using the data collected through the survey and questionnaire as the information source.
Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries and regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: We selected ANN analysis because ANNs have rarely been used in project management and have never been used in Iraq to find solutions to problems in the construction industry. This methodology can also be applied to complicated problems for which there is no ready interpretation or solution. In some cases statistical analysis was conducted, and in some cases the problem did not follow a linear equation or showed only weak correlation; we therefore suggested using ANNs, because they handle nonlinear problems and can find the relationship between input and output data, which proved genuinely supportive.
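As a sketch of the kind of model used, the following trains a small one-hidden-layer neural network by gradient descent to predict a schedule-overrun percentage from project features. The synthetic data, feature meanings, and network size are invented for illustration; the study's actual inputs, architecture, and training setup are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: three project features mapped to a schedule
# overrun percentage. Features and coefficients are invented (e.g.,
# contractor rating, number of design changes, security incidents).
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 10 + 40 * X[:, 1] + 25 * X[:, 2] - 15 * X[:, 0] + rng.normal(0.0, 2.0, 200)

# One hidden layer of 8 tanh units, trained by batch gradient descent.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.01
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    pred = (h @ W2 + b2).ravel()    # predicted overrun (%)
    err = pred - y
    # Backpropagate the mean squared error
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((pred - y) ** 2)))  # fit on the training sample
```

Trained on historical project records instead of synthetic data, such a model would give decision makers a delay estimate before a project starts.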

Keywords: construction projects, delay factors, emergency reconstruction, innovation ANN, post disasters, project management

Procedia PDF Downloads 135
238 Development and Validation of a Rapid Turbidimetric Assay to Determine the Potency of Cefepime Hydrochloride in Powder Injectable Solution

Authors: Danilo F. Rodrigues, Hérida Regina N. Salgado

Abstract:

Introduction: The emergence of microorganisms resistant to a large number of clinically approved antimicrobials has been increasing, which restricts the options for the treatment of bacterial infections. As a strategy, drugs with high antimicrobial activity are in evidence. Among these, the cephalosporin class stands out; its fourth-generation member cefepime (CEF) is a semi-synthetic product with activity against various aerobic Gram-positive bacteria (e.g., oxacillin-resistant Staphylococcus aureus) and Gram-negative bacteria (e.g., Pseudomonas aeruginosa). There are few studies in the literature regarding the development of microbiological methodologies for the analysis of this antimicrobial, so research in this area is highly relevant to optimize the analysis of this drug in industry and to ensure the quality of the marketed product. The development of microbiological methods for the analysis of antimicrobials has gained strength in recent years and has been highlighted in relation to physicochemical methods, especially because such methods make it possible to determine the bioactivity of the drug against a microorganism. In this context, the aim of this work was the development and validation of a microbiological method for the quantitative analysis of CEF in lyophilized powder for injectable solution by turbidimetric assay. Method: Staphylococcus aureus ATCC 6538 IAL 2082 was used as the test microorganism, and Casoy broth was chosen as the culture medium. The test was performed under temperature control (35.0 °C ± 2.0 °C) with incubation for 4 hours in a shaker. Readings were made at a wavelength of 530 nm using a spectrophotometer. The turbidimetric microbiological method was validated by determining the following parameters: linearity, precision (repeatability and intermediate precision), accuracy, and robustness, according to ICH guidelines.
Results and discussion: Among the parameters evaluated for method validation, linearity showed suitable results in the statistical analyses, with correlation coefficients (r) of 0.9990 for the CEF reference standard and 0.9997 for the CEF sample. Precision presented the following values: 1.86% (intraday), 0.84% (interday), and 0.71% (between analysts). The accuracy of the method was proven through the recovery test, in which the mean value obtained was 99.92%. Robustness was verified by varying the volume of culture medium, the brand of culture medium, the incubation time in the shaker, and the wavelength. The potency of CEF present in the samples of lyophilized powder for injectable solution was 102.46%. Conclusion: The proposed turbidimetric microbiological method for the quantification of CEF in lyophilized powder for injectable solution proved to be fast, linear, precise, accurate, and robust, meeting all requirements, and can be used in routine quality control analysis in the pharmaceutical industry as an option for microbiological analysis.
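The linearity and accuracy parameters reported above are computed in a standard way: the correlation coefficient r of the calibration curve, and recovery = found/added x 100 averaged over spiked samples. A minimal sketch with made-up calibration numbers (not the study's data):

```python
import numpy as np

# Hypothetical calibration data: CEF concentration vs. absorbance at 530 nm.
conc = np.array([50.0, 75.0, 100.0, 125.0, 150.0])       # µg/mL (illustrative)
absorbance = np.array([0.210, 0.318, 0.421, 0.530, 0.635])

# Linearity: Pearson correlation coefficient of the calibration curve.
r = float(np.corrcoef(conc, absorbance)[0, 1])

# Accuracy: recovery (%) = found / added x 100, averaged over spike levels.
added = np.array([80.0, 100.0, 120.0])   # spiked amounts (illustrative)
found = np.array([79.8, 100.1, 119.7])   # amounts back-calculated from curve
mean_recovery = float(np.mean(found / added * 100.0))
```

ICH-style acceptance criteria would then be applied to r and the mean recovery, alongside the precision and robustness checks.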

Keywords: cefepime hydrochloride, quality control, turbidimetric assay, validation

Procedia PDF Downloads 337
237 Effect of Particle Size Variations on the Tribological Properties of Porcelain Waste Added Epoxy Composites

Authors: B. Yaman, G. Acikbas, N. Calis Acikbas

Abstract:

Epoxy-based materials have advantages in tribological applications due to their unique properties, such as light weight, self-lubrication capacity, and wear resistance. On the other hand, their usage is often limited by their low load-bearing capacity and low thermal conductivity. In this study, the aim is to improve the tribological and also the mechanical properties of epoxy by reinforcing it with ceramic-based porcelain waste. It is well known that the reuse or recycling of waste materials leads to reduced production costs, ease of manufacturing, energy savings, etc. From this perspective, epoxy and epoxy matrix composites containing 60 wt% porcelain waste with two different particle-size ranges (below 90 µm and 150-250 µm) were fabricated, and the effect of filler particle size on the mechanical and tribological properties was investigated. Microstructural characterization was carried out by scanning electron microscopy (SEM), and phase analysis was performed by X-ray diffraction (XRD). The Archimedes principle was used to measure the density and porosity of the samples. Hardness was measured using the Shore D scale, and bending tests were performed. Microstructural investigations indicated that the porcelain particles were homogeneously distributed and no agglomerations were encountered in the epoxy resin. Mechanical test results showed that hardness and bending strength increased with increasing particle size, related to the low porosity content and good embedding in the matrix. The tribological behavior of these composites was evaluated in terms of friction, wear rates, and wear mechanisms by ball-on-disk contact under dry, rotational sliding at room temperature against a WC ball with a diameter of 3 mm. Wear tests were carried out at room temperature (23-25 °C) with a humidity of 40 ± 5% under dry-sliding conditions. The wear-track radius was set to 5 mm at a linear speed of 30 cm/s for the geometry used in this study.
In all experiments, a constant test load of 3 N was applied at a frequency of 8 Hz over a total sliding distance of 400 m. The friction coefficient of the samples was recorded online from the variation in the tangential force. The steady-state coefficients of friction ranged between 0.29 and 0.32. The dimensions of the wear tracks (depth and width) were measured as two-dimensional profiles with a stylus profilometer. The wear volumes were calculated by integrating these 2D cross-sectional areas around the circular wear track. Specific wear rates were computed by dividing the wear volume by the applied load and sliding distance. According to the experimental results, the use of porcelain waste in the fabrication of epoxy resin composites can be suggested as a potential material route, as it improves the mechanical and tribological properties while also reducing production cost.
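The specific wear rate computation described above (wear volume divided by load and sliding distance, with the volume taken as the profilometer cross-section integrated around the circular track) can be sketched as follows. The cross-sectional area is a hypothetical placeholder; the load, track radius, and distance match the stated test conditions:

```python
import math

def specific_wear_rate(track_radius_mm, cross_section_um2, load_n, distance_m):
    """k = V / (F * s): wear volume per unit load and sliding distance,
    in mm^3/(N*m). The wear volume is the measured 2D cross-sectional
    area swept around the circular wear track: V = 2*pi*r * A."""
    area_mm2 = cross_section_um2 * 1e-6                   # µm^2 -> mm^2
    volume_mm3 = 2.0 * math.pi * track_radius_mm * area_mm2
    return volume_mm3 / (load_n * distance_m)

# 5 mm track radius, 3 N load, 400 m sliding distance (as in the tests);
# the 1200 µm^2 cross-section is a made-up placeholder value.
k = specific_wear_rate(track_radius_mm=5.0, cross_section_um2=1200.0,
                       load_n=3.0, distance_m=400.0)
```

Comparing k across the two particle-size ranges is what quantifies the wear-resistance improvement.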

Keywords: epoxy composites, mechanical properties, porcelain waste, tribological properties

Procedia PDF Downloads 178
236 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field

Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed

Abstract:

Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability, and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interaction and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue, since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a fibrous protein (gelatin) mixed with an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic, and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose and an important industrial polymer with a wide range of applications. The functional properties of this anionic polysaccharide can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Its complex synthesis results in an extracellular assembly organized at several levels: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers, and each level corresponds to an organization recognized by the cellular and metabolic system. A gelatin gel forms by triple-helical refolding of denatured collagen chains; this gel has been the subject of numerous studies, and it is now known that its properties depend only on the fraction of triple helices forming the network. Chemical modification of this system is fairly well controlled.
Observing the dynamics of the triple helix may therefore be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Gelatin is central to many industrial processes, and understanding and analyzing the molecular dynamics induced by the triple helix during gelatin transitions can have great economic importance in all fields, especially food. The goal is to understand the possible mechanisms involved depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the α helix are strongly influenced by the nature of the medium. Our goal is to minimize changes in the α-helix structure as much as possible, in order to keep the gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the helicity rate of the gelatin.

Keywords: gelatin, sodium carboxymethylcellulose, interaction gelatin-NaCMC, the rate of helicity, polarimetry

Procedia PDF Downloads 287
235 Radioprotective Efficacy of Costus afer against the Radiation-Induced Hematology and Histopathology Damage in Mice

Authors: Idowu R. Akomolafe, Naven Chetty

Abstract:

Background: The widespread medical application of ionizing radiation has raised public concern about radiation exposure and the associated cancer risk. The production of reactive oxygen species and free radicals as a result of radiation exposure can severely damage the deoxyribonucleic acid (DNA) of cells, leading to biological effects. Radiotherapy is an excellent modality for the treatment of cancerous cells but comes with a few challenges; a significant one is the exposure of healthy cells surrounding the tumour to radiation. The last few decades have seen much attention shift to plants, herbs, and natural products as alternatives to synthetic compounds for radioprotection. Thus, this study investigated the radioprotective efficacy of Costus afer against whole-body radiation-induced haematological and histopathological disorders in mice. Materials and Method: Fifty-four mice were randomly divided into nine groups. Animals were pretreated with the extract of Costus afer by oral gavage for six days before irradiation. Control: 6 mice received feed and water only; 6 mice received feed, water, and 3 Gy; 6 mice received feed, water, and 6 Gy. Experimental: 6 mice received 250 mg/kg extract; 6 mice received 500 mg/kg extract; 6 mice received 250 mg/kg extract and 3 Gy; 6 mice received 500 mg/kg extract and 3 Gy; 6 mice received 250 mg/kg extract and 6 Gy; 6 mice received 500 mg/kg extract and 6 Gy, in addition to feed and water. The irradiation was done at the Radiotherapy and Oncology Department of Grey's Hospital using a linear accelerator (LINAC). Thirty-six mice were sacrificed by cervical dislocation 48 hours after irradiation, and blood was collected for haematology tests. The liver and kidney of the sacrificed mice were also surgically removed for histopathology tests. The remaining eighteen (18) mice were used for mortality and survival studies. Data were analysed by one-way ANOVA, followed by Tukey's multiple comparison test.
Results: Prior administration of Costus afer extract decreased the symptoms of radiation sickness and caused a significant delay in mortality, as demonstrated in the experimental mice. The first mortality was recorded on day 5 post-irradiation in group E, that is, mice that received 6 Gy but no extract. There was significant protection in the experimental mice against hematopoietic and gastrointestinal damage, as demonstrated by their blood counts compared with the control. The protection was seen in the increased blood counts of the experimental animals and in the number of survivors. The protection offered by Costus afer may be due to its ability to scavenge free radicals and repair the gastrointestinal and bone marrow damage produced by radiation. Conclusions: The study has demonstrated that exposure of mice to radiation can cause modifications in their haematological and histopathological parameters. However, the changes were relieved by the methanol extract of Costus afer, probably through its free radical scavenging and antioxidant properties.

Keywords: costus afer, hematological, mortality, radioprotection, radiotherapy

Procedia PDF Downloads 116
234 The Influence of Thermal Radiation and Chemical Reaction on MHD Micropolar Fluid in The Presence of Heat Generation/Absorption

Authors: Binyam Teferi

Abstract:

A numerical and theoretical analysis of the mixed convection flow of a magnetohydrodynamic micropolar fluid over a stretching capillary, in the presence of thermal radiation, chemical reaction, viscous dissipation, and heat generation/absorption, has been carried out. The non-linear partial differential equations of momentum, angular velocity, energy, and concentration are converted into ordinary differential equations using similarity transformations, which are then solved numerically. The dimensionless governing equations are solved using the fourth- and fifth-order Runge-Kutta method along with the shooting method. The effect of the physical parameters, viz. the micropolar parameter, unsteadiness parameter, thermal buoyancy parameter, concentration buoyancy parameter, Hartmann number, spin gradient viscosity parameter, microinertial density parameter, thermal radiation parameter, Prandtl number, Eckert number, heat generation or absorption parameter, Schmidt number, and chemical reaction parameter, on the flow variables, viz. the velocity of the micropolar fluid, microrotation, temperature, and concentration, has been analyzed and discussed graphically. MATLAB code is used for the numerical and theoretical analysis. From the simulation study, it can be concluded that an increment in the micropolar parameter, Hartmann number, unsteadiness parameter, and thermal and concentration buoyancy parameters results in a decrement in the velocity of the micropolar fluid; the microrotation of the micropolar fluid decreases with an increment in the micropolar parameter, unsteadiness parameter, microinertial density parameter, and spin gradient viscosity parameter; the temperature profile of the micropolar fluid decreases with an increment in the thermal radiation parameter, Prandtl number, micropolar parameter, unsteadiness parameter, heat absorption, and viscous dissipation parameter; and the concentration of the micropolar fluid decreases as the unsteadiness parameter, Schmidt number, and chemical reaction parameter increase.
Furthermore, computational values of the local skin friction coefficient, local wall couple coefficient, local Nusselt number, and local Sherwood number for different values of the parameters have been investigated. The following important results are obtained in this paper: an increment of the micropolar parameter and Hartmann number results in a decrement of the velocity of the micropolar fluid; microrotation decreases with an increment of the microinertial density parameter; temperature decreases with increasing values of the thermal radiation parameter and viscous dissipation parameter; concentration decreases as the values of the Schmidt number and chemical reaction parameter increase; the coefficient of local skin friction is enhanced with an increase in the values of both the unsteadiness parameter and the micropolar parameter; increasing values of the unsteadiness parameter and micropolar parameter result in an increment of the local couple stress; an increment in the values of the unsteadiness parameter and thermal radiation parameter results in an increment of the rate of heat transfer; and as the values of the Schmidt number and unsteadiness parameter increase, the Sherwood number decreases.
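The Runge-Kutta/shooting procedure described in this abstract can be sketched as follows. This is a minimal illustration on the classical Blasius boundary-layer equation f''' + f f'' = 0 (a deliberately simplified stand-in, not the authors' full coupled micropolar system, which was solved in MATLAB):

```python
# Shooting method with an adaptive Runge-Kutta 4(5) integrator, as in the
# solution procedure above. Illustrated on the Blasius equation
# f''' + f*f'' = 0, f(0)=f'(0)=0, f'(inf)=1 -- a simplified stand-in for the
# coupled micropolar system, not the authors' actual equations.
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_MAX = 10.0  # "infinity" truncated to a finite similarity coordinate

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -f * fpp]

def residual(guess):
    # Integrate with the guessed wall value f''(0); check f'(ETA_MAX) -> 1.
    sol = solve_ivp(rhs, (0.0, ETA_MAX), [0.0, 0.0, guess],
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Solve for the wall shear f''(0) that satisfies the far-field boundary
# condition; the classical value for this form of the equation is ~0.46960.
fpp0 = brentq(residual, 0.1, 1.0)
print(f"f''(0) = {fpp0:.5f}")
```

The same pattern (guess the missing wall values, integrate, correct the guess) extends to the coupled momentum, microrotation, energy, and concentration equations.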

Keywords: thermal radiation, chemical reaction, viscous dissipation, heat absorption/generation, similarity transformation

Procedia PDF Downloads 94
233 Willingness to Pay for Improvements of MSW Disposal: Views from Online Survey

Authors: Amornchai Challcharoenwattana, Chanathip Pharino

Abstract:

With the amount of MSW rising every day, maximizing material diversion from landfills via recycling is preferable to land dumping. Thai MSW is characterized as 40-60 per cent compostable waste, while the potentially recyclable materials in the waste stream comprise plastics, papers, glasses, and metals. However, the rate of material recovery from MSW, excluding composting or biogas generation, in Thailand is still low: the recycling rate in 2010 was only 20.5 per cent. The central government as well as local governments in Thailand have tried to curb this problem by charging users part of the MSW management fee. However, the fee is often too low to promote MSW minimization. The objective of this paper is to identify levels of willingness to pay (WTP) for MSW recycling in different social structures, with the expected outcome of sustainable MSW management for different town settlements that maximizes MSW recycling pertaining to each town's potential. WTP was elicited with a payment card. The questionnaire was deployed as an online survey during December 2012. Respondents were categorized as living in Bangkok, in other municipality areas, or outside municipality areas. The responses were analysed using descriptive statistics and multiple linear regression analysis to identify relationships and factors that could influence high or low WTP. During the survey period, 168 questionnaires were completed out of 689 total visits; however, only 96 were usable. Among the respondents of the usable questionnaires, 36 lived within the boundary of the Bangkok Metropolitan Administration (BMA), while 45 lived in chartered areas classified as other municipalities outside the BMA. Most respondents were well-off: 75 reported a positive monthly cash flow (77.32%), 15 reported a neutral monthly cash flow (15.46%), and 7 reported a negative monthly cash flow (7.22%).
For the WTP data, including WTP of 0 baht, with valid responses, the mean WTP for good MSW management ranked by geographical location from highest to lowest was Bangkok (196 baht/month), municipalities (154 baht/month), and non-urbanized towns (111 baht/month). An in-depth analysis was conducted to determine whether there is additional room to increase MSW management fees beyond what each respondent currently pays. The multiple regression analysis suggested that the following factors could affect WTP: income, age, and gender. Overall, the outcome of this study suggests that survey respondents are likely to support improvements to MSW treatment that do not rely solely on landfilling. A recommendation for further studies is to obtain larger sample sizes in order to improve statistical power and the accuracy of the WTP estimates.
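A multiple linear regression of WTP on income, age, and gender, of the kind described above, can be sketched as follows. The data-generating numbers here are synthetic placeholders for illustration, not the actual survey responses or estimated coefficients:

```python
# Multiple linear regression of WTP on income, age, and gender, mirroring the
# analysis described above. All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 96                                        # usable questionnaires in the study
income = rng.normal(30000, 8000, n)           # monthly income, baht (assumed scale)
age = rng.integers(20, 60, n).astype(float)
gender = rng.integers(0, 2, n).astype(float)  # 1 = female (assumed coding)

# Hypothetical data-generating process for illustration only.
wtp = 50 + 0.003 * income - 0.8 * age + 15 * gender + rng.normal(0, 20, n)

# Ordinary least squares via the normal equations (design matrix + lstsq).
X = np.column_stack([np.ones(n), income, age, gender])
beta, *_ = np.linalg.lstsq(X, wtp, rcond=None)
for name, b in zip(["intercept", "income", "age", "gender"], beta):
    print(f"{name:9s} {b: .4f}")
```

The signs of the fitted coefficients then indicate whether each factor raises or lowers WTP, which is the form of conclusion the study reports.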

Keywords: MSW, willingness to pay, payment card, waste separation

Procedia PDF Downloads 268
232 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. The estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals together with 6 motion parameters were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language.
The dataset was randomly split into training (75%) and test (25%) sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the most discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
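The feature-selection pipeline described in this abstract can be sketched as follows on a synthetic stand-in for the 37×15 network-signal dataset (the study itself used R, and the real rsFMRI features are not reproduced here). Since recursive feature elimination needs a ranking criterion, a linear-kernel SVM is used inside RFE, as in classical SVM-RFE; this is an implementation assumption:

```python
# Sketch of the RF / SVM pipeline on a synthetic 37x15 dataset (placeholder
# data, not the study's rsFMRI features). RFE requires feature weights, so a
# linear-kernel SVM is used inside it, as in classical SVM-RFE (assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=37, n_features=15, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                          random_state=0)

# Random Forest: Gini-based feature importances come for free.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rf_rank = np.argsort(rf.feature_importances_)[::-1]

# SVM-RFE: recursively eliminate features using linear-SVM weights.
rfe = RFE(LinearSVC(dual=False), n_features_to_select=1).fit(X_tr, y_tr)
svm_rank = np.argsort(rfe.ranking_)

# Retrain on the single top-ranked feature, as done in the study.
top = [rf_rank[0]]
rf_top = RandomForestClassifier(n_estimators=500, random_state=0)
rf_top.fit(X_tr[:, top], y_tr)
svm_top = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
print("RF top-feature test accuracy: ", rf_top.score(X_te[:, top], y_te))
print("SVM top-feature test accuracy:", svm_top.score(X_te[:, top], y_te))
```

Comparing the two rankings (here `rf_rank[0]` vs. `svm_rank[0]`) mirrors the study's observation that both methods singled out the same network.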

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 219
231 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements

Authors: Denis A. Sokolov, Andrey V. Mazurkevich

Abstract:

In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft manufacturing, shipbuilding, and rocket engineering. This has resulted in the development of measuring instruments capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference used in such measurements can be a reference base or a reference range finder with the capability to measure angle increments (EDM); the base serves as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of our own design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with the recommendations of the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses.
The achieved type A uncertainty of the measurement of the distance to reflectors 64 m away (N·D/2, where N is an integer), spaced apart from each other at a distance of 1 m, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used, and it is not a focus of this study. The type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where an advantage in accuracy can be achieved. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the type A uncertainty of distance measurements to the reflectors can be less than 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute range finder designed for the calibration of high-precision laser trackers, laser rangefinders, and other equipment, using a 64-meter laboratory comparator as a reference.
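The pulse-spacing relation D/2 = c/(2nF) quoted in the abstract can be checked numerically. The repetition frequency below is back-calculated to give roughly 2.5 m and is an assumption for illustration, not a specification of the instrument:

```python
# Numeric check of the pulse-spacing relation D/2 = c / (2 n F).
# F is back-calculated to give ~2.5 m and is an assumed value, not the
# instrument's actual repetition rate.
c = 299_792_458.0   # speed of light in vacuum, m/s
n = 1.00027         # approximate refractive index of air
F = 60e6            # femtosecond-pulse repetition frequency, Hz (assumed)

half_D = c / (2 * n * F)
print(f"D/2 = {half_D:.4f} m")                      # close to 2.5 m
print(f"64 m span = {64 / half_D:.1f} pulse intervals")
```

This also shows why the reflectors at N·D/2 sit on a regular grid of pulse-coincidence positions along the 64 m comparator.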

Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement

Procedia PDF Downloads 27
230 Comparison of Two Home Sleep Monitors Designed for Self-Use

Authors: Emily Wood, James K. Westphal, Itamar Lerner

Abstract:

Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders such as sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means of measuring sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep have become available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared with each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake, combined N1/N2, N3, and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For the analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination.
Total sleep time (TST) and the number of awakenings were compared to the subjects' sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch by epoch to calculate the agreement between the two devices using Cohen's Kappa. All analyses were performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the times reported by each device (M = 448 minutes for the sleep logs, compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than for the Z-Machine for both TST and the number of awakenings; likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p < 0.001) and awakenings (p < 0.04). There was some indication that these effects were stronger for the second night than for the first. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 had high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
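The epoch-by-epoch agreement measure used above can be sketched as follows. The study used Matlab and SPSS; scikit-learn is a stand-in here, the ten-epoch hypnograms are toy data, and the stage coding (0 = Wake, 1 = N1/N2, 2 = N3, 3 = REM) is an assumption:

```python
# Epoch-by-epoch agreement between two hypnograms via Cohen's Kappa, as in
# the device comparison above (Matlab/SPSS in the study; sklearn here).
# Stage coding is assumed: 0 = Wake, 1 = N1/N2, 2 = N3, 3 = REM.
from sklearn.metrics import cohen_kappa_score

# Two toy 30-second-epoch hypnograms (illustrative, not recorded data).
dreem     = [0, 0, 1, 1, 2, 2, 2, 3, 3, 1]
z_machine = [0, 0, 1, 2, 2, 2, 2, 3, 1, 1]

kappa = cohen_kappa_score(dreem, z_machine)
print(f"Cohen's kappa = {kappa:.3f}")
```

Unlike raw percent agreement, Kappa discounts the agreement expected by chance from the stage distributions of the two devices.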

Keywords: DREEM, EEG, sleep monitoring, Z-Machine

Procedia PDF Downloads 82
229 Effect of Silica Nanoparticles on Three-Point Flexural Properties of Isogrid E-Glass Fiber/Epoxy Composite Structures

Authors: Hamed Khosravi, Reza Eslami-Farsani

Abstract:

Increased interest in lightweight and efficient structural components has created the need to select materials with improved mechanical properties. To this end, composite materials are being widely used in many applications due to their durability, high strength and modulus, and low weight. Among the various composite structures, grid-stiffened structures are extensively considered in aerospace and aircraft applications because of their higher specific strength and stiffness, higher impact resistance, superior load-bearing capacity, ease of repair, and excellent energy absorption capability. Although there are a good number of publications on the design aspects and fabrication of grid structures, little systematic work has been reported, to our knowledge, on modifying their material to improve their properties. Therefore, the aim of this research is to study the reinforcing effect of silica nanoparticles on the flexural properties of epoxy/E-glass isogrid panels under a three-point bending test. Samples containing 0, 1, 3, and 5 wt.% silica nanoparticles, with 44 and 48 vol.% glass fibers in the rib and skin components, respectively, were fabricated using a manual filament winding method. Ultrasonic and mechanical routes were employed to disperse the nanoparticles within the epoxy resin. To fabricate the ribs, unidirectional fiber rovings were impregnated with the matrix mixture (epoxy + nanoparticles) and then laid up into the grooves of a silicone mold layer by layer. Then, four plies of woven fabric, after impregnation with the same matrix mixture, were layered on top of the ribs to produce the skin part. In order to complete the curing and achieve the maximum strength, the samples were tested after 7 days of holding at room temperature.
According to the load-displacement graphs, the following trend was observed for all of the samples when loaded from the skin side: following an initial linear region and a load peak, the curve dropped abruptly and then showed a typical absorbed-energy region. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure associated with the load peak. The results showed that the flexural properties of the nanocomposite samples were always higher than those of the nanoparticle-free sample. The maximum enhancement in the flexural maximum load and energy absorption was found for the incorporation of 3 wt.% of the nanoparticles. Furthermore, the flexural stiffness increased continually with increasing silica loading. In conclusion, this study suggests that the addition of nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.

Keywords: grid-stiffened composite structures, nanocomposite, three-point flexural test, energy absorption

Procedia PDF Downloads 315
228 Probing Mechanical Mechanism of Three-Hinge Formation on a Growing Brain: A Numerical and Experimental Study

Authors: Mir Jalil Razavi, Tianming Liu, Xianqiao Wang

Abstract:

Cortical folding, characterized by convex gyri and concave sulci, has an intrinsic relationship to the brain's functional organization. Understanding the mechanism behind the brain's convoluted patterns can provide useful clues into normal and pathological brain function. During development, the cerebral cortex experiences a noticeable expansion in volume and surface area accompanied by tremendous tissue folding, which may be attributed to many possible factors. Despite decades of endeavors, the fundamental mechanism and key regulators of this crucial process remain incompletely understood. Therefore, to play even a small role in unraveling the mystery of brain folding, we present a mechanical model of the formation of 3-hinges in a growing brain, which has not been addressed before. A 3-hinge is defined as a gyral region where three gyral crests (hinge-lines) join. How and why the brain prefers to develop 3-hinges has not been well answered. We therefore offer a theoretical and computational explanation of the mechanism of 3-hinge formation in a growing brain and validate it against experimental observations. In the theoretical approach, the dynamic behavior of brain tissue is examined and described with the aid of a large-strain, nonlinear constitutive model. The derived constitutive model is used in the computational model to define the material behavior. Since the theoretical approach cannot predict the evolution of complex cortical convolution after instability, non-linear finite element models are employed to study 3-hinge formation and the secondary morphological folds of the developing brain. Three-dimensional (3D) finite element analyses of a multi-layer soft tissue model, which mimics a small piece of the brain, are performed to investigate the fundamental mechanism of consistent hinge formation in cortical folding.
Results show that after a certain amount of cortical growth, the mechanical model becomes unstable and then, through the formation of creases, enters a new configuration with lower strain energy. With further growth of the model, the shallow creases that formed begin to develop convoluted patterns and then 3-hinge patterns. Simulation results for the 3-hinges in the models show good agreement with experimental observations from macaque, chimpanzee, and human brain images. These results have great potential to reveal fundamental principles of brain architecture and to produce a unified theoretical framework that convincingly explains the intrinsic relationship between cortical folding and 3-hinge formation. This fundamental understanding of the intrinsic relationship between cortical folding and 3-hinge formation could potentially shed new light on the diagnosis of many brain disorders, such as schizophrenia, autism, lissencephaly, and polymicrogyria.

Keywords: brain, cortical folding, finite element, three hinge

Procedia PDF Downloads 207
227 Photoemission Momentum Microscopy of Graphene on Ir (111)

Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense

Abstract:

Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency, and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative approach to ARPES using full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in its simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source, we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM beamline, providing linearly polarized (both p- and s-polarized) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E, kx, ky) of the full valence electronic structure in approximately 20 minutes. Band dispersion stacks were measured in the energy range from 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands at all six K and K' points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e., the LDAD) are very sensitive to the interaction of the graphene bands with the substrate bands. Second, the dark corridors were investigated in detail for both p- and s-polarized radiation.
They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference between the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of six have an oval shape, while the rest are more circular. This clearly indicates an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines, strongly reminiscent of Kikuchi lines in diffraction, is visible at energies around 22 eV. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridors.

Keywords: band structure, graphene, momentum microscopy, LDAD

Procedia PDF Downloads 311
226 Examining the Independent Effects of Early Exposure to Game Consoles and Parent-Child Activities on Psychosocial Development

Authors: Rosa S. Wong, Keith T. S. Tung, Frederick K. Ho, Winnie W. Y. Tso, King-wa Fu, Nirmala Rao, Patrick Ip

Abstract:

As technology advances, exposures in early childhood are no longer confined to stimulation from the surrounding physical environment. Children nowadays are also subject to influences from the digital world. In particular, early access to game consoles can pose risks to child development, especially when the games are not developmentally appropriate for young children. Overstimulation is possible and could impair brain development. On the other hand, recreational parent-child activities, including outdoor activities and visits to museums, require the child to interact with parents, which is beneficial for developing adaptive emotion regulation and social skills. Given the differences between these two types of exposure, this study investigated and compared the independent effects of early exposure to a game console and early play-based parent-child activities on children's long-term psychosocial outcomes. This study used data from a subset of children (n=304; 142 male and 162 female) in a longitudinal cohort study of the long-term impact of family socioeconomic status on child development. In 2012/13, we recruited a group of children at Kindergarten 3 (K3) randomly from local Hong Kong kindergartens and collected data on their duration of exposure to game consoles and recreational parent-child activities at that time. In 2018/19, we re-surveyed the parents of these children, who were by then Form 1 (F1) students (ages 11 to 13 years) in secondary schools, and asked the parents to rate their children's psychosocial problems in F1. Linear regressions were conducted to examine the associations between the early exposures and adolescent psychosocial problems, with and without adjustment for child gender and K3 family socioeconomic status. On average, K3 children spent about 42 minutes on a game console every day and had 2-3 recreational activities with their parents every week.
Univariate analyses showed that more time spent on game consoles at K3 was associated with more psychosocial difficulties in F1, particularly more externalizing problems. The effect of early exposure to game consoles on externalizing behavior remained significant (B=0.59, 95%CI: 0.15 to 1.03, p=0.009) after adjusting for recreational parent-child activities and child gender. For recreational parent-child activities at K3, the effect on overall psychosocial difficulties became non-significant after adjusting for early exposure to game consoles and child gender. However, a significant protective effect on externalizing problems (B=-0.65, 95%CI: -1.23 to -0.07, p=0.028) remained even after adjusting for the confounders. Early exposure to game consoles has a negative impact on children's psychosocial health, whereas play-based parent-child activities can foster positive psychosocial outcomes. More effort should be directed to publicizing the risks and benefits of these activities and urging parents and caregivers to replace child-alone screen time with parent-child play time in the daily routine.

Keywords: early childhood, electronic device, parenting, psychosocial wellbeing

Procedia PDF Downloads 137
225 Recycling Service Strategy by Considering Demand-Supply Interaction

Authors: Hui-Chieh Li

Abstract:

A circular economy promotes greater resource productivity and avoids pollution through greater recycling and re-use, which brings benefits for both the environment and the economy. The concept stands in contrast to a linear economy, the 'take, make, dispose' model of production. A well-designed reverse logistics service strategy can enhance users' willingness to recycle and reduce the related logistics costs as well as carbon emissions. Moreover, recycling brings manufacturers the greatest advantage when it targets components for closed-loop reuse, essentially converting materials and components from worn-out products into inputs for new ones at the right time and place. This study considers demand-supply interaction, time-dependent recycle demand, and the time-dependent surplus value of the recycled product, and constructs models of the recycling service strategy for the recyclable waste collector. A crucial factor in optimizing a recycling service strategy is consumer demand. The study considers the relationships between consumer demand for recycling and product characteristics, surplus value, and user behavior. The proposed recycling service strategy differs significantly from the conventional uniform service strategy: periods with considerable demand and large surplus product value call for frequent and short service cycles. The study explores how to determine the service cycle frequency and duration and the vehicle type for all service cycles by considering the surplus value of the recycled product, time-dependent demand, transportation economies, and demand-supply interaction. The recyclable waste collector is responsible for collecting the waste product for the manufacturer. The study also examines the impacts of the utilization rate on cost and profit for different vehicle sizes.
This study applies a binary logit model, an analytical model, and mathematical programming methods to the problem. The model attempts to maximize the total profit of the distributor during the study period, while minimizing the total logistics cost of the recycler and maximizing the recycling benefits of the manufacturer. The study relaxes the constant-demand assumption and examines how the service strategy affects consumer demand for waste recycling. The results not only help in understanding how user demand for the recycling service and the product's surplus value affect the logistics cost and the manufacturer's benefits, but also provide guidance for the government, such as award bonuses and carbon emission regulations.
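The binary logit component of the model above can be sketched as follows. The utility specification and coefficients are hypothetical placeholders for illustration, not estimates from the study:

```python
# Minimal binary logit of a user's recycling choice, of the type applied in
# the model above. The utility function and all coefficients are hypothetical
# placeholders, not values estimated by the study.
import math

def logit_prob(service_freq_per_week, surplus_value,
               b0=-2.0, b1=0.5, b2=0.01):
    """P(recycle) = 1 / (1 + exp(-V)), with V = b0 + b1*freq + b2*surplus."""
    v = b0 + b1 * service_freq_per_week + b2 * surplus_value
    return 1.0 / (1.0 + math.exp(-v))

# Demand rises with collection frequency and with product surplus value,
# capturing the demand-supply interaction in stylized form.
for freq in (1, 3, 7):
    p = logit_prob(freq, surplus_value=100)
    print(f"{freq} pickups/week -> P(recycle) = {p:.2f}")
```

Embedding such a choice probability in the cost model is what makes demand endogenous to the service strategy, rather than a fixed constant.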

Keywords: circular economy, consumer demand, product surplus value, recycle service strategy

Procedia PDF Downloads 373
224 Work-Family Conflict and Family and Job Resources among Women: The Role of Negotiation

Authors: Noa Nelson, Meitar Moshe, Dana Cohen

Abstract:

Work-family conflict (WFC) is a significant source of stress for contemporary employees, and research indicates it is especially severe for women. Conservation of resources theory argues that individuals experience stress when their resources fall short of demands and attempt to restore balance by obtaining resources. Presumably, then, to achieve work-family balance, women would need to negotiate for resources such as spousal support, employer support, and work flexibility. The current research tested the hypotheses that competent negotiation at home and at work is associated with increased family and job resources and with decreased WFC, as well as with higher work, marital, and life satisfaction. In the first study, 113 employed mothers, married or cohabiting, reported to what extent they conducted satisfactory negotiations with their spouse over the division of housework, and their actual housework load relative to their spouse's. They answered a WFC questionnaire measuring how much work interferes with family (WIF) and how much family interferes with work (FIW), and finally completed satisfaction measures. In the second study, 94 employed mothers, married or cohabiting, reported to what extent they conducted satisfactory negotiations with their boss over balancing work demands with family needs. They reported the levels of three job resources: flexibility, control, and a family-friendly organizational culture. Finally, they answered the same WFC and satisfaction measures as in study 1. Statistical analyses (t-tests, correlations, and hierarchical linear regressions) showed that in both studies, women reported higher WIF than FIW. Negotiation was associated with increased resources: support from the spouse, work flexibility and control, and a family-friendly culture; negotiation with the spouse was also associated with the satisfaction measures. However, negotiation and resources (except a family-friendly culture) were not associated with reduced conflict.
The studies demonstrate the role of negotiation in obtaining family and job resources. Causation cannot be determined, but employed mothers who enjoyed more support (both at home and at work), flexibility, and control were more likely to engage in active interactions to increase them. This finding has theoretical and practical implications, especially in view of research on female avoidance of negotiation. It is intriguing that negotiation and resources generally did not associate with reduced WFC. This finding might reflect the severity of the conflict, especially of work interfering with family, which characterizes many contemporary jobs. It might also suggest that employed mothers have high expectations of themselves and, even under supportive circumstances, experience the challenge of balancing two significant and demanding roles. The research contributes to the fields of negotiation, gender, and work-life balance. It calls for further studies to test its model in additional populations and validate the role employees play in actively negotiating for the balance they need. It also calls for further research to understand the contributions of job and family resources to reducing work-family conflict, and the circumstances under which they contribute.
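The hierarchical linear regression reported above can be sketched as follows: control variables are entered in step 1, negotiation satisfaction is added in step 2, and the R-squared increment indicates its unique contribution. Data, controls, and effect sizes here are synthetic placeholders, not the studies' measures:

```python
# Hierarchical linear regression of the kind reported above: controls in
# step 1, negotiation satisfaction added in step 2; the R^2 increment shows
# the unique contribution of negotiation. All data are synthetic.
import numpy as np

def r_squared(X, y):
    # OLS fit with an intercept, returning the coefficient of determination.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 113                              # sample size of study 1
controls = rng.normal(size=(n, 2))   # e.g. age, work hours (assumed controls)
negotiation = rng.normal(size=n)     # satisfaction with spousal negotiation
# Hypothetical outcome: family resources depend on negotiation and a control.
resources = 0.6 * negotiation + 0.3 * controls[:, 0] + rng.normal(0, 1, n)

r2_step1 = r_squared(controls, resources)
r2_step2 = r_squared(np.column_stack([controls, negotiation]), resources)
print(f"step 1 R^2 = {r2_step1:.3f}, step 2 R^2 = {r2_step2:.3f}, "
      f"delta R^2 = {r2_step2 - r2_step1:.3f}")
```

A non-trivial delta R^2 after the controls is what a claim like "negotiation is associated with increased resources" rests on in this design.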

Keywords: work-family conflict, work-life balance, negotiation, gender, job resources, family resources

Procedia PDF Downloads 196