Search results for: radial basis function networks (RBFN)

331 Transducers for Measuring Displacements of Rotating Blades in Turbomachines

Authors: Pavel Prochazka

Abstract:

The study deals with transducers for measuring vibration displacements of rotating blade tips in turbomachines. In order to prevent major accidents with extensive economic consequences, there is an urgent need for every low-pressure steam turbine stage to be equipped with a modern non-contact measuring system providing information on blade loading, damage and residual lifetime under operation. The requirement of measuring the vibration and static characteristics of steam turbine blades therefore calls for the development and operational verification of both new types of sensors and new measuring principles and methods. The task is demanding: to measure displacements of blade tips with a resolution of the order of 10 μm at speeds up to 750 m/s, humidity of 100% and temperatures up to 200 °C. While capacitive and optical transducers are used primarily in gas turbines, these transducers cannot be used in steam turbines. The reason is moisture vapor, droplets of condensing water and dirt, which disable the function of the sensors. Therefore, the most feasible approach was to focus on research into electromagnetic sensors featuring promising characteristics for the given blade materials in a steam environment. The following types of sensors have been developed and both experimentally and theoretically studied at the Institute of Thermodynamics, Academy of Sciences of the Czech Republic: eddy-current, Hall effect, inductive and magnetoresistive. Eddy-current transducers demand a small working distance of 1 to 2 mm and change their properties in the harsh environment of steam turbines. Hall effect sensors have relatively low sensitivity, high values of offset, drift, and especially noise. Induction sensors do not require any supply current and have a simple construction. The magnitude of their output voltage depends on the velocity of the measured body and concurrently on the varying magnetic induction, so they cannot be used statically. Magnetoresistive sensors are formed by magnetoresistors arranged into a Wheatstone bridge. Supplying the sensor from a current source provides better linearity. The MR sensors can be used permanently at temperatures up to 200 °C at lower values of the supply current of about 1 mA. The frequency range of 0 to 300 kHz is an order of magnitude higher than that of the Hall effect and induction sensors. The frequency band starts at zero frequency, which is very important because the sensors can be calibrated statically. The MR sensors feature high sensitivity and low noise, and the symmetry of the bridge arrangement leads to a high common mode rejection ratio and suppression of disturbances, which is important, especially in industrial applications. Magnetoresistive transducers thus provide a range of excellent properties, making them the preferred choice for displacement measurements of rotating blades in turbomachines.

Keywords: turbines, blade vibration, blade tip timing, non-contact sensors, magnetoresistive sensors

330 Negative Changes in Sexual Behavior of Pregnant Women

Authors: Glauberto S. Quirino, Emanuelly V. Pereira, Amana S. Figueiredo, Antonia T. F. Santos, Paulo R. A. Firmino, Denise F. F. Barbosa, Caroline B. Q. Aquino, Eveliny S. Martins, Cinthia G. P. Calou, Ana K. B. Pinheiro

Abstract:

Introduction: During pregnancy there are adjustments in the physical, emotional, existential and sexual areas, which may contribute to changes in sexual behavior. The objective was to analyze the sexual behavior of pregnant women. Methods: Quantitative, exploratory-descriptive study, approved by the Ethics and Research Committee of the Regional University of Cariri. Data were collected using the Sexuality Questionnaire in Gestation and the Sexual Quotient - Female Version. The study was carried out in public institutions in the urban and rural areas of three municipalities of the Metropolitan Region of Cariri, south of Ceará, Brazil, from February to September 2016. Proportional stratified sampling by convenience was used. A total of 815 pregnant women who were literate and aged 20 years or over were approached. Of these, 461 pregnant women were excluded because of high-risk pregnancy, adolescence, saturation of the stratum, incomplete filling of the instrument, mental or physical handicap, or absence of a sexual partner, and the final sample was 354 pregnant women. The data were grouped, organized and analyzed in the statistical program R Studio (version 386 3.2.4). Descriptive frequency statistics and non-parametric tests were used to analyze the variables, and the results were shown in graphs and tables. Results: The women had a minimum age of 20 years, a maximum of 35 and an average of 26.9 years; they were predominantly urban residents, with a monthly income of up to one minimum wage (US$ 275.00), high school education, Catholic, with a fixed partner, heterosexual, multiparous, with multiple sexual partners throughout life and with the beginning of sexual life in adolescence (median age 17 years). There was a reduction in sexual practices (67%), and when they were performed, they were more frequent in the first trimester (79.7%) and less frequent in the third trimester (30.5%). Preliminary sexual practices did not change and were more frequent in the second trimester (46.6%). Throughout the gestational trimesters, the partner was reported as mainly responsible for the sexual initiative. The women practiced vaginal sex (97.7%), which provided greater pleasure (42.8%) compared to non-penetrative sex (53.9%) (oral sex and masturbation). There was also a reduction in the sexual disposition of the pregnant women (90.7%) and of the partner (72.9%), mainly in the first trimester (78.8%), as well as in sexual positions. Sexual performance ranged from regular to good (49.7%). Level of schooling, marital status, sexual orientation of the pregnant woman and the partner, sexual practices and positions, preliminaries, frequency of sexual practices and the importance attributed to them were variables that negatively influenced sexual performance and satisfaction. It is concluded that pregnancy negatively changes the sexual behavior of women; further investigation including the partner is suggested, in order to clarify the influence of these variables on sexual function and to support intervention strategies, with a view to comprehensive sexual and reproductive health.

Keywords: obstetric nursing, pregnant women, sexual behavior, women's health

329 Chatbots vs. Websites: A Comparative Analysis Measuring User Experience and Emotions in Mobile Commerce

Authors: Stephan Boehm, Julia Engel, Judith Eisser

Abstract:

During the last decade, communication on the Internet has transformed from a broadcast to a conversational model by supporting more interactive features, enabling user-generated content and introducing social media networks. Another important trend with a significant impact on electronic commerce is a massive usage shift from desktop to mobile devices. However, a presentation of product- or service-related information accumulated on websites, micro pages or portals often remains the pivot and focal point of a customer journey. A more recent change of user behavior – especially in younger user groups and in Asia – is going along with the increasing adoption of messaging applications supporting almost real-time but asynchronous communication on mobile devices. Mobile apps of this type can not only provide an alternative to traditional one-to-one communication on mobile devices like voice calls or the short messaging service. Moreover, they can be used in mobile commerce as a new marketing and sales channel, e.g., for product promotions and direct marketing activities. This requires a new way of customer interaction compared to traditional mobile commerce activities and functionalities provided on the basis of mobile websites. One option better aligned with the customer interaction in messaging apps is so-called chatbots. Chatbots are conversational programs or dialog systems simulating a text- or voice-based human interaction. They can be introduced in mobile messaging and social media apps by using rule- or artificial intelligence-based implementations. In this context, a comparative analysis is conducted to examine the impact of using traditional websites or chatbots for promoting a product in an impulse purchase situation. The aim of this study is to measure the impact on the customers’ user experience and emotions. The study is based on a random sample of about 60 smartphone users in the group of 20- to 30-year-olds. Participants are randomly assigned to two groups and participate in a traditional website-based or innovative chatbot-based mobile commerce scenario. The chatbot-based scenario is implemented by using a Wizard-of-Oz experimental approach for reasons of simplicity and to allow for more flexibility when simulating simple rule-based and more advanced artificial intelligence-based chatbot setups. A specific set of metrics is defined to measure and compare the user experience in both scenarios. It can be assumed that users get more emotionally involved when interacting with a system simulating human communication behavior instead of browsing a mobile commerce website. For this reason, innovative face-tracking and analysis technology is used to derive feedback on the emotional status of the study participants while interacting with the website or the chatbot. This study is a work in progress. The results will provide first insights into the effects of chatbot usage on user experiences and emotions in mobile commerce environments. Based on the study findings, basic requirements for a user-centered design and implementation of chatbot solutions for mobile commerce can be derived. Moreover, first indications of situations where chatbots might be favorable in comparison to the usage of traditional website-based mobile commerce can be identified.

Keywords: chatbots, emotions, mobile commerce, user experience, Wizard-of-Oz prototyping

328 Phonological Encoding and Working Memory in Kannada Speaking Adults Who Stutter

Authors: Nirmal Sugathan, Santosh Maruthy

Abstract:

Background: A considerable number of studies have evidenced that phonological encoding (PE) and working memory (WM) skills operate differently in adults who stutter (AWS). In order to tap these skills, several paradigms have been employed, such as phonological priming, phoneme monitoring, and nonword repetition tasks. This study, however, utilizes a word jumble paradigm to assess both PE and WM using different modalities, and this may give a better understanding of phonological processing deficits in AWS. Aim: The present study investigated PE and WM abilities in conjunction with lexical access in AWS using jumbled words. The study also aimed at investigating the effect of increased cognitive load on phonological processing in AWS by comparing the speech reaction time (SRT) and accuracy scores across various syllable lengths. Method: Participants were 11 AWS (age range = 19-26) and 11 adults who do not stutter (AWNS) (age range = 19-26) matched for age, gender and handedness. Stimuli: Ninety 3-, 4-, and 5-syllable jumbled words (JWs) (n = 30 per syllable length category) constructed from Kannada words served as stimuli for the jumbled word paradigm. In order to generate the JWs, the syllables in the real words were randomly transposed. Procedures: To assess PE, the JWs were presented visually using DMDX software, and for the WM task, JWs were presented auditorily through headphones. The participants were asked to silently manipulate the jumbled words to form a real Kannada word and to respond verbally once the word was formed. The responses for both tasks were audio recorded using the record function in DMDX software, and the recorded responses were analyzed using PRAAT software to calculate the SRT. Results: SRT: Mann-Whitney test results demonstrated that AWS performed significantly slower on both tasks (p < 0.001), as indicated by increased SRT. Also, AWS presented with increased SRT on both tasks in all syllable length conditions (p < 0.001). Effect of syllable length: Wilcoxon signed-rank tests revealed that, on the task assessing PE, the SRT of 4-syllable JWs was significantly higher in both AWS (Z = -2.93, p = .003) and AWNS (Z = -2.41, p = .003) when compared to 3-syllable words. However, the findings for 4- and 5-syllable words were not significant. Task Accuracy: The accuracy scores were calculated for the three syllable length conditions for both PE and WM tasks and were compared across the groups using the Mann-Whitney test. The results indicated that the accuracy scores of AWS were significantly below those of AWNS in all three syllable conditions for both tasks (p < 0.001). Conclusion: The above findings suggest that PE and WM skills are compromised in AWS, as indicated by increased SRT. Also, AWS were progressively less accurate in descrambling JWs of increasing syllable length, and this may be interpreted as indicating that, rather than existing as a uniform deficiency, PE and WM deficits emerge when the cognitive load is increased. AWNS exhibited increased SRT and increased accuracy for JWs of longer syllable length, whereas AWS did not benefit from increased reaction time; thus, AWS had to compromise on both SRT and accuracy while solving JWs of longer syllable length.
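
A minimal sketch of the two non-parametric comparisons described above (between-group Mann-Whitney tests and within-group Wilcoxon signed-rank tests on SRT), written in Python with SciPy; the SRT values are hypothetical placeholders rather than the study's data, which were extracted in PRAAT.

```python
# Minimal sketch (not the authors' analysis code) of the non-parametric tests
# described above, using SciPy. The SRT arrays below are hypothetical
# placeholders standing in for per-participant median values.
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical median SRTs (ms) per participant for the visual PE task
srt_aws_3syll  = np.array([912, 1004, 876, 995, 1120, 940, 1010, 968, 1055, 930, 980])
srt_awns_3syll = np.array([701, 688, 745, 730, 699, 760, 712, 705, 690, 740, 725])
srt_aws_4syll  = np.array([1100, 1180, 1030, 1155, 1260, 1090, 1170, 1110, 1205, 1080, 1140])

# Between-group comparison (AWS vs. AWNS), as in the reported Mann-Whitney tests
u_stat, p_between = mannwhitneyu(srt_aws_3syll, srt_awns_3syll, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_between:.4f}")

# Within-group syllable-length effect (3- vs. 4-syllable), as in the Wilcoxon tests
w_stat, p_within = wilcoxon(srt_aws_3syll, srt_aws_4syll)
print(f"Wilcoxon W = {w_stat:.1f}, p = {p_within:.4f}")
```

The same pattern applies to the accuracy scores: one Mann-Whitney comparison per syllable-length condition and task.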

Keywords: adults who stutter, phonological ability, working memory, encoding, jumbled words

327 Numerical Analysis of Mandible Fracture Stabilization System

Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski

Abstract:

The aim of the presented work is to recognize the impact of the mini-plate application approach on the stress and displacement within the stabilization devices and surrounding bones. The mini-plate osteosynthesis technique is widely used by craniofacial surgeons as an improved replacement for the wire connection approach. Many different types of metal plates and screws are used for the physical connection of fractured bones. The investigation below is based on a clinical observation of a patient hospitalized with a mini-plate stabilization system. The analysis was conducted on a solid mandible geometry, which was modeled on the basis of the computed tomography scan of the hospitalized patient. In order to achieve the most realistic behavior of the connected system, cortical and cancellous bone layers were assumed. The temporomandibular joint was simplified to an elastic element to allow physiological movement of the loaded bone. The muscles of the mastication system were reduced to three pairs, modeled as shell structures. The finite element grid was created in the ANSYS software, where hexahedral and tetrahedral variants of the SOLID185 element were used. A set of nonlinear contact conditions was applied to the common surfaces of the connecting devices and bone. The properties of a particular contact pair depend on the screw - mini-plate connection type and possible gaps between the fractured bone around the osteosynthesis region. Some of the investigated cases contain prestress introduced to the mini-plate during application, which corresponds to the initial bending of the connecting device to fit the retromolar fossa region. The assumed bone fracture occurs within the mandible angle zone. Due to the significant deformation of the connecting plate in some of the assembly cases, an elastic-plastic model of the titanium alloy was assumed. The bone tissues were described by an orthotropic material model. The loading was a gauge force of magnitude 100 N applied at three different locations. The conducted analysis shows a significant impact of the mini-plate application methodology on the stress distribution within the mini-plate. The prestress effect introduces additional loading, which leads to the titanium alloy yield limit being locally exceeded. Stress in the surrounding bone increases rapidly around the screw application region, exceeding the assumed bone yield limit, which indicates local bone destruction. The approach with the doubled mini-plate shows increased stress within the connector due to the overly rigid connection, where the main load path leads through the mini-plates instead of through the plates and the connected bones. Clinical observations confirm more frequent plate failure in stiffer connections. Some of these failures could be an effect of decreased low-cycle fatigue capability caused by overloading. The executed analysis proves that the mini-plate system provides sufficient support for mandible fracture treatment; however, many applicable solutions shift the entire system towards the allowable material limits. The results show that connector application with initial loading needs to be carefully established due to the small tolerances of material capability. Comparison with the clinical observations allows the entire connection to be optimized to prevent future incidents.

Keywords: mandible fracture, mini-plate connection, numerical analysis, osteosynthesis

326 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach

Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa

Abstract:

Drug discovery is shifting from small-molecule-based drugs targeting a local active site to middle molecules (MM) targeting large, flat, and groove-shaped binding sites, for example, protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as “difficult to drug” with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with various highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for “undruggable” intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, and so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advancement in molecular dynamics simulations helps to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method could be used for such a purpose, as it performs ab initio QM calculations by dividing the system into fragments. The present work is aimed at studying the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered in this study. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Moller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues go through the medium on a nanosecond scale. This correlates well with the existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of these analogues so that they can permeate the lipophilic cell membrane. Conclusively, the cell membrane permeability of various middle molecules with potent bioactivities is efficiently studied using molecular dynamics simulations. Insight into this behavior is thoroughly investigated using FMO-QM calculations. Results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for amicable membrane permeation. These results are interesting and are a nice example of how this theoretical calculation approach could be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.
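
As a minimal illustration of the permeation analysis described above (not the authors' workflow), the sketch below detects membrane-crossing events from a permeant's z-position relative to the bilayer centre over time; the trajectory and half-thickness values are synthetic placeholders, whereas in practice the time series would be extracted from the NAMD/Gromacs trajectories with standard analysis tools.

```python
# Minimal sketch (stated assumptions, not the authors' workflow): detect membrane-
# crossing events from the permeant's z-position relative to the bilayer centre.
# The trajectory array below is a synthetic placeholder.
import numpy as np

half_thickness_nm = 2.0                      # assumed bilayer half-thickness
t_ns = np.linspace(0.0, 100.0, 5001)         # 100 ns of frames (placeholder)
z_nm = 3.5 - 0.07 * t_ns + 0.3 * np.random.default_rng(0).standard_normal(t_ns.size)

# Label each frame: +1 above the membrane, 0 inside, -1 below
region = np.where(z_nm > half_thickness_nm, 1, np.where(z_nm < -half_thickness_nm, -1, 0))

# A crossing is a transition from the +1 region to the -1 region (or vice versa)
outside = region[region != 0]
crossings = np.count_nonzero(np.diff(outside) != 0)

first_entry = t_ns[np.argmax(region == 0)]   # first frame inside the bilayer
print(f"crossing events: {crossings}, first entry into the bilayer at ~{first_entry:.1f} ns")
```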

Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation

325 Interdependence of Vocational Skills and Employability Skills: Example of an Industrial Training Centre in Central India

Authors: Mahesh Vishwakarma, Sadhana Vishwakarma

Abstract:

Vocational education includes all kinds of education which can help students to acquire skills related to a certain profession, art, or activity so that they are able to exercise that profession, art or activity after acquiring such qualification. However, in the global economy of the modern world, job seekers are expected to have certain soft skills over and above the technical knowledge and skills acquired in their areas of expertise. These soft skills include, but are not limited to, interpersonal communication, understanding, personal attributes, problem-solving, working in a team, quick adaptability to the workplace environment, and others. Not only the hands-on, job-related skills and competencies are now being sought by employers, but also a complex of attitudinal dispositions and affective traits. This study was performed to identify the employability skills of technical students from an Industrial Training Centre (ITC) in central India. It also aimed to convey a message to the students currently enrolled that, for them to remain relevant in the job market, they would need to constantly adapt to changes and evolving requirements in the work environment, including the use of updated technologies. Five hypotheses were formulated and tested on the employability skills of students as a function of gender, trade, work experience, personal attributes, and IT skills. Data were gathered with the help of the centre's training officers, who approached 200 recently graduated students from the centre and administered the instrument to them. All 200 respondents returned the completed instrument. The instrument used for the study consisted of two sections: demographic details and employability skills. To measure the employability skills of the trainees, the instrument was developed by referring to several instruments developed by past researchers for similar studies. The first section of the instrument, on demographic details, recorded age, gender, trade, year of passing, interviews faced, and employment status of the respondents. The second section of the instrument, on employability skills, was categorized into seven specific skills: basic vocational skills; personal attributes; imagination skills; optimal management of resources; information-technology skills; interpersonal skills; adapting to new technologies. The reliability and validity of the instrument were checked. The findings revealed valuable information on the relationship and interdependence of vocational education and the employability skills of students in the central Indian scenario. They also provide guidance on supplementing the existing vocational education programs with a few soft skills and competencies so as to develop a superior workforce much better equipped to face the job market. The findings of the study can be used as an example by the management of government and private industrial training centres operating in other parts of the Asian region. Future research can be undertaken on a greater population base from different geographical regions and backgrounds for an enhanced outcome.

Keywords: employability skills, vocational education, industrial training centers, students

324 Innovation Eco-Systems and Cities: Sustainable Innovation and Urban Form

Authors: Claudia Trillo

Abstract:

Regional innovation eco-systems are composed of a variety of interconnected urban innovation eco-systems, mutually reinforcing each other and making the whole territorial system successful. Combining principles drawn from the new economic growth theory and from the socio-constructivist approach to economic growth with the new geography of innovation emerging from the networked nature of innovation districts, this paper explores the spatial configuration of urban innovation districts, with the aim of unveiling replicable spatial patterns and transferable portfolios of urban policies. While some authors suggest that cities should be considered ideal natural clusters, supporting cross-fertilization and innovation thanks to the physical setting they provide for the construction of collective knowledge, a considerable distance still persists between regional development strategies and urban policies. Moreover, while public and private policies supporting entrepreneurship normally consider innovation as the cornerstone of any action aimed at uplifting the competitiveness and economic success of a certain area, a growing body of literature suggests that innovation is non-neutral; hence, it should be constantly assessed against equity and social inclusion. This paper draws from a robust qualitative empirical dataset gathered through four years of research conducted in Boston to provide readers with an evidence-based set of recommendations drawn from the lessons learned through the investigation of the chosen innovation districts in the Boston area. The evaluative framework used for assessing the overall performance of the chosen case studies stems from the Habitat III Sustainable Development Goals rationale. The concept of inclusive growth has been considered essential to assess the social innovation domain in each of the chosen cases. The key success factors for the development of the Boston innovation ecosystem can be generalized as follows: 1) a quadruple helix model embedded in the physical structure of the two cities (Boston and Cambridge), in which anchor Higher Education (HE) institutions continuously nurture the entrepreneurial environment; 2) an entrepreneurial approach emerging from the local governments, eliciting risk-taking and bottom-up civic participation in tackling key issues in the city; 3) a networking structure of intermediary actors supporting entrepreneurial collaboration, cross-fertilization and co-creation, which collaborate at multiple scales, thus enabling positive spillovers from the stronger to the weaker contexts; 4) awareness of the socio-economic value of the built environment as an enabler of cognitive networks allowing activation of the collective intelligence; 5) creation of civic-led spaces enabling grassroots collaboration and cooperation. Evidence shows that there is no single magic recipe for the successful implementation of place-based and social innovation-driven strategies. On the contrary, the variety of place-grounded combinations of micro and macro initiatives, embedded in the social and spatial fine grain of places and encompassing a diversity of actors, can create the conditions enabling places to thrive and local economic activities to grow in a sustainable way.

Keywords: innovation-driven sustainable eco-systems, place-based sustainable urban development, sustainable innovation districts, social innovation, urban policies

323 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic System

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
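
A minimal sketch of the surrogate-plus-genetic-algorithm loop described above, assuming the four design variables are the layer thicknesses of the SiC/W/SiO2/W stack: a random forest is trained as a surrogate for a selectivity figure of merit, and a simple genetic algorithm then searches the thickness space on that surrogate. The training data and figure of merit below are synthetic placeholders; in the actual workflow they would come from electromagnetic simulations of the emitter spectrum.

```python
# Sketch under stated assumptions: random-forest surrogate + genetic algorithm
# for optimizing the thicknesses (nm) of a 4-layer SiC/W/SiO2/W emitter stack.
# The "figure of merit" below is a synthetic stand-in for a simulated selectivity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
bounds = np.array([[20, 300]] * 4)                    # thickness bounds per layer (nm), assumed

def figure_of_merit(t):
    """Placeholder for the simulated spectral selectivity of a thickness vector t."""
    target = np.array([120, 40, 80, 60])
    return -np.sum((t - target) ** 2)                 # peaks at an arbitrary 'optimal' stack

# 1) Build a training set (in practice: thicknesses -> simulated emitter performance)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(500, 4))
y = np.array([figure_of_merit(x) for x in X])
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 2) Genetic algorithm over the surrogate: selection, crossover, mutation
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 4))
for generation in range(60):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]          # keep the best half
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, size=2)]
        mask = rng.random(4) < 0.5                    # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0, 5, size=4)             # Gaussian mutation (nm)
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, children])

best = pop[np.argmax(surrogate.predict(pop))]
print("surrogate-optimal thicknesses (nm):", np.round(best, 1))
```

The surrogate keeps the expensive spectral simulations out of the genetic algorithm's inner loop, which is where the machine-learning approach promises to streamline the emitter design.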

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

322 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories

Authors: Berna Çalışkan

Abstract:

Integrated water management is a collaborative approach to planning that brings together institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas, and stream channel structure. It encourages collaboration where it will be beneficial and links water planning with other planning processes that contribute to improving sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternate routes within the corridor, as well as works such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on such elements as environmental considerations, the class of the proposed highway, the state of land use development, and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from the digital elevation model (DEM). A case study was carried out using the Arc Hydro extension within ArcGIS in the study area. It provides the means for processing and presenting a spatially referenced stream model. The study area's flow routing, stream levels, segmentation, and drainage point processing can be obtained using the DEM as the input surface raster. These processes integrate the fields of hydrologic and engineering research and environmental modeling in a multi-disciplinary program designed to provide decision makers with a science-based understanding of, and innovative tools for, the development of an interdisciplinary and multi-level approach. This research helps to manage transport project planning and construction phases by analyzing surficial water flow, high-level streams and wetland sites for transportation infrastructure planning, implementation, maintenance, monitoring and long-term evaluation, to better face the challenges and solutions associated with effective management and to deal with low, medium and high levels of impact. Transport projects are frequently perceived as critical to the 'success' of major urban, metropolitan, regional and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depend on having sufficient water resources management. The results of our research provide a workflow to build a stream network and to classify a suitability map according to stream levels. Transportation projects can be established, developed and delivered more effectively by selecting the best locations to reduce construction and maintenance costs and by adopting cost-effective solutions for drainage, landslide and flood control. According to the model findings, field study should be done to fill gaps and check for errors. In future research, this study can be extended to determining and preventing possible damage to sensitive areas and vulnerable zones, supported by field investigations.
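
A minimal sketch of the terrain-preprocessing steps the Arc Hydro workflow performs (flow direction, flow accumulation, stream definition), reproduced here in plain Python on a tiny synthetic DEM purely for illustration; the DEM values and the accumulation threshold are placeholders, not project data.

```python
# Minimal sketch (synthetic DEM, not the Arc Hydro workflow itself) of the core
# terrain-preprocessing steps referred to above: D8 flow direction, flow
# accumulation, and stream definition by an accumulation threshold.
import numpy as np

dem = np.array([[9, 8, 7, 6],
                [8, 7, 5, 4],
                [7, 6, 4, 2],
                [6, 5, 3, 1]], dtype=float)          # tiny placeholder DEM

rows, cols = dem.shape
neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_downstream(r, c):
    """Return the steepest-descent neighbor of cell (r, c), or None for a sink/outlet."""
    best, best_drop = None, 0.0
    for dr, dc in neighbors:
        rr, cc = r + dr, c + dc
        if 0 <= rr < rows and 0 <= cc < cols:
            drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
            if drop > best_drop:
                best, best_drop = (rr, cc), drop
    return best

# Flow accumulation: process cells from highest to lowest and pass area downstream
acc = np.ones_like(dem)                               # each cell contributes itself
for idx in np.argsort(dem, axis=None)[::-1]:
    r, c = divmod(idx, cols)
    ds = d8_downstream(r, c)
    if ds is not None:
        acc[ds] += acc[r, c]

streams = acc >= 4                                    # assumed accumulation threshold
print("flow accumulation:\n", acc)
print("stream cells:\n", streams.astype(int))
```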

Keywords: water resources management, hydro tool, water protection, transportation

321 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures

Authors: Haytam Kasem

Abstract:

The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, during the last decade a new emerging field of adhesion science has been developed, essentially inspired by some animals and insects which, during their natural evolution, have developed fantastic biological attachment systems allowing them to adhere to and run on walls and ceilings of uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts made to apply bio-inspired patterned adhesive surfaces to the biomedical field, they are still in the early stages compared with their conventional uses in the other industries mentioned above. In fact, there are some critical issues that still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms. For example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, but also the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes of bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material. The different counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra and others, could not be utilized to explain the topography-related variation in friction force. This led us to the development of an integrated roughness parameter obtained by combining different parameters, namely the mean asperity radius of curvature (R), the asperity density (η), the deviation of asperity heights (σ) and the mean asperity angle (SDQ). This new integrated parameter is capable of explaining the variation in the results of the friction measurements. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the roughness parameters of the counter-face and the applied normal load as well.
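
A minimal sketch (not the authors' analysis code) of how the four constituent descriptors named above can be estimated from a measured profile; the profile is synthetic, and since the abstract does not give the formula by which R, η, σ and SDQ are combined into the integrated parameter, only the individual descriptors are computed.

```python
# Sketch under stated assumptions: estimate the four roughness descriptors named
# in the abstract (mean asperity radius of curvature R, asperity density eta,
# deviation of asperity heights sigma, mean asperity angle) from a 1D profile.
# The profile below is synthetic; the combination into the integrated parameter
# is not given in the abstract and is therefore not reproduced here.
import numpy as np

dx = 1e-6                                              # sampling step (m), assumed
x = np.arange(0, 2e-3, dx)
rng = np.random.default_rng(2)
z = 2e-6 * np.sin(2 * np.pi * x / 1e-4) + 0.5e-6 * rng.standard_normal(x.size)

# Asperities = local maxima of the profile
peaks = np.where((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:]))[0] + 1

eta = peaks.size / (x[-1] - x[0])                      # asperity density (1/m)
sigma = np.std(z[peaks])                               # deviation of asperity heights (m)

# Radius of curvature at each peak from the discrete second derivative: R = 1/|z''|
z2 = (z[peaks - 1] - 2 * z[peaks] + z[peaks + 1]) / dx**2
R_mean = np.mean(1.0 / np.abs(z2))

slope = np.gradient(z, dx)
mean_angle_deg = np.degrees(np.arctan(np.mean(np.abs(slope))))   # mean surface slope angle

print(f"eta = {eta:.3e} 1/m, sigma = {sigma:.3e} m, "
      f"R = {R_mean:.3e} m, mean angle = {mean_angle_deg:.2f} deg")
```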

Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model

320 Biophysical Analysis of the Interaction of Polymeric Nanoparticles with Biomimetic Models of the Lung Surfactant

Authors: Weiam Daear, Patrick Lai, Elmar Prenner

Abstract:

The human body offers many avenues that could be used for drug delivery. The pulmonary route, which delivers through the lungs, presents many advantages that have sparked interest in the field. These advantages include: 1) direct access to the lungs and the large surface area they provide, and 2) close proximity to the blood circulation. The air-blood barrier of the alveoli is about 500 nm thick. The air-blood barrier consists of a monolayer of lipids and a few proteins, called the lung surfactant, and cells. This monolayer consists of ~90% lipids and ~10% proteins, which are produced by the alveolar epithelial cells. The two major lipid classes are phosphatidylcholines (PC) and phosphatidylglycerols (PG) of various saturations and chain lengths, representing 80% of the total lipid component. The major role of the lung surfactant monolayer is to reduce the surface tension experienced during breathing cycles in order to prevent lung collapse. In terms of the pulmonary drug delivery route, drugs pass through various parts of the respiratory system before reaching the alveoli. It is at this location that the lung surfactant functions as the air-blood barrier for drugs. As the field of nanomedicine advances, the use of nanoparticles (NPs) as drug delivery vehicles is becoming very important. This is due to the advantages NPs provide with their large surface area and potential for specific targeting. Therefore, studying the interaction of NPs with the lung surfactant and whether they affect its stability becomes very essential. The aim of this research is to develop a biomimetic model of the human lung surfactant, followed by a biophysical analysis of the interaction of polymeric NPs. This biomimetic model will function as a fast initial mode of testing whether NPs affect the stability of the human lung surfactant. The model developed thus far is an 8-component lipid system that contains the major PC and PG lipids. Recently, custom-made 16:0/16:1 PC and PG lipids were added to the model system. In the human lung surfactant, these lipids constitute 16% of the total lipid component. To the authors' knowledge, there is not much monolayer data on the biophysical analysis of the 16:0/16:1 lipids; therefore, more analysis will be discussed here. Biophysical techniques such as the Langmuir trough are used for stability measurements, which monitor changes in a monolayer's surface pressure upon NP interaction. Furthermore, Brewster angle microscopy (BAM) is employed to visualize changes in the lateral domain organization. Results show preferential interactions of NPs with different lipid groups that are also dependent on the monolayer fluidity. Furthermore, results show that the film stability upon compression is unaffected, but there are significant changes in the lateral domain organization of the lung surfactant upon NP addition. This research is significant in the field of pulmonary drug delivery. It has been shown that NPs within a certain size range are safe for the pulmonary route, but little is known about the mode of interaction of those polymeric NPs. Moreover, this work will provide additional information about the nanotoxicology of the NPs tested.
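
As a brief illustration of the kind of stability analysis a Langmuir trough isotherm supports (not code from this study), the sketch below computes the in-plane compressibility modulus from a surface pressure-area isotherm; the isotherm values are synthetic placeholders.

```python
# Sketch under stated assumptions: compute the compressibility modulus
# Cs^-1 = -A * (d(pi)/dA) from a Langmuir-trough surface pressure-area isotherm,
# a standard way to quantify monolayer stability/rigidity. Data are placeholders.
import numpy as np

area_A2 = np.linspace(120, 40, 200)                  # area per lipid (Å^2/molecule), compressing
pressure_mN = 70.0 / (1.0 + np.exp((area_A2 - 75.0) / 8.0))   # synthetic isotherm (mN/m)

dpi_dA = np.gradient(pressure_mN, area_A2)           # d(pi)/dA along the isotherm
cs_inv = -area_A2 * dpi_dA                           # compressibility modulus (mN/m)

i_max = np.argmax(cs_inv)
print(f"max Cs^-1 = {cs_inv[i_max]:.1f} mN/m at {area_A2[i_max]:.1f} Å^2/molecule")
```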

Keywords: Brewster angle microscopy, lipids, lung surfactant, nanoparticles

319 Influence of Kneading Conditions on the Textural Properties of Alumina Catalysts Supports for Hydrotreating

Authors: Lucie Speyer, Vincent Lecocq, Séverine Humbert, Antoine Hugon

Abstract:

Mesoporous alumina is commonly used as a catalyst support for the hydrotreating of heavy petroleum cuts. The fabrication process usually involves the synthesis of the boehmite AlOOH precursor, a kneading-extrusion step, and a calcination in order to obtain the final alumina extrudates. Alumina is described as a complex porous medium, generally agglomerates constituted of aggregated nanocrystallites. Its porous texture directly influences the active phase deposition, mass transfer, and the catalytic properties. It is therefore clear that each step of the fabrication of the supports plays a role in the building of their porous network and has to be well understood to optimize the process. The synthesis of boehmite by precipitation of aluminum salts has been extensively studied in the literature, and various parameters, such as temperature or pH, are known to influence the size and shape of the crystallites and the specific surface area of the support. The calcination step, through the topotactic transition from boehmite to alumina, determines the final properties of the support and can tune the surface area, pore volume and pore diameters relative to those of boehmite. The kneading-extrusion step, however, has been the subject of very few studies. It generally consists of two stages: an acid and then a basic kneading, in which the boehmite powder is introduced into a mixer and successively added with an acid and a base solution to form an extrudable paste. During the acid kneading, the induced positive charges on the hydroxyl surface groups of boehmite create an electrostatic repulsion which tends to separate the aggregates and even, depending on the conditions, the crystallites. The basic kneading, by reducing the surface charges, leads to a flocculation phenomenon and can control the reforming of the overall structure. The separation and reassembling of the particles constituting the boehmite paste have a quite obvious influence on the textural properties of the material. In this work, we focus on the influence of the kneading step on alumina catalyst supports. Starting from an industrial boehmite, extrudates are prepared using various kneading conditions. The samples are studied by nitrogen physisorption in order to analyze the evolution of the textural properties, and by synchrotron small-angle X-ray scattering (SAXS), a more original method which brings information about the agglomeration and aggregation of the samples. The coupling of physisorption and SAXS enables a precise description of the samples, as well as accurate monitoring of their evolution as a function of the kneading conditions. These are found to have a strong influence on the pore volume and pore size distribution of the supports. A mechanism for the evolution of the texture during the kneading step is proposed and could be attractive for optimizing the texture of the supports and hence their catalytic performance.
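
A minimal sketch of one standard SAXS analysis step relevant to monitoring aggregation (not taken from this work): a Guinier fit of ln I(q) against q² in the low-q region to extract a radius of gyration; the scattering curve is a synthetic placeholder.

```python
# Sketch under stated assumptions: Guinier analysis of a SAXS curve,
# ln I(q) = ln I(0) - (Rg^2 / 3) * q^2, fitted in the low-q region (q*Rg < ~1.3).
# The scattering curve below is synthetic; real data would come from the beamline.
import numpy as np

rg_true_nm = 4.0
q = np.linspace(0.05, 2.0, 300)                       # scattering vector (1/nm)
intensity = 1000.0 * np.exp(-(q * rg_true_nm) ** 2 / 3.0) + 1.0   # placeholder curve

mask = q * rg_true_nm < 1.3                           # Guinier validity range (Rg assumed known here)
slope, intercept = np.polyfit(q[mask] ** 2, np.log(intensity[mask]), 1)
rg_fit = np.sqrt(-3.0 * slope)

print(f"fitted Rg = {rg_fit:.2f} nm, I(0) = {np.exp(intercept):.1f}")
```

In practice the fitting range would be chosen iteratively, since Rg is not known in advance; the fixed mask above is only for the synthetic example.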

Keywords: alumina catalyst support, kneading, nitrogen physisorption, small-angle X-ray scattering

318 A Comparison of Biosorption of Radionuclides Tl-201 on Different Biosorbents and Their Empirical Modelling

Authors: Sinan Yapici, Hayrettin Eroglu

Abstract:

The discharge of aqueous radionuclide wastes used for the diagnosis of diseases and the treatment of patients in nuclear medicine can cause fatal health problems when the radionuclides and their stable daughter components mix with underground water. Tl-201, one of the radionuclides commonly used in nuclear medicine, is a toxic substance and is converted to its stable daughter component Hg-201, which is also a poisonous heavy metal: Tl-201 → Hg-201 + gamma ray [135-167 keV (12%)] + X-ray [69-83 keV (88%)]; t1/2 = 73.1 h. The purpose of the present work was to remove Tl-201 radionuclides from aqueous solution by biosorption on solid bio-wastes of the food and cosmetic industry as biosorbents - prina from an olive oil plant, rose residue from a rose oil plant and tea residue from a tea plant - and to make a comparison of the biosorption efficiencies. The effects of the biosorption temperature, initial pH of the aqueous solution, biosorbent dose, particle size and stirring speed on the biosorption yield were investigated in a batch process. It was observed that the biosorption is a rapid process, with an equilibrium time of less than 10 minutes for all the biosorbents. The efficiencies were found to be close to each other, with measured maximum efficiencies of 93.3 percent for rose residue, 94.1 for prina and 98.4 for tea residue. In a temperature range of 283 to 313 K, the adsorption decreased with increasing temperature in an almost similar way for all biosorbents. In a pH range of 2-10, increasing pH enhanced the biosorption efficiency up to pH = 7, after which the efficiency remained constant in a similar path for all the biosorbents. Increasing the stirring speed from 360 to 720 rpm slightly enhanced the biosorption efficiency at almost the same ratio for all biosorbents. Increasing particle size decreased the efficiency for all biosorbents; the most negatively affected biosorbent was prina, whose biosorption efficiency dropped from about 84 percent to 40 as the nominal particle size increased from 0.181 mm to 1.05 mm, while the least affected one, tea residue, went down from about 97 percent to 87.5. The biosorption efficiencies of all the biosorbents increased with increasing biosorbent dose in the range of 1.5 to 15.0 g/L in a similar manner. The fit of the experimental results to the adsorption isotherms proved that the biosorption process for all the biosorbents is best represented by the Freundlich model. The kinetic analysis showed that all the processes fit very well to a pseudo-second-order rate model. The thermodynamic calculations gave ΔG values between -8636 J mol⁻¹ and -5378 for tea residue, -5313 and -3343 for rose residue, and -5701 and -3642 for prina, with ΔH values of -39516 J mol⁻¹, -23660 and -26190, and ΔS values of -108.8 J mol⁻¹ K⁻¹, -64.0 and -72.0, respectively, showing the spontaneous and exothermic character of the processes. An empirical biosorption model was derived for each biosorbent as a function of the parameters and time, taking into account the form of the kinetic model, with regression coefficients over 0.9990, where At is the biosorption efficiency at any time, Ae is the equilibrium efficiency, t is the adsorption period in s, ko a constant, pH the initial acidity of the biosorption medium, w the stirring speed in s⁻¹, S the biosorbent dose in g L⁻¹, D the particle size in m, a, b, c, and e the powers of the respective parameters, E a constant containing the activation energy, and T the temperature in K.
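
For reference, the textbook forms of the models named above (Freundlich isotherm, pseudo-second-order kinetics, and the thermodynamic relations behind the reported ΔG, ΔH and ΔS values) are given below; the authors' fitted empirical equation itself is not reproduced in the abstract and is therefore not shown.

```latex
% Standard model forms referred to in the abstract (textbook expressions,
% not the authors' fitted empirical equation):
% Freundlich isotherm:
\[ q_e = K_F \, C_e^{1/n} \]
% Pseudo-second-order kinetics (linearized):
\[ \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} \]
% Thermodynamics of biosorption (van 't Hoff analysis):
\[ \Delta G^{\circ} = -RT \ln K_d, \qquad
   \ln K_d = \frac{\Delta S^{\circ}}{R} - \frac{\Delta H^{\circ}}{RT}, \qquad
   \Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ} \]
```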

Keywords: radiation, biosorption, thallium, empirical modelling

317 Internet Memes as Meaning-Making Tools within Subcultures: A Case Study of Lolita Fashion

Authors: Victoria Esteves

Abstract:

Online memes have not only impacted different aspects of culture, but they have also left their mark on particular subcultures, where memes have reflected issues and debates surrounding specific spheres of interest. This is the first study that outlines how memes can address cultural intersections within the Lolita fashion community, which are much more specific and which fall outside of the broad focus of politics and/or social commentary. This is done by looking at the way online memes are used in this particular subculture as a form of meaning-making and group identity reinforcement, demonstrating not only the adaptability of online memes to specific cultural groups but also how subcultures tailor these digital objects to discuss both community-centered topics and more broad societal aspects. As part of an online ethnography, this study focuses on qualitative content analysis by taking a look at some of the meme communication that has permeated Lolita fashion communities. Examples of memes used in this context are picked apart in order to understand this specific layered phenomenon of communication, as well as to gain insights into how memes can operate as visual shorthand for the remix of meaning-making. There are existing parallels between internet culture and cultural behaviors surrounding Lolita fashion: not only is the latter strongly influenced by the former (due to its highly globalized dispersion and lack of physical shops, Lolita fashion is almost entirely reliant on the internet for its existence), both also emphasize curatorial roles through a careful collaborative process of documenting significant aspects of their culture (e.g., Know Your Meme and Lolibrary). Further similarities appear when looking at ideas of inclusion and exclusion that permeate both cultures, where memes and language are used in order to both solidify group identity and to police those who do not ascribe to these cultural tropes correctly, creating a feedback loop that reinforces subcultural ideals. Memes function as excellent forms of communication within the Lolita community because they reinforce its coded ideas and allows a kind of participation that echoes other cultural groups that are online-heavy such as fandoms. Furthermore, whilst the international Lolita community was mostly self-contained within its LiveJournal birthplace, it has become increasingly dispersed through an array of different social media groups that have fragmented this subculture significantly. The use of memes is key in maintaining a sense of connection throughout this now fragmentary experience of fashion. Memes are also used in the Lolita fashion community to bridge the gap between Lolita fashion related community issues and wider global topics; these reflect not only an ability to make use of a broader online language to address specific issues of the community (which in turn provide a very community-specific engagement with remix practices) but also memes’ ability to be tailored to accommodate overlapping cultural and political concerns and discussions between subcultures and broader societal groups. Ultimately, online memes provide the necessary elasticity to allow their adaption and adoption by subcultural groups, who in turn use memes to extend their meaning-making processes.

Keywords: internet culture, Lolita fashion, memes, online community, remix

316 Emotion Regulation and Executive Functioning Scale for Children and Adolescents (REMEX): Scale Development

Authors: Cristina Costescu, Carmen David, Adrian Roșan

Abstract:

Executive functions (EF) and emotion regulation strategies are processes that allow individuals to function in an adaptive way and to be goal-oriented, which is essential for success in daily living activities, at school, or in social contexts. The Emotion Regulation and Executive Functioning Scale for Children and Adolescents (REMEX) represents an empirically based tool (based on the model of EF developed by Diamond) for evaluating significant dimensions of child and adolescent EF and emotion regulation strategies, mainly in school contexts. The instrument measures the following dimensions: working memory, inhibition, cognitive flexibility, executive attention, planning, emotional control, and emotion regulation strategies. Building the instrument involved not only a top-down process, as we selected the content in accordance with prominent models of EF, but also a bottom-up one, as we were able to identify valid contexts in which EF and ER are put to use. For the construction of the instrument, we conducted three focus groups with teachers and other professionals, since the aim was to develop an accurate, objective, and ecological instrument. We used the focus group method in order to address each dimension and to yield a bank of items to be further tested. Each dimension is addressed through a task that the examiner administers and through several items derived from the main task. For the validation of the instrument, we plan to use item response theory (IRT), also known as latent trait theory, which attempts to explain the relationship between latent traits (unobservable cognitive processes) and their manifestations (i.e., observed outcomes, responses, or performance). REMEX represents an ecological scale that integrates the current scientific understanding of emotion regulation and EF and is directly applicable to school contexts, and it can be very useful for developing intervention protocols. We plan to test its convergent validity with the Childhood Executive Functioning Inventory (CHEXI) and the Emotion Dysregulation Inventory (EDI), and its divergent validity between a group of typically developing children and children with neurodevelopmental disorders, aged between 6 and 9 years old. In a previous pilot study, we enrolled a sample of 40 children with autism spectrum disorders and attention-deficit/hyperactivity disorder aged 6 to 12 years old, and we applied the above-mentioned scales (CHEXI and EDI). Our results showed that deficits in planning, behavior regulation, inhibition, and working memory predict high levels of emotional reactivity, leading to emotional and behavioural problems. Considering previous results, we expect our findings to provide support for the validity and reliability of the REMEX as an ecological instrument for assessing emotion regulation and EF in children, and for key features of its use in intervention protocols.
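
A minimal sketch of the IRT idea referred to above, using a two-parameter logistic (2PL) model to relate a latent trait to item responses; the item parameters and response pattern are synthetic placeholders, not REMEX calibration data, and the actual calibration would be done with dedicated IRT software on real response data.

```python
# Sketch under stated assumptions: a two-parameter logistic (2PL) IRT model,
# illustrating how observed item responses relate to a latent trait (theta).
# Item parameters and the response pattern are synthetic placeholders, not REMEX data.
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])      # item discrimination (hypothetical)
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])    # item difficulty (hypothetical)
responses = np.array([1, 1, 1, 0, 0])        # one child's scored item responses

def p_correct(theta):
    """2PL probability of a positive response to each item at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Maximum-likelihood estimate of the latent trait by a simple grid search
grid = np.linspace(-4, 4, 801)
log_lik = [np.sum(responses * np.log(p_correct(t)) +
                  (1 - responses) * np.log(1 - p_correct(t))) for t in grid]
theta_hat = grid[int(np.argmax(log_lik))]
print(f"estimated latent trait: theta = {theta_hat:.2f}")
```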

Keywords: executive functions, emotion regulation, children, item response theory, focus group

315 Comparative Proteomic Profiling of Planktonic and Biofilms from Staphylococcus aureus Using Tandem Mass Tag-Based Mass Spectrometry

Authors: Arifur Rahman, Ardeshir Amirkhani, Honghua Hu, Mark Molloy, Karen Vickery

Abstract:

Introduction and Objectives: Staphylococcus aureus and coagulase-negative staphylococci comprise approximately 65% of infections associated with medical devices and are well known for their biofilm-forming ability. Biofilm-related infections are extremely difficult to eradicate owing to their high tolerance to antibiotics and host immune defences. Currently, there is no efficient method for early biofilm detection. A better understanding enabling the detection of biofilm-specific proteins in vitro and in vivo can be achieved by studying planktonic cells and different growth phases of biofilms using a proteome analysis approach. Our goal was to construct a reference map of planktonic and biofilm-associated proteins of S. aureus. Methods: The S. aureus reference strain (ATCC 25923) was used to grow 24-hour planktonic cultures, a 3-day wet biofilm (3DWB), and a 12-day wet biofilm (12DWB). Bacteria were grown in tryptic soy broth (TSB) liquid medium. Planktonic cultures were harvested in the late logarithmic phase, and the Centers for Disease Control (CDC) biofilm reactor was used to grow the 3-day and 12-day hydrated biofilms, respectively. Samples were subjected to reduction, alkylation and digestion steps prior to multiplex labelling using the Tandem Mass Tag (TMT) 10-plex reagent (Thermo Fisher Scientific). The labelled samples were pooled and fractionated by high-pH RP-HPLC, which was followed by loading of the fractions onto a nanoflow UPLC system (Eksigent UPLC system, AB SCIEX). Mass spectrometry (MS) data were collected on an Orbitrap Elite (Thermo Fisher Scientific) mass spectrometer. Protein identification and relative quantitation of protein levels were performed using Proteome Discoverer (version 1.3, Thermo Fisher Scientific). After the extraction of protein ratios with Proteome Discoverer, additional processing and statistical analysis were done using the TMTPrePro R package. Results and Discussion: The present study showed that a considerable proteomic difference exists between planktonic cells and biofilms of S. aureus. We identified 1636 extracellular secreted proteins in total, of which 350 and 137 proteins of 3DWB and 12DWB, respectively, showed significant abundance variation from the planktonic preparation. Of these, proteins such as the extracellular matrix-binding protein ebh, enolase, transketolase, triosephosphate isomerase, chaperonin, peptidase, pyruvate kinase, hydrolase, aminotransferase, ribosomal proteins, acetyl-CoA acetyltransferase, DNA gyrase subunit A, glycine glycyltransferase and others were simultaneously up-regulated in both 3DWB and 12DWB in this biofilm producer. On the contrary, proteins such as alpha- and delta-hemolysin, lipoteichoic acid synthase, enterotoxin I, serine protease, lipase, clumping factor B, regulatory protein Spx, phosphoglucomutase, and others were simultaneously down-regulated in both 3DWB and 12DWB. In addition, we also identified a large percentage of hypothetical proteins, including unique proteins. Therefore, a comprehensive knowledge of the planktonic and biofilm-associated proteins identified in S. aureus will provide a basis for future studies on the development of vaccines and diagnostic biomarkers. Conclusions: In this study, we constructed an initial reference map of proteins associated with planktonic cells and various growth phases of biofilm, which might be helpful to diagnose biofilm-associated infections.
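
A minimal sketch of the kind of post-quantitation step described above (relative abundance ratios between biofilm and planktonic TMT channels plus a significance test), written in Python with synthetic intensities; the study itself used Proteome Discoverer and the TMTPrePro R package for this step.

```python
# Sketch under stated assumptions (synthetic reporter intensities, Python instead of
# the TMTPrePro R package used in the study): compute log2 biofilm/planktonic ratios
# per protein from TMT channels and flag significant abundance changes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
proteins = ["enolase", "transketolase", "lipase", "clumping factor B", "hypothetical_1"]

# Hypothetical TMT reporter intensities: 3 planktonic channels, 3 biofilm (3DWB) channels
planktonic = rng.lognormal(mean=10, sigma=0.2, size=(len(proteins), 3))
biofilm = planktonic * np.array([[2.1], [1.8], [0.4], [0.5], [1.0]]) \
          * rng.lognormal(0, 0.1, size=(len(proteins), 3))

log2_ratio = np.log2(biofilm.mean(axis=1) / planktonic.mean(axis=1))
t_stat, p_val = stats.ttest_ind(np.log2(biofilm), np.log2(planktonic), axis=1)

for name, lr, p in zip(proteins, log2_ratio, p_val):
    direction = "up" if lr > 0 else "down"
    flag = "*" if (abs(lr) > 1 and p < 0.05) else " "
    print(f"{flag} {name:20s} log2(biofilm/planktonic) = {lr:+.2f} ({direction}), p = {p:.3f}")
```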

Keywords: bacterial biofilms, CDC bioreactor, S. aureus, mass spectrometry, TMT

Procedia PDF Downloads 152
314 Piled Critical Size Bone-Biomimetic and Biominerizable Nanocomposites: Formation of Bioreactor-Induced Stem Cell Gradients under Perfusion and Compression

Authors: W. Baumgartner, M. Welti, N. Hild, S. C. Hess, W. J. Stark, G. Meier Bürgisser, P. Giovanoli, J. Buschmann

Abstract:

Perfusion bioreactors are used to solve problems in tissue engineering in terms of sufficient nutrient and oxygen supply. Such problems especially occur in critical-size grafts because vascularization is often too slow after implantation, resulting in necrotic cores. Biominerizable and biocompatible nanocomposite materials are attractive and suitable scaffold materials for bone tissue engineering because they offer mineral components in organic carriers, mimicking natural bone tissue. In addition, human adipose-derived stem cells (ASCs) can potentially be used to enhance bone healing, as they are capable of differentiating towards osteoblasts or endothelial cells, among others. In the present study, electrospun nanocomposite disks of poly-lactic-co-glycolic acid and amorphous calcium phosphate nanoparticles (PLGA/a-CaP) were seeded with human ASCs, and eight disks were stacked in a bioreactor running with normal culture medium (no differentiation supplements). Under continuous perfusion and uniaxial cyclic compression, load-displacement curves as a function of time were assessed. Stiffness and energy dissipation were recorded. Moreover, stem cell densities in the layers of the piled scaffold were determined, as well as their morphologies and differentiation status (endothelial cell differentiation, chondrogenesis, and osteogenesis). While the stiffness of the cell-free constructs increased over time, caused by the transformation of the a-CaP nanoparticles into flake-like apatite, ASC-seeded constructs showed a constant stiffness. Stem cell density gradients were histologically determined, with a linear increase in the flow direction from the bottom to the top of the 3.5 mm high pile (r² > 0.95). Cell morphology was influenced by the flow rate, with stem cells becoming more rounded at higher flow rates. Less than 1% osteogenesis was found upon osteopontin immunostaining at the end of the experiment (9 days), while no endothelial cell differentiation and no chondrogenesis were triggered under these conditions. Most ASCs had remained in their original pluripotent status within this time frame. In summary, we have fabricated a critical-size bone graft based on a biominerizable bone-biomimetic nanocomposite with preserved stiffness when seeded with human ASCs. The special feature of this bone graft is that ASC densities inside the piled construct varied with a linear gradient, which is a good starting point for tissue-engineering interfaces such as bone-cartilage, where the bone tissue is cell-rich while the cartilage exhibits low cell densities. As such, this tissue-engineered graft may act as a bone-cartilage interface after the corresponding differentiation of the ASCs.
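
The linear stem cell density gradient reported above (r² > 0.95 along the flow direction) can be checked with a simple regression; the sketch below is illustrative only, with made-up counts per layer rather than the study's histological data.

```python
import numpy as np
from scipy import stats

# Hypothetical cell counts per scaffold layer (bottom to top of the 3.5 mm pile)
layer_height_mm = np.linspace(0.0, 3.5, 8)  # eight stacked PLGA/a-CaP disks
cells_per_mm2 = np.array([110, 135, 160, 190, 205, 240, 260, 290])  # illustrative only

fit = stats.linregress(layer_height_mm, cells_per_mm2)
print(f"slope = {fit.slope:.1f} cells/mm2 per mm, r^2 = {fit.rvalue**2:.3f}")
# A linear gradient in the flow direction would show r^2 > 0.95, as reported.
```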

Keywords: bioreactor, bone, cartilage, nanocomposite, stem cell gradient

Procedia PDF Downloads 289
313 Design, Development and Testing of Polymer-Glass Microfluidic Chips for Electrophoretic Analysis of Biological Sample

Authors: Yana Posmitnaya, Galina Rudnitskaya, Tatyana Lukashenko, Anton Bukatin, Anatoly Evstrapov

Abstract:

An important area of biological and medical research is the study of genetic mutations and polymorphisms that can alter gene function and cause inherited and other diseases. The following methods are used to analyse DNA fragments: capillary electrophoresis and electrophoresis on a microfluidic chip (MFC), mass spectrometry combined with electrophoresis on an MFC, and hybridization assays on microarrays. Electrophoresis on an MFC allows the analysis of small sample volumes with high speed and throughput. Soft lithography in polydimethylsiloxane (PDMS) was chosen for rapid fabrication of the MFCs. A master form of silicon and SU-8 2025 photoresist (MicroChem Corp.) was created for the formation of micro-sized structures in PDMS. A universal topology combining a T-injector and a simple cross was selected for the electrophoretic separation of the sample. K8 glass and PDMS Sylgard® 184 (Dow Corning Corp.) were used for fabrication of the MFCs. Electroosmotic flow (EOF) plays an important role in the electrophoretic separation of the sample. Therefore, estimating the magnitude of the EOF and finding ways to regulate it are of interest for the development of new methods of electrophoretic separation of biomolecules. The following methods of surface modification were chosen to change the EOF: high-frequency (13.56 MHz) plasma treatment in oxygen and argon at low pressure (1 mbar); a 1% aqueous solution of polyvinyl alcohol; and a 3% aqueous solution of Kolliphor® P 188 (Sigma-Aldrich Corp.). The electroosmotic mobility was evaluated by the method of Huang X. et al., using a borate buffer. The influence of the physical and chemical treatment methods on the wetting properties of the PDMS surface was monitored by the sessile drop method. The most effective way of modifying the MFC surface, from the standpoint of obtaining the smallest contact angle and the smallest EOF, was treatment with the aqueous solution of Kolliphor® P 188. This method of modification was selected for the treatment of the channels of the MFCs used for the separation of a mixture of fluorescently labelled oligonucleotides with chain lengths of 10, 20, 30, 40, and 50 nucleotides. Electrophoresis was performed on the device MFAS-01 (IAI RAS, Russia) at a separation voltage of 1500 V. A 6% solution of polydimethylacrylamide with the addition of 7 M carbamide was used as the separation medium. The separation time of the components of the mixture was determined from electropherograms. The time for the untreated MFC was ~275 s, and for MFCs treated with the Kolliphor® P 188 solution, ~220 s. The study of physical-chemical methods of surface modification of MFCs allowed us to choose the most effective way of reducing the EOF, namely modification with an aqueous solution of Kolliphor® P 188. In this case, the separation time of the oligonucleotide mixture decreased by about 20%. Further optimization of the channel modification method will allow decreasing the separation time of the sample and increasing the throughput of the analysis.
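
For readers unfamiliar with the method of Huang et al. mentioned above (commonly implemented as a current-monitoring measurement), the sketch below shows how an electroosmotic mobility estimate follows from the channel length, the applied voltage, and the time needed for a buffer of different concentration to displace the original one. All numerical values are hypothetical and do not reproduce the study's measurements.

```python
# Current-monitoring estimate of electroosmotic mobility: the time t needed for
# a buffer of different concentration to fill a channel of length L under
# voltage V gives v_eof = L / t and mu_eof = v_eof / E = L**2 / (V * t).
# The numbers below are illustrative only.

L = 0.045   # channel length, m (assumed 4.5 cm)
V = 1500.0  # applied separation voltage, V
t = 95.0    # time to reach the current plateau, s (hypothetical)

v_eof = L / t            # m/s
mu_eof = L**2 / (V * t)  # m^2 V^-1 s^-1
print(f"v_eof = {v_eof * 1e3:.2f} mm/s, mu_eof = {mu_eof * 1e8:.2f} x 1e-4 cm^2/(V s)")
```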

Keywords: electrophoresis, microfluidic chip, modification, nucleic acid, polydimethylsiloxane, soft lithography

Procedia PDF Downloads 392
312 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction

Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl

Abstract:

Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, developing new in situ characterization techniques with the potential to be used under real catalytic reaction conditions is highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and correlate it with the reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria will serve as our example. Furthermore, in situ Transmission Electron Microscopy is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a greater and deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ Prompt Gamma Activation Analysis (PGAA) experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed, homemade transmission cell. For real in situ TEM we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH). The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments have shown that the OH density correlates positively with the reactivity, whereas the Cl coverage correlates negatively. The p(HCl) experiments give rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase of the Cl uptake was found. After considering all previous observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism of the catalysed reaction is proposed. The chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step of the catalysed reaction. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments will allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.

Keywords: CeO2, deacon process, in situ PGAA, in situ TEM, in situ FTIR

Procedia PDF Downloads 269
311 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in the hope of abating poaching. This research develops a classification model, using machine learning algorithms, to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set consisting of four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month instead of entire seasons, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules based on entire seasons.
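
A minimal sketch of the modelling pipeline described above, multivariate imputation of missing covariates followed by Random Forest and Stochastic Gradient Boosting classifiers scored by area under the ROC curve, is given below in Python with scikit-learn. The file names, feature columns, and hyperparameters are hypothetical stand-ins, not the study's actual data or settings.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import (RandomForestClassifier, RandomForestRegressor,
                              GradientBoostingClassifier)
from sklearn.metrics import roc_auc_score

# Hypothetical patrol data: rows are grid cells x months, with missing covariates.
train = pd.read_csv("poaching_2010_2013.csv")  # four training years (imputed below)
test = pd.read_csv("poaching_2014.csv")        # held-out year, left non-imputed
features = ["animal_density", "slope", "dist_to_road", "dist_to_water", "patrol_effort"]

# Random-forest-based multivariate imputation of missing covariates
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                           max_iter=10, random_state=0)
X_train = imputer.fit_transform(train[features])
X_test = imputer.transform(test[features])
y_train, y_test = train["poaching_observed"], test["poaching_observed"]

models = {
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "stochastic_gb": GradientBoostingClassifier(subsample=0.7, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```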

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 268
310 A Qualitative Study Investigating the Relationship Between External Context and the Mechanism of Change for the Implementation of Goal-oriented Primary Care

Authors: Ine Huybrechts, Anja Declercq, Emily Verté, Peter Raeymaeckers, Sibyl Anthierens

Abstract:

Goal-oriented care is a concept gaining increased interest as an approach to move towards more coordinated and integrated primary care. It places patients' personal life goals at the core of health care support, hereby shifting the focus from "what's the matter with this patient" to "what matters to this patient." In Flanders/Belgium, various primary care providers, health and social care organizations, and governmental bodies have picked up this concept and have initiated actions to facilitate this approach. The implementation of goal-oriented care not only happens on the micro-level; it also requires efforts on the meso- and macro-level. Within implementation research, there is a growing recognition that the context in which an intervention takes place strongly relates to its implementation outcomes. However, when investigating contextual variables, the external context and its impact on implementation processes are often overlooked. This study aims to explore how we can better identify and understand the external context and how it relates to the mechanism of change within the implementation process of goal-oriented care in Flanders/Belgium. The results can be used to support and guide initiatives to introduce innovative approaches such as goal-oriented care inside an organization or in the broader primary care landscape. We conducted qualitative research, performing in-depth interviews with n=23 respondents who have affinity with the implementation of goal-oriented care within their professional function. This led to in-depth insights from a wide range of actors with meso-level and/or macro-level perspectives on the implementation of goal-oriented care. This means that we interviewed not only actors involved with initiatives to implement goal-oriented care, but also actors that actively give form to the external context in which goal-oriented care is implemented. Data were collected using a semi-structured interview guide, audio recorded, and analyzed first inductively and then deductively using various theories and concepts that derive from organizational research. Our preliminary findings suggest that organizational theories can help understand the mechanism of change of implementation processes from a macro-level perspective. Institutional theories, contingency theories, resource dependency theories, and others can expose the mechanism of change for an innovation such as goal-oriented care. Our findings can contribute to further defining the actions needed for sustainable implementation of goal-oriented primary care. They give insights into the dynamics between contextual variables and implementation efforts, hereby indicating which contextual variables can be further shaped to facilitate the implementation of an innovation such as goal-oriented care.

Keywords: goal-oriented care, implementation processes, organizational theories, person-centered care, implementation research

Procedia PDF Downloads 62
309 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of the total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction of energy consumption and of greenhouse gas emissions. The paper presents a study aiming at providing a design methodology able to identify the best configuration of the building/plant system from a technical, economic, and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions and the subsequent selection of measures aimed at improving the energy performance, based on previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which allows the simulation of the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow the identification of effective building-HVAC system combinations. The second step consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which enables the comparison of different system configurations from the energy, environmental, and financial points of view, with an analysis of investment and operation and maintenance costs, thus allowing the determination of the economic benefit of possible interventions. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment of innovative energy systems with low environmental impact. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model RETScreen for different design options. For example, the analysis performed on the building taken as a case study found that the most suitable plant solution, taking into account technical, economic, and environmental aspects, is the one based on a CCHP system (Combined Cooling, Heating, and Power) using an internal combustion engine.
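
To illustrate the spirit of the second step, the sketch below turns annual heating and cooling loads (of the kind a dynamic simulation such as TRNSYS produces) into operating costs and a simple payback for a CCHP option versus a conventional boiler-plus-chiller plant. Every efficiency, tariff, and investment figure is an assumed placeholder rather than a result for the case-study building; the actual financial analysis in the study was performed with RETScreen.

```python
# Illustrative post-processing of annual loads from a dynamic simulation
# (e.g., TRNSYS output) to compare two plant options economically.
# All numbers below are hypothetical placeholders.

heating_kwh, cooling_kwh = 420_000, 180_000  # annual thermal loads (assumed)
gas_price, elec_price = 0.09, 0.22           # EUR/kWh (assumed tariffs)

# Option A: gas boiler (efficiency 0.92) + electric chiller (COP 3.0)
cost_a = heating_kwh / 0.92 * gas_price + cooling_kwh / 3.0 * elec_price

# Option B: CCHP engine (thermal eff. 0.45, electrical eff. 0.35) feeding an
# absorption chiller (COP 0.7); generated electricity is credited against costs.
fuel_b = (heating_kwh + cooling_kwh / 0.7) / 0.45
cost_b = fuel_b * gas_price - fuel_b * 0.35 * elec_price

extra_investment = 250_000  # EUR, assumed CCHP surcharge over the conventional plant
payback_years = extra_investment / max(cost_a - cost_b, 1e-9)
print(f"A: {cost_a:,.0f} EUR/yr  B: {cost_b:,.0f} EUR/yr  simple payback: {payback_years:.1f} yr")
```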

Keywords: energy, system, building, cooling, electrical

Procedia PDF Downloads 559
308 Functional Switching of Serratia marcescens Transcriptional Regulator from Activator to Inhibitor of Quorum Sensing by Exogenous Addition

Authors: Norihiro Kato, Yuriko Takayama

Abstract:

Some gram-negative bacteria enable the simultaneous activation of gene expression through an N-acylhomoserine lactone (AHL)-dependent cell-to-cell communication system. Such a regulatory system for bacterial group behavior is termed quorum sensing (QS) because a diffusible AHL signal accumulates around the cells as the cell density increases and triggers activation of the sequential QS process. By blocking QS, the expression of diverse genes related to infection, antibiotic production, and biofilm formation is inhibited. Conditioning of QS by regulation of the DNA-receptor-AHL interaction is therefore a potential target for enhancing host defenses against pathogenicity. We focused on the engineered application of the transcriptional regulator SpnR produced by the opportunistic human pathogen Serratia marcescens. SpnR interacts with AHL signals through its N-terminal domain and with the promoter region of a QS target gene through its C-terminal domain. As the initial step of QS activation, SpnR forms a complex with AHL to enhance the expression of the pig cluster; SpnR normally acts as an activator of the expression of the QS-dependent gene. In this research, we attempt to artificially control QS by changing the role of SpnR. QS-dependent prodigiosin production is expected to be inhibited by SpnR externally added to the culture broth of the AS-1 strain, because the AHL concentration is kept below the threshold by AHL-SpnR complex formation. Maltose-binding protein (MBP)-tagged SpnR (MBP-SpnR) was overexpressed in Escherichia coli and purified by affinity chromatography with an amylose resin column. The specific interaction between AHL and MBP-SpnR was demonstrated with a quartz crystal microbalance (QCM) sensor. AHL with an amino end-group was coupled to a COOH-terminated self-assembled monolayer prepared on the gold electrode of a 27-MHz quartz crystal sensor using water-soluble carbodiimide. After the injection of MBP-SpnR into a cup-type sensor cell filled with buffer solution, the time course of the resonant frequency change (ΔFs) was determined. A decrease in ΔFs clearly showed the uptake of MBP-SpnR onto the AHL-immobilized electrode. Furthermore, no binding affinity was observed after heat inactivation of MBP-SpnR at 80ºC. These results suggest that MBP-SpnR possesses a specific affinity for AHL. MBP-SpnR was added to the culture medium as an AHL trap to study its inhibitory effect on intracellularly accumulated prodigiosin. With approximately 2 µM MBP-SpnR, the amount of prodigiosin induced was half that of the control without any additives. In conclusion, the function of SpnR could be switched by adding it to the cell culture. Exogenously added MBP-SpnR possesses high affinity for AHL derived from the cells and acts as an inhibitor of AHL-mediated QS.
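
The QCM readout mentioned above can be related to adsorbed mass through the Sauerbrey relation; the sketch below evaluates the mass sensitivity of a 27-MHz crystal from standard quartz properties and converts a hypothetical frequency decrease into an areal mass. This is only an approximation, since it ignores viscoelastic effects of the protein layer, and the frequency value shown is not a measurement from the study.

```python
import math

# Sauerbrey relation: delta_f = -(2 * f0**2 / sqrt(rho_q * mu_q)) * delta_m_per_area
f0 = 27e6        # fundamental frequency of the crystal, Hz (from the sensor used)
rho_q = 2.648e3  # quartz density, kg/m^3
mu_q = 2.947e10  # quartz shear modulus, Pa
delta_f = -120.0 # observed frequency decrease, Hz (hypothetical reading)

mass_sensitivity = math.sqrt(rho_q * mu_q) / (2 * f0**2)  # kg m^-2 Hz^-1
delta_m_per_area = -mass_sensitivity * delta_f            # kg/m^2

# 1 kg/m^2 = 1e8 ng/cm^2
print(f"sensitivity ~ {mass_sensitivity * 1e8:.2f} ng cm^-2 Hz^-1")
print(f"adsorbed mass ~ {delta_m_per_area * 1e8:.0f} ng/cm^2")
```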

Keywords: intracellular signaling, microbial biotechnology, quorum sensing, transcriptional regulator

Procedia PDF Downloads 248
307 The Joy of Painless Maternity: The Reproductive Policy of the Bolsheviks in the 1930s

Authors: Almira Sharafeeva

Abstract:

In the Soviet Union of the 1930s, motherhood was seen as a natural need of women. The masculine Bolshevik state did not see the emancipated woman as free from her maternal burden. In order to support the idea of "joyful motherhood," a medical discourse on the anesthesia of childbirth emerged. In March 1935, at the IX Congress of obstetricians and gynecologists, the People's Commissar of Public Health of the RSFSR, G. N. Kaminsky, raised the issue of anesthesia of childbirth. From that year, medical, literary, and artistic editions began, with enviable frequency, to publish articles and studies devoted to the issue, and the goal of anesthetizing all childbirths in the USSR was proclaimed. These publications were often filled with anti-German and anti-capitalist propaganda, through which the advantages of socialism over capitalism and Nazism were demonstrated. At congresses, in journals, and at institute meetings, doctors' discussions of obstetric anesthesia were accompanied by discussions of shortening the duration of the childbirth process, the prevention of disease, the admission of nurses to the procedure, and the proper behavior of women during the childbirth process. With the help of articles from medical periodicals of the 1930s, brochures, and documents from the funds of the Institute of Obstetrics and Gynecology of the Academy of Medical Sciences of the USSR (TsGANTD SPb) and the Department of Obstetrics and Gynecology of the NKZ USSR (GARF), this paper shows how the advantages of the Soviet system and the socialist way of life were constructed through the problem of childbirth pain relief; it also shows how childbirth pain relief in the USSR was related to the foreign policy situation and how projects of labor pain relief were related to the anti-abortion policy of the state. This study also attempts to answer the question of why anesthesia of childbirth in the USSR did not become widespread and how, through this medical procedure, the Soviet authorities tried to take control of a female function (childbirth) that was not available to men. Considering this subject from the perspective of gender studies and the social history of medicine, it is productive to use the term "biopolitics." Michel Foucault and Antonio Negri wrote that biopolitics takes under its wing the control and management of hygiene, nutrition, fertility, sexuality, and contraception. The central issue of biopolitics is population reproduction. It includes strategies for intervening in collective existence in the name of life and health, and ways of subjectivation by which individuals are forced to work on themselves. The Soviet state, through intervention in the reproductive lives of its citizens, sought to realize its goal of population growth, which was necessary to demonstrate the benefits of living in the Soviet Union and to train a pool of builders of socialism. The woman's body was seen as the object over which the socialist experiment of reproductive policy was being conducted.

Keywords: labor anesthesia, biopolitics of stalinism, childbirth pain relief, reproductive policy

Procedia PDF Downloads 52
306 Identification of Text Domains and Register Variation through the Analysis of Lexical Distribution in a Bangla Mass Media Text Corpus

Authors: Mahul Bhattacharyya, Niladri Sekhar Dash

Abstract:

The present research paper is an experimental attempt to investigate the nature of register variation in three major text domains, namely social, cultural, and political texts collected from a corpus of Bangla printed mass media texts. The study uses a corpus of Bangla mass media text of moderate size, containing nearly one million words collected from different media sources like newspapers, magazines, advertisements, periodicals, etc. The analysis of the corpus data reveals that each text has certain lexical properties that not only control its identity but also mark its uniqueness across the domains. First, the subject domains of the texts are classified along two parameters, namely 'Genre' and 'Text Type'. Next, some empirical investigations are made to understand how the domains vary from each other in terms of lexical properties, i.e., both function and content words. Here the method of comparative-cum-contrastive matching of lexical load across domains is applied through word frequency counts to track how domain-specific words and terms may serve as decisive indicators in specifying the textual contexts and subject domains. The study shows that the common lexical stock that percolates across all text domains is quite unreliable in this respect, as its lexicological identity has no bearing on the specification of subject domains. Therefore, it becomes necessary for language users to anchor upon certain domain-specific lexical items to recognize a text that belongs to a specific text domain. The eventual findings of this study confirm that texts belonging to different subject domains in the Bangla news text corpus clearly differ on the parameters of lexical load, lexical choice, lexical clustering, and lexical collocation. In fact, based on these parameters, along with some statistical calculations, it is possible to classify mass media texts into different types to mark their relation to the domains to which they actually belong. The advantage of this analysis lies in the proper identification of the linguistic factors that will give language users a better insight into the methods they employ in text comprehension, as well as help construct a systematic frame for designing text identification strategies for language learners. The availability of a large amount of Bangla media text data is useful for achieving accurate conclusions with a certain amount of reliability and authenticity. This kind of corpus-based analysis is quite relevant for a resource-poor language like Bangla, as no attempt has ever been made to understand how the structure and texture of Bangla mass media texts vary due to certain linguistic and extra-linguistic constraints that are actively operational in specific text domains. Since mass media language is assumed to be the most 'recent representation' of the actual use of the language, this study is expected to show how Bangla news texts reflect the thoughts of the society and how they leave a strong impact on the thought process of the speech community.
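
One simple way to realize the comparative-cum-contrastive matching of lexical load described above is to contrast a word's relative frequency in one domain with its frequency in the pooled remaining domains. The Python sketch below does this with placeholder token lists and a plain frequency ratio, which only approximates the statistical calculations used in the study.

```python
from collections import Counter

# Hypothetical tokenised sub-corpora for the three subject domains.
domains = {
    "social":    ["..."],  # list of tokens from the social texts
    "cultural":  ["..."],
    "political": ["..."],
}

counts = {d: Counter(toks) for d, toks in domains.items()}
totals = {d: sum(c.values()) for d, c in counts.items()}

def domain_keyness(word, domain, smoothing=0.5):
    """Relative frequency of a word in one domain vs. the pooled other domains."""
    f_in = (counts[domain][word] + smoothing) / (totals[domain] + smoothing)
    other_hits = sum(counts[d][word] for d in counts if d != domain)
    other_total = sum(totals[d] for d in counts if d != domain)
    f_out = (other_hits + smoothing) / (other_total + smoothing)
    return f_in / f_out

# Words with a high ratio behave as domain-specific indicators; words with a
# ratio near 1 belong to the common lexical stock shared across domains.
for w in set(counts["political"]):
    if domain_keyness(w, "political") > 5:
        print(w)
```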

Keywords: Bangla, corpus, discourse, domains, lexical choice, mass media, register, variation

Procedia PDF Downloads 159
305 Ascidian Styela rustica Proteins’ Structural Domains Predicted to Participate in the Tunic Formation

Authors: M. I. Tyletc, O. I. Podgornya, T. G. Shaposhnikova, S. V. Shabelnikov, A. G. Mittenberg, M. A. Daugavet

Abstract:

Ascidiacea is the most numerous class of the subphylum Tunicata. A distinctive feature of the anatomical structure of these chordates is a tunic consisting of cellulose fibrils, protein molecules, and single cells. The mechanisms of tunic formation are not known in detail; tunic formation could be used as a model system for studying the interaction of cells with the extracellular matrix. Our model species is the ascidian Styela rustica, which is prevalent in benthic communities of the White Sea. As previously shown, tunic formation involves morula blood cells, which contain the major 48 kDa protein p48. The participation of p48 in tunic formation was demonstrated using antibodies against the protein. The nature of the protein and its function remain unknown. The current research aims to determine the amino acid sequence of p48 and to clarify its role in tunic formation. The peptides that make up the p48 amino acid sequence were determined by mass spectrometry. A search for the peptides in protein sequence databases identified sequences homologous to p48 in Styela clava, Styela plicata, and Styela canopus. Based on sequence alignment, their level of similarity was determined as 81-87%. The corresponding sequence of the ascidian Styela canopus was used for further analysis. The Styela rustica p48 sequence begins with a signal peptide, which could indicate that the protein is secretory. This is consistent with experimentally obtained data: the contents of morula cells are secreted into the tunic matrix. The isoelectric point of p48 is 9.77, which is consistent with the experimental results of acid electrophoresis of morula cell proteins. However, the molecular weight of the amino acid sequence of the ascidian Styela canopus is 103 kDa, so p48 of Styela rustica is a shorter homolog. The search for conserved functional domains revealed the presence of two Ca-binding EGF-like domains, a thrombospondin (TSP1) domain, and a tyrosinase domain. The p48 peptides determined by mass spectrometry fall into the region of the sequence corresponding to the last two domains and have amino acid substitutions compared to the Styela canopus homolog. The tyrosinase domain (pfam00264) is known to be part of the phenoloxidase enzyme, which participates in melanization processes and the immune response. The thrombospondin domain (smart00209) interacts with a wide range of proteins and is involved in several biological processes, including coagulation, cell adhesion, modulation of intercellular and cell-matrix interactions, angiogenesis, wound healing, and tissue remodeling. It can be assumed that the tyrosinase domain in p48 plays the role of the phenoloxidase enzyme, while TSP1 provides a link between the extracellular matrix and cell surface receptors and may also be responsible for the repair of the tunic. The results obtained are consistent with the experimental data on p48. The domain organization of the protein suggests that p48 is an enzyme involved in tunic tanning and is an important regulator of the organization of the extracellular matrix.
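
Sequence-derived properties such as the theoretical isoelectric point and molecular weight reported above can be computed with Biopython's ProtParam module, as in the sketch below; the sequence shown is a placeholder, not the actual p48 sequence, and domain detection itself relies on external profile searches rather than this snippet.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence -- the real p48 sequence is not reproduced here.
p48_like = "MKTLLVLALLAAAVSAQDS" * 5  # hypothetical, for illustration only

pa = ProteinAnalysis(p48_like)
print(f"length: {len(p48_like)} aa")
print(f"molecular weight: {pa.molecular_weight() / 1000:.1f} kDa")
print(f"theoretical pI: {pa.isoelectric_point():.2f}")
# Conserved domains (EGF-like, TSP1, tyrosinase) would be identified separately,
# e.g., by searching Pfam/SMART profiles with external tools such as hmmscan.
```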

Keywords: ascidian, p48, thrombospondin, tyrosinase, tunic, tanning

Procedia PDF Downloads 91
304 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions

Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos

Abstract:

The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: the chemical structure of the biomass is broken down at high temperature in the absence of air. The amounts of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depend on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as the moisture content. Although this activity is recognized for its inefficiency and high pollution levels, it is poorly characterized. It is widely distributed and is a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: an essentially rural district with a predominantly farming, agricultural, and forestry profile and a significant charcoal production activity. In this district, a recent inventory identifies almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas sampling were performed in parallel, two in the morning and two in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) of the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes. A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m high, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) following prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume were calculated, ranging between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. On average, total carbon, inorganic ions, and sugars account for 65% and 56%, 2.8% and 2.3%, and 1.27% and 1.21% of PM₁₀ and PM₂.₅, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg (wood, dry basis) for PM₁₀ and 27 g/kg (wood, dry basis) for PM₂.₅. With the data obtained in this study, it is possible to fill the gap in information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank the FCT – Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
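
The emission factors quoted above follow from a simple mass balance: the cycle-averaged plume concentration times the total flue-gas volume, divided by the dry mass of wood carbonized. The sketch below illustrates the bookkeeping with assumed values chosen only so that the result lands near the reported ~33 g/kg; they are not measurements from the study.

```python
# Emission-factor bookkeeping for one carbonisation cycle (illustrative values;
# only the reported EFs of ~33 and ~27 g per kg of dry wood come from the study).

mean_pm10_g_m3 = 0.12       # assumed cycle-averaged PM10 concentration in the plume, g/m^3
flue_gas_volume_m3 = 2.2e6  # assumed total flue-gas volume over the cycle, m^3
dry_wood_kg = 8.0e3         # assumed dry-basis wood charge of the 80 m^3 kiln, kg

ef_pm10_g_per_kg = mean_pm10_g_m3 * flue_gas_volume_m3 / dry_wood_kg
print(f"EF(PM10) ~ {ef_pm10_g_per_kg:.0f} g per kg of dry wood")
```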

Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon

Procedia PDF Downloads 120
303 The Dynamics of a Droplet Spreading on a Steel Surface

Authors: Evgeniya Orlova, Dmitriy Feoktistov, Geniy Kuznetsov

Abstract:

The spreading of a droplet over a solid substrate is a key phenomenon observed in the following engineering applications: thin film coating, oil extraction, inkjet printing, and spray cooling of heated surfaces. Droplet cooling systems are known to be more effective than film or rivulet cooling systems. This is caused by the greater evaporation surface area of droplets compared with a film of the same mass and wetted surface, and the greater surface area of droplets is connected with the curvature of the interface. The location of the droplets on the cooling surface influences the heat transfer conditions. A short distance between the droplets provides intensive heat removal, but there is a possibility of their coalescence into a liquid film. A long distance leads to overheating of local areas of the cooling surface and the occurrence of thermal stresses. The location of droplets can be controlled by changing the roughness, structure, and chemical composition of the surface; thus, control of spreading can be implemented. The most important characteristic of the spreading of droplets on solid surfaces is the dynamic contact angle, which is a function of the contact line speed or the capillary number. However, there is currently no universal equation that describes the relationship between these parameters. This paper presents the results of experimental studies of water droplet spreading on metal substrates with different surface roughness. The effect of the droplet growth rate and the surface roughness on the spreading characteristics was studied at low capillary numbers. The shadow method, using high-speed video cameras recording up to 10,000 frames per second, was implemented. The droplet profile was analyzed by Axisymmetric Drop Shape Analysis techniques. According to the change of the dynamic contact angle and the contact line speed, three sequential spreading stages were observed: a rapid increase in the dynamic contact angle; a monotonic decrease in the contact angle and the contact line speed; and the formation of the equilibrium contact angle at a constant contact line. At a low droplet growth rate, the dynamic contact angle of the droplet spreading on the surfaces with the maximum roughness is found to increase throughout the spreading time. This is because the friction force on such surfaces is significantly greater than the inertia force, and the contact line is pinned on the microasperities of the relief. At a high droplet growth rate, the contact angle decreases during the second stage even on the surfaces with the maximum roughness, as in this case the liquid does not fill the microcavities, and the droplet moves over an "air cushion", i.e., the interface is a liquid/gas/solid system. Also, at such growth rates, pulsation of the liquid flow was detected, and the droplet oscillated during spreading. Thus, the obtained results allow us to conclude that it is possible to control spreading by using the surface roughness and the droplet growth rate as varied factors. The research findings may also be used for analyzing heat transfer in rivulet and drop cooling systems of high-energy equipment.
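
The low-capillary-number regime mentioned above follows directly from Ca = μU/σ; the sketch below evaluates it for water at room temperature with a hypothetical contact line speed of a few millimetres per second, of the order resolvable with the shadow method.

```python
# Capillary number Ca = mu * U / sigma for a water droplet spreading on steel.
mu = 1.0e-3    # dynamic viscosity of water at ~20 C, Pa*s
sigma = 0.072  # surface tension of water against air, N/m
U = 5.0e-3     # contact line speed, m/s (hypothetical value from the video frames)

Ca = mu * U / sigma
print(f"Ca = {Ca:.1e}")  # ~7e-5, i.e. spreading in the low-capillary-number regime
```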

Keywords: contact line speed, droplet growth rate, dynamic contact angle, shadow system, spreading

Procedia PDF Downloads 308
302 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations

Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai

Abstract:

Sheet pile systems can be an interesting solution for harbor and quay design. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. Under this concern, the study particularly focuses on the definition of hydrodynamic water pressures and the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Currently, design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations. They apply a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations with Plaxis 2D, a well-known geotechnical software package, and with CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. This provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures if these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that the hydrodynamic pressure contributes about 5% of the total load applied on the sheet pile, owing to its instantaneous nature. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of the overall geotechnical stability. It uses pseudo-static analysis, since a dynamic analysis cannot provide a safety calculation; consequently, the seismic action must be estimated. One of its relevant factors is the selection of the seismic reduction factor. A large number of studies discuss its importance but also its uncertainties. Moreover, current European standards do not propose a clear statement on this and recommend using a reduction factor equal to 1. This leads to conservative requirements when compared with more advanced methods. Under this situation, the study calibrates the seismic reduction factor by fitting pseudo-static results to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies from Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room for improving its methodologies and approaches. Consequently, designs could offer better seismic solutions thanks to advanced methods such as those presented in this study.
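
For orientation, the Westergaard distribution referred to above gives a resultant hydrodynamic force of 7/12·ρ·a_h·H² per metre of wall. The sketch below compares the value obtained with a constant design acceleration (current practice) against evaluation with an instantaneous acceleration history, using an assumed water depth and a toy accelerogram rather than the study's data.

```python
import numpy as np

# Westergaard hydrodynamic pressure on a vertical wall: p(z) = 7/8 * rho * a_h * sqrt(H*z)
rho_w = 1025.0  # seawater density, kg/m^3
H = 12.0        # water depth in front of the sheet pile, m (assumed)
g = 9.81

def westergaard_resultant(a_h):
    """Resultant hydrodynamic force per metre of wall, N/m: 7/12 * rho * a_h * H^2."""
    return 7.0 / 12.0 * rho_w * a_h * H**2

# Constant design acceleration (current practice) vs. an acceleration time history
a_design = 0.25 * g  # assumed design peak ground acceleration
t = np.linspace(0.0, 20.0, 2001)
a_t = 0.25 * g * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.15 * t)  # toy accelerogram

F_const = westergaard_resultant(a_design)
F_t = westergaard_resultant(np.abs(a_t))
print(f"constant: {F_const / 1e3:.0f} kN/m, time-varying peak: {F_t.max() / 1e3:.0f} kN/m, "
      f"mean over record: {F_t.mean() / 1e3:.0f} kN/m")
```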

Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile

Procedia PDF Downloads 125