Search results for: ReSPECT
164 Development of a Risk Governance Index and Examination of Its Determinants: An Empirical Study in Indian Context
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Risk management has been gaining extensive focus from international organizations like the Committee of Sponsoring Organizations and the Financial Stability Board, and the foundation of an effective and efficient risk management system lies in a strong risk governance structure. In view of this, an attempt (perhaps a first of its kind) has been made to develop a risk governance index that could be used as a proxy for the quality of risk governance structures. The index (normative framework) is based on eleven variables, namely: size of board, board diversity in terms of gender, proportion of executive directors, executive/non-executive status of the chairperson, proportion of independent directors, CEO duality, chief risk officer (CRO), risk management committee, mandatory committees, voluntary committees, and existence/non-existence of a whistle-blower policy. These variables are scored on a scale of 1 to 5, except for the status of the chairperson and CEO duality, which are dichotomous and scored either 3 or 5. Where there is a legal/statutory requirement in respect of the above-mentioned variables and the firm does not comply with it, a score of one is assigned. Although there was no legal requirement, for the larger part of the study period, regarding the CRO, the risk management committee and the whistle-blower policy, a score of 1 has still been assigned in the event of their non-existence. Recognizing the importance of these variables for the risk governance structure, and the fact that the study focuses on risk governance, the absence of these variables has been equated with non-compliance with a legal/statutory requirement. On this basis, the minimum possible score is 15 and the maximum is 55. In addition, an attempt has been made to explore the determinants of this index. For this purpose, the sample consists of the non-financial companies (429) that constitute the S&P CNX500 index. The study covers a 10-year period from April 1, 2005 to March 31, 2015. Given the panel nature of the data, the Hausman test was applied, and it indicated that fixed-effects regression would be appropriate. The results indicate that the age and size of firms have a significant positive impact on their risk governance structures. Further, the post-recession period (2009-2015) has witnessed a significant improvement in the quality of governance structures. In contrast, profitability (positive relationship), leverage (negative relationship) and growth (negative relationship) do not have a significant impact on the quality of risk governance structures. The value of rho indicates that about 77.74% of the variation in risk governance structures is due to firm-specific factors. Given that each firm is unique in terms of its risk exposure, risk culture, risk appetite and risk tolerance levels, it appears reasonable to assume that the specific conditions and circumstances a company faces could be the biggest determinants of its risk governance structures. Given the recommendations put forth in the paper (particularly for regulators and companies), the study is expected to be of immense utility in an important yet neglected aspect of risk management.
Keywords: corporate governance, ERM, risk governance, risk management
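A minimal sketch of how such a scoring scheme could be computed, assuming hypothetical variable names and illustrative scores (the authors' exact per-variable rubric is not reproduced in the abstract):

```python
# Minimal sketch of the risk governance index scoring described above.
# Variable names and example scores are illustrative assumptions, not the
# authors' rubric. Nine variables are scored 1-5; the chairperson status
# and CEO duality are dichotomous (3 or 5), giving a range of 15 to 55.

SCALED_VARS = [
    "board_size", "gender_diversity", "executive_directors",
    "independent_directors", "cro", "risk_mgmt_committee",
    "mandatory_committees", "voluntary_committees", "whistle_blower_policy",
]
DICHOTOMOUS_VARS = ["chairperson_status", "ceo_duality"]

def risk_governance_index(scores: dict) -> int:
    for v in SCALED_VARS:
        assert scores[v] in range(1, 6), f"{v} must be scored 1-5"
    for v in DICHOTOMOUS_VARS:
        assert scores[v] in (3, 5), f"{v} must be scored 3 or 5"
    return sum(scores[v] for v in SCALED_VARS + DICHOTOMOUS_VARS)

example = {v: 1 for v in SCALED_VARS} | {v: 3 for v in DICHOTOMOUS_VARS}
print(risk_governance_index(example))  # 15, the minimum described in the text
```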
Procedia PDF Downloads 253
163 The Efficiency of Mechanization in Weed Control in Artificial Regeneration of Oriental Beech (Fagus orientalis Lipsky.)
Authors: Tuğrul Varol, Halil Barış Özel
Abstract:
In this study, conducted in the Akçasu Forest Range District of the Devrek Forest Directorate, three methods (cover removal with human force, cover removal with a Hitachi F20 excavator, and cover removal with agricultural equipment mounted on a Ferguson 240S agricultural tractor) used in weed control during the regeneration of degraded oriental beech forests have been compared. The three methods were compared by determining work hours and standard durations per unit area (1 hectare). Evaluating the tasks performed with human and machine force in terms of duration, productivity and cost, the aim was to determine the most productive method under the actual ecological conditions of the research field. Within the scope of the study, time studies were conducted for the three methods used in weed control. In those studies, the implementations were evaluated by dividing them into work stages, and actual data were used in the cost accounts. In those calculations, up-to-date formulas and equations, as also used in developed countries, were applied. Analysis of variance (ANOVA) was used to determine whether there is any statistically significant difference among the obtained results, and the Duncan test was used for grouping where a significant difference existed. According to the measurements and findings of this study, during living cover removal in the regeneration of degraded oriental beech forests, the removal of the weed layer on 1 hectare took 920 hours with human force, 15.1 hours with the excavator, and 60 hours with the tractor-mounted equipment. The cost of removing the living cover per unit area (1 hectare) was 3220.00 TL for human force, 788.70 TL for the excavator, and 2227.20 TL for the tractor-mounted equipment. According to these results, the use of the excavator for weed control in the regeneration of degraded oriental beech areas, under the actual ecological conditions of the research field, was found to be more productive in terms of both duration and cost. These measurements should be repeated in weed control efforts on degraded forest sites with different ecological conditions, as this is essential for finding the most efficient weed control method. Such findings will guide the technical staff of forestry directorates in determining the most effective and economical weed control method. Thus, more realistic data will be used when preparing weed control budgets, and there will be significant contributions to the national economy. The results of this and similar studies are also very important for developing short- and long-term forestry policies.
Keywords: artificial regeneration, weed control, oriental beech, productivity, mechanization, man power, cost analysis
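As a hedged illustration of the statistical step, the sketch below runs a one-way ANOVA over per-plot durations for the three methods; the replicate values are invented placeholders, since the abstract reports only per-hectare totals:

```python
# Illustrative one-way ANOVA across the three weed-control methods.
# The replicate durations below are made-up placeholders; the study
# reports only aggregate figures (920 h, 15.1 h and 60 h per hectare).
from scipy import stats

manual    = [910, 925, 930]      # hours/ha, human force (hypothetical replicates)
excavator = [14.8, 15.1, 15.4]   # hours/ha, Hitachi F20 excavator
tractor   = [58, 60, 62]         # hours/ha, tractor-mounted equipment

f_stat, p_value = stats.f_oneway(manual, excavator, tractor)
print(f"F = {f_stat:.1f}, p = {p_value:.4g}")
# A p-value below 0.05 would justify the post-hoc Duncan grouping
# mentioned in the abstract (Duncan's test itself is not in SciPy).
```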
Procedia PDF Downloads 418
162 Analysis of Long-Term Response of Seawater to Change in CO₂, Heavy Metals and Nutrients Concentrations
Authors: Igor Povar, Catherine Goyet
Abstract:
Seawater is subject to multiple external stressors (ES), including rising atmospheric CO2 and ocean acidification, global warming, atmospheric deposition of pollutants, and eutrophication, which deeply alter its chemistry, often on a global scale and, in some cases, to a degree significantly exceeding that found in the historical and recent geological record. In ocean systems, micro- and macronutrients, heavy metals, and phosphorus- and nitrogen-containing components exist in different forms depending on the concentrations of various other species, organic matter, the types of minerals, the pH, etc. The major limitation to assessing more rigorously the ES on oceans, such as pollutants (atmospheric greenhouse gases, heavy metals, and nutrients such as nitrates and phosphates), is the lack of a theoretical approach that could predict the ocean's resistance to multiple external stressors. In order to assess the above-mentioned ES, this research has applied and developed the buffer theory approach and theoretical expressions of formal chemical thermodynamics for ocean systems as heterogeneous aqueous systems. Thermodynamic expressions for complex chemical equilibria, involving acid-base, complex-formation and mineral equilibria, have been deduced. This thermodynamic approach uses thermodynamic relationships coupled with original mass balance constraints in which the solid phases are expressed explicitly. The ocean's sensitivity to different external stressors and changes in driving factors is considered in terms of derived buffering capacities, or buffer factors, for heterogeneous systems. Our investigations have shown that heterogeneous aqueous systems, such as oceans and seas, manifest buffer properties towards all their components, not only pH, as has been known so far; for example, with respect to carbon dioxide, carbonates, phosphates, Ca2+, Mg2+, heavy metal ions, etc. The derived expressions make it possible to attribute changes in chemical ocean composition to different pollutants. These expressions are also useful for improving current atmosphere-ocean-marine biogeochemistry models. The major research questions to which this work responds are: (i) What kind of contamination is the most harmful for the future ocean? (ii) What are the heterogeneous chemical processes of heavy metal release from sediments and minerals, and what is their impact on the ocean's buffer action? (iii) What will be the long-term response of the coastal ocean to the oceanic uptake of anthropogenic pollutants? (iv) How will the ocean's resistance change in terms of future complex chemical processes and buffer capacities, and its response to external (anthropogenic) perturbations? The ocean buffer capacities towards its main components are recommended as parameters to be included when determining the most important factors that define the response of the ocean environment to increasing technogenic loads. The deduced thermodynamic expressions are valid for any combination of chemical composition, or any of the species contributing to the total concentration, as the independent state variable.
Keywords: atmospheric greenhouse gas, chemical thermodynamics, external stressors, pollutants, seawater
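As a hedged illustration (the paper's own expressions are not reproduced in the abstract), the classical acid-base buffer capacity, and one natural way to generalize it to an arbitrary component X, can be written as:

```latex
% Classical acid-base buffer capacity (textbook definition), where C_b is
% the concentration of added strong base:
\beta_{\mathrm{pH}} = \frac{dC_b}{d\,\mathrm{pH}}

% An illustrative generalized buffer capacity with respect to a component X,
% where T_X is its total (analytical) concentration and pX = -\log a_X.
% This generalized form is an assumption for illustration, not the authors'
% exact heterogeneous-system expression:
\beta_{X} = -\frac{\partial T_X}{\partial\,\mathrm{p}X}
```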
Procedia PDF Downloads 146
161 Small and Medium-Sized Enterprises, Flash Flooding and Organisational Resilience Capacity: Qualitative Findings on Implications of the Catastrophic 2017 Flash Flood Event in Mandra, Greece
Authors: Antonis Skouloudis, Georgios Deligiannakis, Panagiotis Vouros, Konstantinos Evangelinos, Ioannis Nikolaou
Abstract:
On November 15th, 2017, a catastrophic flash flood devastated the city of Mandra in Central Greece, resulting in 24 fatalities and extensive damage to the built environment and infrastructure. It was Greece's deadliest and most destructive flood event of the past 40 years. In this paper, we examine the consequences of this event for small and medium-sized enterprises (SMEs) operating in Mandra during the flood event, which were affected by the floodwaters to varying extents. In this context, we conducted semi-structured interviews with business owners-managers of 45 SMEs located in flood-inundated areas and still active today, based on an interview guide that spanned 27 topics. The topics pertained to the disaster experience of the business and the business owners-managers, knowledge and attitudes towards climate change and extreme weather, and aspects of disaster preparedness and related assistance needs. Our findings reveal that the vast majority of the affected businesses experienced heavy damage to equipment and infrastructure or total destruction, which resulted in business interruption lasting from several weeks up to several months. Assistance from relatives or friends helped with damage repairs and business recovery, while state compensation was deemed insufficient compared to the extent of the damage. Most interviewees pinpoint flooding as one of the most critical risks, and many connect it with the climate crisis. However, they are either unwilling or unable to apply property-level prevention measures in their businesses due to cost considerations or complex and cumbersome bureaucratic processes. In all cases, the business owners are fully aware of the flood hazard implications, and since recovering from the event they have engaged in basic mitigation measures and contingency plans in case of future flood events. Such plans include insurance contracts whenever possible (as the vast majority of the affected SMEs were uninsured at the time of the 2017 event) as well as simple relocations of critical equipment within their property. The study offers fruitful insights into latent drivers and barriers of SMEs' resilience capacity to flash flooding. In this respect, findings such as ours, which highlight tensions underpinning behavioural responses and experiences, can feed into a) bottom-up approaches for devising actionable and practical guidelines, manuals and/or standards on business preparedness for flooding, and, ultimately, b) policy-making for an enabling environment towards a flood-resilient SME sector.
Keywords: flash flood, small and medium-sized enterprises, organizational resilience capacity, disaster preparedness, qualitative study
Procedia PDF Downloads 133
160 Real-Time Neuroimaging for Rehabilitation of Stroke Patients
Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge
Abstract:
Rehabilitation of stroke patients is dominated by classical physiotherapy. A current field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to treat the patient. Certain exercises, such as imagining the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the parts of the cortex neighboring the injured region. During the exercises, it is very important to keep the patient's motivation at a high level. For this reason, the natural feedback that is missing due to the inability to move the affected limb may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback has high specificity and comes in real time without large delay. We describe such an approach that offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent the detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set are not allowed to be in the same range as those of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, which is visualized in 3D as a color-coded activity map. This approach mitigates the influence of eyelid artifacts on the localization performance. First results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases, the results match findings from visual EEG analysis. Frequent eyelid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time. Due to the design of the frequency transformation, the processing delay is 500 ms. First results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project 'LiveSolo' funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG signal processing, rehabilitation
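A minimal sketch of the channel-selection and threshold logic described above, assuming illustrative threshold values and a simplified band-power helper (the authors' actual parameters and preprocessing are not given in the abstract):

```python
# Sketch of the mu-rhythm detection step: pick the motor channel with the
# strongest alpha-band power, check it against a threshold, and reject the
# detection if non-motor channels show comparable alpha power (likely
# posterior alpha, not a mu-rhythm). Thresholds and the alpha_power
# helper are illustrative assumptions.
import numpy as np

MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def alpha_power(signal: np.ndarray, fs: float) -> float:
    """Mean power in the 8-13 Hz band via a plain FFT periodogram (simplified)."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= 8) & (freqs <= 13)
    return float(psd[band].mean())

def detect_mu(eeg: dict, fs: float, thr: float = 5.0, ratio: float = 0.8):
    """eeg maps channel name -> 1D sample array. Returns main channel or None."""
    powers = {ch: alpha_power(x, fs) for ch, x in eeg.items()}
    main = max(MOTOR_SET, key=lambda ch: powers[ch])
    if powers[main] < thr:
        return None                       # no prominent motor alpha rhythm
    others = [p for ch, p in powers.items() if ch not in MOTOR_SET]
    if others and max(others) > ratio * powers[main]:
        return None                       # posterior alpha dominates: reject
    return main                           # template channel for localization
```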
Procedia PDF Downloads 388
159 Training in Communicational Skills in Students of Medicine: Differences in Bilingualism
Authors: Naiara Ozamiz Etcebarria, Sonia Ruiz De Azua Garcia, Agurtzane Ortiz Jauregi, Virginia Guillen Cañas
Abstract:
Introduction: The most relevant competencies of a health professional include adequate communication skills, which influence the satisfaction of professionals and patients, therapeutic compliance, conflict prevention, improvement of clinical outcomes, and the efficiency of health services. Active listening, empathy, assertiveness and social skills are important abilities to develop in all professions in which there is a relationship with other people. In the field of health it is even more important to have these qualities, so that the treatment of the patient will be adequate and satisfactory. We conducted research with third-year students in the Degree of Medicine with the objectives: - to assess the students' active listening, empathy, assertiveness and social skills; - to determine whether there are differences according to demographic variables such as sex, language, age, number of siblings and interest in the subject. Material and Methods: Third-year students in the Degree of Medicine (N = 212) participated voluntarily. Sociodemographic data were collected. Descriptive and comparative analyses of the students' mean scores in active listening, empathy, assertiveness and social skills were performed. Once the questionnaires were collected, they were entered into an SPSS 21 database. Four communicational aspects were evaluated using the active listening questionnaire, the TECA empathy questionnaire, the ACDA questionnaire and the EHS Social Skills Scale. The active listening questionnaire assesses the following factors: listening without interruption and with less contradiction, listening with 100% attention, listening beyond words, and listening while encouraging the other to go deeper. The TECA questionnaire of cognitive and affective empathy evaluates: adoption of perspectives, emotional comprehension, empathic stress, and empathic joy. The EHS Social Skills Scale evaluates: self-expression in social situations, defending one's own rights as a consumer, expressing anger or dissatisfaction, saying no and cutting off interactions, making requests, and initiating positive interactions with the other sex. The ACDA Assertiveness Assessment Scale evaluates self-assertiveness and hetero-assertiveness. Applicability: Training these skills is important for the clinical practice of medical students, and these capabilities can be measured longitudinally over time. Ethical-legal aspects: The data were anonymous. The study was approved by the Ethics Committee. Results: Third-year students in the Degree of Medicine (34.4% Basque speakers and 65.6% Spanish speakers), with an average age of 20.93 (27.8% men and 72.2% women), participated. There are no differences in social skills between men and women. The Basque-speaking students are more hetero-assertive (ACDA) than the Spanish-speaking students. Active listening has a high correlation with social skills, especially with self-expression in social situations. Listening without interruption has a high correlation with self-expression in social situations and with initiating positive interactions with the opposite sex. Adoption of perspectives presents a high correlation with self-assertiveness. Emotional comprehension presents a high correlation with positive interactions with the opposite sex. Empathic joy correlates with self-assertiveness, self-expression in social situations, and initiating positive interactions with the opposite sex.
Keywords: active listening, assertiveness, communicational skills, empathy, students of medicine
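A hedged sketch of the kind of correlation and group-comparison analysis reported above, assuming the questionnaire subscale scores sit in a pandas DataFrame with hypothetical column names (the study itself used SPSS 21):

```python
# Illustrative correlation analysis between questionnaire subscales.
# Column names and the CSV file are assumptions for the sketch.
import pandas as pd

df = pd.read_csv("communication_scores.csv")  # hypothetical export, one row per student

subscales = ["active_listening", "self_expression", "self_assertiveness",
             "empathic_joy", "positive_interactions"]
corr = df[subscales].corr(method="pearson")
print(corr.round(2))

# Group comparison analogous to the Basque- vs Spanish-speaker contrast:
print(df.groupby("language")["hetero_assertiveness"].mean())
```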
Procedia PDF Downloads 303
158 Transnational Solidarity and Philippine Society: A Probe on Trafficked Filipinos and Economic Inequality
Authors: Shierwin Agagen Cabunilas
Abstract:
Countless Filipinos are reeling under dire economic inequality, while many others are victims of human trafficking. Where there is extreme economic inequality, the majority of Filipinos are deprived of the basic needs for a good life, i.e., decent shelter, a safe environment, food, quality education, social security, etc. The problem of human trafficking poses a scandal and a threat to the human rights and dignity of persons in matters of sex, gender, ethnicity and race, among others. Economic inequality and trafficking in persons are social pathologies that need a considerable amount of attention and visible solutions at both the national and international levels. However, the Philippine government seems to fall short of its goals to lessen, if not altogether eradicate, the dire fate of many Filipinos. The lack of solidarity among Filipinos seems to further aggravate injustice and create hindrances to economic equity and the protection of Filipinos from syndicated crimes, i.e., human trafficking. Indifference towards the welfare and well-being of the Filipino people traps them in an unending cycle of marginalization and neglect. Transnational solidaristic action in response to these concerns is imperative. The subsequent sections first discuss the notion of solidarity and the motivating factors for collective action. While solidarity has previously been thought of as stemming from and for one's own community and people, it can be argued to be a value that defies borders. Solidarity bridges peoples of diverse societies and cultures. Although there are limits to international interventions in another state's sovereignty, such as internal political autonomy, transnational solidarity need not stand in opposition to solidarity with people suffering injustices. Governments, nations and institutions can work together in securing justice. Solidarity is thus a positive political action that can best respond to issues of economic, class, racial and gender injustice. This is followed by a critical analysis of data on Philippine economic inequality and human trafficking, linking them to the place of transnational solidaristic arrangements. Here, the present work is interested in the normative aspect of the problem. It begins with the section on economic inequality and, subsequently, human trafficking. It is argued that transnational solidarity is vital in assisting Philippine governing bodies and authorities to seriously execute innovative economic policies and development programs that are justice- and egalitarian-oriented. Transnational solidarity provides a corrective measure for the economic practices and activities of the Philippine government. Moreover, it is suggested that mitigating Philippine economic inequality and human trafficking concerns involves (a) a historical analysis of the systems that brought about economic anomalies, (b) renewed and innovative economic policies, (c) mutual trust and relatively high transparency, and (d) a grass-roots and context-based approach. In conclusion, the findings are briefly sketched and integrated in the optimistic view that transnational solidarity is capable of influencing Philippine governing bodies towards socio-economic transformation and the development of the lives of Filipinos.
Keywords: Philippines, Filipino, economic inequality, human trafficking, transnational solidarity
Procedia PDF Downloads 281
157 The Impact of the Use of Some Multiple Intelligence-Based Teaching Strategies on Developing Moral Intelligence and Inferential Jurisprudential Thinking among Secondary School Female Students in Saudi Arabia
Authors: Sameerah A. Al-Hariri Al-Zahrani
Abstract:
The current study aims to determine the impact of the use of some multiple intelligence-based teaching strategies on developing moral intelligence and inferential jurisprudential thinking among secondary school female students. The study endeavors to answer the following question: What is the impact of the use of some multiple intelligence-based teaching strategies on developing inferential jurisprudential thinking and moral intelligence among first-year secondary school female students? Within the frame of this main research question, the study seeks to answer the following sub-questions: (i) What are the inferential jurisprudential thinking skills among first-year secondary school female students? (ii) What are the components of moral intelligence among first-year secondary school female students? (iii) What is the impact of the use of some multiple intelligence-based teaching strategies (such as the strategies of analyzing values, modeling, Socratic discussion, collaborative learning, peer collaboration, collective stories, building emotional moments, role play, and one-minute observation) on moral intelligence among first-year secondary school female students? (iv) What is the impact of the use of these same strategies on developing the capacity for inferential jurisprudential thinking about juristic rules among first-year secondary school female students? The study used the descriptive-analytical methodology in surveying, analyzing, and reviewing the literature of previous studies in order to benefit from them in building the tools of the study and the materials of experimental treatment. The study also used the experimental method to study the impact of the independent variable (multiple intelligence strategies) on the two dependent variables (moral intelligence and inferential jurisprudential thinking) in first-year secondary school female students' learning. The sample of the study is made up of 70 female students divided into two groups: an experimental group consisting of 35 students who were taught through multiple intelligence strategies, and a control group consisting of the other 35 students who were taught conventionally. The two tools of the study (the inferential jurisprudential thinking test and the moral intelligence scale) were administered to the two groups as a pre-test. The researcher taught the experimental group and administered the two tools of the study. After the experiment, which lasted eight weeks, the study showed the following results: (i) statistically significant differences (at the 0.05 level) between the mean scores of the control group and the experimental group in the inferential jurisprudential thinking test (recognition of the evidence for a jurisprudential rule, recognition of the motive for a jurisprudential rule, jurisprudential inferencing, analogical jurisprudence) in favor of the experimental group; (ii) statistically significant differences (at the 0.05 level) between the mean scores of the control group and the experimental group on the components of the moral intelligence scale (sympathy, conscience, moral wisdom, tolerance, justice, respect) in favor of the experimental group. The study has thus demonstrated the impact of the use of some multiple intelligence-based teaching strategies on developing moral intelligence and inferential jurisprudential thinking.
Keywords: moral intelligence, teaching, inferential jurisprudential thinking, secondary school
Procedia PDF Downloads 160
156 Characterization of Surface Microstructures on Bio-Based PLA Fabricated with Nano-Imprint Lithography
Authors: D. Bikiaris, M. Nerantzaki, I. Koliakou, A. Francone, N. Kehagias
Abstract:
In the present study, the formation of structures in poly(lactic acid) (PLA) has been investigated with respect to producing areas of regular, superficial features with dimensions comparable to those of cells or biological macromolecules. Nanoimprint lithography, a method of pattern replication in polymers, has been used for the production of features ranging from tens of micrometers, covering areas up to 1 cm², down to hundreds of nanometers. Both micro- and nanostructures were faithfully replicated. Potentially, PLA has wide uses within biomedical fields, from implantable medical devices, including screws and pins, to membrane applications, such as wound covers, and even as an injectable polymer, for example for lipoatrophy. The possibility of fabricating structured PLA surfaces, with structures of the dimensions associated with cells or biological macromolecules, is of interest in fields such as cellular engineering. Imprint-based technologies have demonstrated the ability to selectively imprint polymer films over large areas, resulting in 3D imprints over flat, curved or pre-patterned surfaces. Here, we compare a non-patterned PLA film with one nano-patterned by nanoimprint lithography (NIL). A silicon nanostructured stamp (provided by the Nanotypos company) having positive and negative protrusions was used to pattern PLA films by means of thermal NIL. The polymer film was heated to 40-60°C above its Tg and embossed with a pressure of 60 bars for 3 min. The stamp and substrate were demolded at room temperature. Scanning electron microscope (SEM) images showed good replication fidelity of the Si stamp. Contact-angle measurements suggested that positive microstructuring of the polymer (where features protrude from the polymer surface) produced a more hydrophilic surface than negative microstructuring. The ability to structure the surface of poly(lactic acid) is allied to the polymer's post-processing transparency and proven biocompatibility. Films produced in this way were also shown to enhance the aligned attachment behavior and proliferation of Wharton's Jelly mesenchymal stem cells, leading to the observed growth contact guidance. The attachment patterns of some bacteria highlighted that the nano-patterned PLA structure can reduce the propensity of bacteria to attach to the surface, with a greater bactericidal activity demonstrated against Staphylococcus aureus cells. These biocompatible, micro- and nanopatterned PLA surfaces could be useful for polymer-cell interaction experiments at dimensions at, or below, that of individual cells. Indeed, post-fabrication modification of the microstructured PLA surface with materials such as collagen (which can further reduce the hydrophobicity of the surface) will extend the range of applications, possibly through the use of PLA's inherent biodegradability. Further study is being undertaken to examine whether these structures promote cell growth on the polymer surface.
Keywords: poly(lactic acid), nano-imprint lithography, anti-bacterial properties, PLA
Procedia PDF Downloads 331
155 Reading Comprehension in Profound Deaf Readers
Authors: S. Raghibdoust, E. Kamari
Abstract:
Research shows that reduced functional hearing has a detrimental influence on the ability of an individual to establish proper phonological representations of words, since phonological representations are claimed to mediate the conceptual processing of written words. Word processing efficiency is expected to decrease with a decrease in functional hearing. In other words, it is predicted that hearing individuals will be more capable of word processing than individuals with hearing loss, as their functional hearing works normally. Studies also demonstrate that the quality of functional hearing affects reading comprehension via its effect on word processing skills. In other words, better hearing facilitates the development of phonological knowledge and can promote enhanced strategies for the recognition of written words, which in turn positively affect the higher-order processes underlying reading comprehension. The aims of this study were to investigate and compare the effect of deafness on the participants' abilities to process written words at the lexical and sentence levels, using two online tests and one offline reading comprehension test. The performance of a group of 8 deaf male students (ages 8-12) was compared with that of a control group of normal-hearing male students. All the participants had normal IQ and visual status and came from an average socioeconomic background. None were diagnosed with a particular learning or motor disability. The language spoken in the homes of all participants was Persian. Two tests of word processing were developed and presented to the participants using OpenSesame software, in order to measure the speed and accuracy of their performance at the perceptual and conceptual levels. In the third, offline test of reading comprehension, which comprised semantically plausible and semantically implausible subject relative clauses, the participants had to select the correct answer out of two choices. The statistical analysis, performed using SPSS software, indicated that the hearing and deaf participants had similar word processing performance, both in terms of speed and accuracy of their responses. The results also showed that there was no significant difference between the performance of the deaf and hearing participants in comprehending semantically plausible sentences (p > 0.05). However, a significant difference between the performances of the two groups was observed with respect to their comprehension of semantically implausible sentences (p < 0.05). In sum, the findings revealed that the seriously impoverished sentence reading ability characterizing the profoundly deaf subjects of the present research reflects their reliance on reading strategies based on insufficient or deviant structural knowledge, in particular when processing semantically implausible sentences, rather than a failure to efficiently process written words at the lexical level. This conclusion, of course, does not mean that deaf individuals may never experience deficits at the word processing level that impede their understanding of written texts. However, as stated in previous research, it seems reasonable to assume that the more deaf individuals become familiar with written words, the better they can recognize them, despite having a profound phonological weakness.
Keywords: deafness, reading comprehension, reading strategy, word processing, subject and object relative sentences
Procedia PDF Downloads 339
154 Linear Evolution of Compressible Görtler Vortices Subject to Free-Stream Vortical Disturbances
Authors: Samuele Viaro, Pierre Ricco
Abstract:
Görtler instabilities arise in boundary layers from an imbalance between pressure and centrifugal forces caused by concave surfaces. Their spatial streamwise evolution influences transition to turbulence. It is therefore important to understand even the early stages, where the perturbations, still small, grow linearly and could be controlled more easily. This work presents a rigorous theoretical framework for compressible flows using the linearized unsteady boundary region equations, where only the streamwise pressure gradient and streamwise diffusion terms are neglected from the full governing equations of fluid motion. Boundary and initial conditions are imposed through an asymptotic analysis in order to account for the interaction of the boundary layer with free-stream turbulence. The resulting parabolic system is discretized with a second-order finite difference scheme. Realistic flow parameters are chosen from wind tunnel studies performed at supersonic and subsonic conditions: the Mach number ranges from 0.5 to 8, with two different radii of curvature, 5 m and 10 m, frequencies up to 2000 Hz, and vortex spanwise wavelengths from 5 mm to 20 mm. The evolution of the perturbation flow is shown through velocity, temperature and pressure profiles relatively close to the leading edge, where nonlinear effects can still be neglected, and through the growth rate. Results show that a global stabilizing effect exists with increasing Mach number, frequency, spanwise wavenumber and radius of curvature. In particular, at high Mach numbers curvature effects are less pronounced, and thermal streaks become stronger than velocity streaks. This increase in temperature perturbations saturates at approximately Mach 4 and is limited to the early stage of growth, near the leading edge. In general, Görtler vortices evolve closer to the surface than in a flat-plate scenario, but their location shifts toward the edge of the boundary layer as the Mach number increases. In fact, a jet-like behavior appears for steady vortices having small spanwise wavelengths (less than 10 mm) at Mach 8, creating a region of unperturbed flow close to the wall. A similar response is also found at the highest frequency considered for a Mach 3 flow. Larger vortices are found to have a higher growth rate but are less influenced by the Mach number. An eigenvalue approach is also employed to study the amplification of the perturbations sufficiently far downstream from the leading edge. These eigenvalue results are compared with the ones obtained through the initial value approach with inhomogeneous free-stream boundary conditions. All of the parameters studied here have a significant influence on the evolution of the instabilities for the Görtler problem, which is indeed highly dependent on initial conditions.
Keywords: compressible boundary layers, Görtler instabilities, receptivity, turbulence transition
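For orientation, a commonly used form of the Görtler number, which measures the relative strength of the centrifugal effects, and the usual spanwise-periodic disturbance ansatz of linear analysis are sketched below; these are standard textbook choices, not necessarily the exact normalization adopted in the paper:

```latex
% Görtler number based on the momentum thickness \theta, edge velocity U_e,
% kinematic viscosity \nu and wall radius of curvature r (one standard form;
% the paper's own normalization may differ):
G_{\theta} = \frac{U_e \theta}{\nu} \sqrt{\frac{\theta}{r}}

% Disturbances are typically taken periodic in the spanwise direction z,
% with spanwise wavenumber \beta and frequency \omega:
q'(x, y, z, t) = \hat{q}(x, y)\, e^{i(\beta z - \omega t)} + \mathrm{c.c.}
```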
Procedia PDF Downloads 254
153 Multiscale Modelization of Multilayered Bi-Dimensional Soils
Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R Bennaceur
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean stationary Gaussian random processes, with roughness behavior characterized by statistical parameters like the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor due to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each one having a spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the other is related to the fractal dimension. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm in order to describe natural surfaces more correctly. We characterize the soil surfaces and sub-surfaces by a three-layer geo-electrical model. The upper layer is described by its dielectric constant, its thickness, a multiscale bi-dimensional surface roughness model (using the wavelet transform and the Mallat algorithm), and volume scattering parameters. The lower layer is divided into three fictive layers separated by an assumed plane interface; these three layers are modeled as an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We have adopted the 2D multiscale three-layer small perturbation model (SPM), including, first, air pockets in the soil substructure, and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the backscattering coefficient's dependence on the multiscale roughness and the new soil moisture has been performed. Later, we proposed changing the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant, was carried out. Finally, we proposed to study the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure.
Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets
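As a hedged illustration of the 2D multiscale decomposition step (the Mallat algorithm), the sketch below decomposes a surface height map with PyWavelets; the wavelet family, level count and synthetic surface are assumptions, not the authors' stated configuration:

```python
# Two-dimensional multiresolution (Mallat) decomposition of a surface
# height map using PyWavelets. The Daubechies wavelet and 3 levels are
# illustrative choices.
import numpy as np
import pywt

rng = np.random.default_rng(0)
surface = rng.standard_normal((256, 256))     # synthetic rough-surface stand-in

coeffs = pywt.wavedec2(surface, wavelet="db2", level=3)
cA = coeffs[0]                                # coarsest approximation
# Detail tuples are ordered from coarsest to finest scale.
for depth, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    # Per-scale detail energy: a proxy for roughness at that spatial scale.
    energy = np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2)
    print(f"scale {depth}: detail energy = {energy:.1f}")
```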
Procedia PDF Downloads 125
152 Performance Evaluation of Various Displaced Left Turn Intersection Designs
Authors: Hatem Abou-Senna, Essam Radwan
Abstract:
With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety; these are two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase left-turn capacity and reduce delay at intersections, the Florida Department of Transportation (FDOT) is moving forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise considerably reduces the efficiency of a conventional intersection (CI), and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on Crossover Displaced Left-turn (XDL) intersections, also known as Continuous Flow Intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether partial XDL intersections are adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate the overall intersection performance of different partial XDL designs compared to a full XDL. The XDL alternative was investigated for four different scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all four approaches. Also, the impact of increasing volume on intersection performance was considered by modeling the unbalanced volumes in 10% increments, resulting in five different traffic scenarios. The study intersection, located in Orlando, Florida, experiences recurring congestion in the PM peak hour and operates near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements: southbound and westbound. The results showed that the partial EN XDL alternative proved to be effective and compared favorably to the full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the full, EW and EN XDL alternatives outperformed the NS XDL and CI alternatives with respect to throughput, delay and queue lengths. Significant throughput improvements were remarkable at the higher volume level, with a percent increase in capacity of 25%. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths showed percent reductions in the XDL scenarios ranging from 25-40%. The analysis revealed how a partial XDL design can improve the overall intersection performance at various demands, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes or only one of the critical movements, while other critical movements operate near or above capacity, does not provide significant benefits when compared to the conventional intersection.
Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model
Procedia PDF Downloads 311
151 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been applied less. This work presents a hardware implementation of a model-driven approach reported in the literature for contact force reconstruction in flat, rigid tactile sensor arrays from normal stress data. Based on the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of the matrix operations and of a two-dimensional optimization function used to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate parallelization techniques for the algorithms, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% for the tangential forces and 22% for the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on an MPSoC XCZU9EG-2FFVB1156 platform from Xilinx® that allows the reconstruction of force vectors following a scalable approach, from information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time of x/180 compared to software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions, and the triaxial reconstruction of forces, allows the tactile properties of the touched object to be adequately reconstructed; these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
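A minimal software sketch of the per-taxel reconstruction idea, assuming a linear model in which measured normal stresses relate to the force components through a known calibration matrix; the matrix and data here are invented placeholders, and the actual model-driven method referenced by the authors is more elaborate:

```python
# Per-taxel force-vector estimation posed as a linear least-squares problem.
# 'A' maps a taxel's 3D force vector (fx, fy, fz) to the normal-stress
# readings of the taxel and its neighbors; it is a made-up calibration
# matrix standing in for the model-driven formulation in the paper.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 3))           # 3x3 stress neighborhood -> 3 force components
true_force = np.array([0.1, -0.05, 1.0])  # N; mostly normal load
stress = A @ true_force + 0.01 * rng.standard_normal(9)  # noisy readings

# One independent solve per taxel: this is the step that parallelizes
# naturally across the FPGA fabric in the hardware implementation.
f_hat, *_ = np.linalg.lstsq(A, stress, rcond=None)
print("estimated force vector:", np.round(f_hat, 3))
```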
Procedia PDF Downloads 196
150 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in different areas of scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the space or transform domain. Unlike images and video, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented in different ways as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; it can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has been useful for hiding data. However, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertex norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimization approaches were introduced concerning the mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, as well as against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
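A hedged sketch of the core embedding idea described above: bin the vertex norms (distances from the object center) and nudge the variance of the norms in each bin up or down to encode one watermark bit per bin. The bin count, the strength parameter, and the mapping of bits to variance shifts are illustrative assumptions, not the paper's values:

```python
# Sketch: embed one bit per bin by shifting vertex norms within the bin,
# which raises or lowers the bin's norm variance around its mean.
import numpy as np

def embed(vertices: np.ndarray, bits: list, alpha: float = 0.05) -> np.ndarray:
    center = vertices.mean(axis=0)
    rel = vertices - center
    norms = np.linalg.norm(rel, axis=1)
    edges = np.linspace(norms.min(), norms.max(), len(bits) + 1)
    out = vertices.copy()
    for b, bit in enumerate(bits):
        idx = (norms >= edges[b]) & (norms <= edges[b + 1])
        if not idx.any():
            continue
        mean = norms[idx].mean()
        # bit 1: push norms away from the bin mean (raise variance);
        # bit 0: pull them toward it (lower variance).
        scale = 1 + alpha if bit else 1 - alpha
        new_norms = mean + scale * (norms[idx] - mean)
        out[idx] = center + rel[idx] * (new_norms / norms[idx])[:, None]
    return out

mesh_vertices = np.random.default_rng(2).standard_normal((1000, 3))
watermarked = embed(mesh_vertices, [1, 0, 1, 1])
```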
Procedia PDF Downloads 161
149 Quantitative Comparisons of Different Approaches for Rotor Identification
Authors: Elizabeth M. Annoni, Elena G. Tolkacheva
Abstract:
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and a known prognostic marker for stroke, heart failure and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping system has been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point, and that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches, Shannon entropy (SE), kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE), to identify the pivot points of rotors. These proposed techniques utilize cardiac signal characteristics other than local activation to uncover the intrinsic complexity of the electrical activity in the rotors, which is not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used, showing 3-sec episodes of a single stationary rotor and of figure-8 reentry with one rotor stationary and one meandering. Movies were captured at a rate of 600 frames per second for 3 sec with 64x64-pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of the SE, Kt, MSF and MSE techniques with respect to the following clinical limitations: different durations of recordings, different spatial resolutions, and the presence of meandering rotors. To quantitatively compare the results, the SE, Kt, MSF and MSE techniques were compared to the "true" rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affected the performance of the Kt, MSF and MSE techniques, but had a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients to see if these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
Keywords: atrial fibrillation, optical mapping, signal processing, rotors
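A hedged sketch of how two of the named metrics could be computed pixel-wise over an optical-mapping movie; the histogram-based Shannon entropy estimator and bin count are illustrative choices, and the synthetic movie stands in for real data:

```python
# Pixel-wise Shannon entropy and kurtosis maps over an optical-mapping
# movie of shape (frames, height, width). Rotor pivot points are expected
# to show distinctive values of such metrics. The 32-bin histogram
# entropy estimator is an illustrative choice.
import numpy as np
from scipy.stats import kurtosis, entropy

movie = np.random.default_rng(3).random((1800, 64, 64))  # 3 s at 600 fps (synthetic)

kurt_map = kurtosis(movie, axis=0)        # one kurtosis value per pixel

def pixel_entropy(ts: np.ndarray, bins: int = 32) -> float:
    hist, _ = np.histogram(ts, bins=bins)
    return float(entropy(hist / hist.sum(), base=2))

ent_map = np.apply_along_axis(pixel_entropy, 0, movie)
print(kurt_map.shape, ent_map.shape)      # (64, 64) (64, 64)
```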
Procedia PDF Downloads 324
148 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping
Authors: Emily Rowe
Abstract:
Introduction: Balance assessments can be used to help evaluate a person's risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the kinematics of the body centre of mass (COM) during pre-initiation. Based on this, the potential to use the COM velocity just prior to foot-off, together with foot placement error, as an outcome measure of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measuring foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants, who each attended for two sessions. The trial task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and the combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, two-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated-measures ANOVAs were carried out. Results: It was found that foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability, given the nature of the stepping task, and brings into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from, and much less precise than, the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure of anteroposterior and mediolateral COM velocities (AP velocity: p=0.000; ML velocity targets 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development the validity of the COM velocity measure could be improved. Possible options which could be investigated include whether there is an effect of inertial sensor placement with respect to pelvic marker placement, or implementing more complex data processing methods to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential to measure COM velocity; however, further development work is needed.
Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables
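A hedged sketch of the reliability computation, assuming long-format session data with hypothetical column names; pingouin reports the McGraw-Wong ICC variants, of which the absolute-agreement, mean-of-k form is closest to the model described above:

```python
# ICC estimate with 95% CI for test-retest reliability across two sessions.
# Column names and the CSV are assumptions; the study's own analysis
# pipeline is not specified beyond the ICC model it used.
import pandas as pd
import pingouin as pg

# Long format: one row per (participant, session) measurement.
df = pd.read_csv("balance_sessions.csv")  # columns: participant, session, com_velocity

icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="session", ratings="com_velocity")
# 'ICC2k' is the absolute-agreement, mean-of-k-raters form, closest to the
# mean-rating (k = 2), absolute-agreement model reported in the abstract.
print(icc.set_index("Type").loc["ICC2k", ["ICC", "CI95%"]])
```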
Procedia PDF Downloads 153147 High Impact Biostratigraphic Study of Amama-1 and Bara-1 Wells in Parts of Anambra Basin
Authors: J. O. Njoku, G. C. Soronnadi-Ononiwu, E. J. Acrra, C. C. Agoha, T. C. Anyawu
Abstract:
The high impact biostratigraphic study of parts of the Anambra basin was carried out using samples from two exploration wells, Amama-1 (219 m–1829 m) and Bara-1 (317 m–1594 m). Palynological and paleontological analyses were carried out on 100 ditch-cutting samples. The faunal and floral successions were of terrestrial and marine origin, as described and logged. The wells penetrated four stratigraphic units in the Anambra Basin (the Nkporo, Mamu, Ajali and Nsukka) and yielded well-preserved foraminifera and palynomorphs. Amama-1 yielded 53 species of foraminifera and 69 species of palynomorphs, with 12 genera; Bara-1 yielded 25 species of foraminifera and 101 species of palynomorphs. Amama-1 permitted the recognition of 21 genera with 31 foraminiferal assemblage zones, 32 pollen and 37 spore assemblage zones, and dinoflagellate cyst biozonation, ranging from the late Campanian to the early Paleocene. Bara-1 yielded 60 pollen and 41 spore assemblage zones and 18 dinoflagellate cysts. The zones, in stratigraphically ascending order for the foraminifera and palynomorphs, are as follows. Foraminifera, Amama-1: Biozone A, Globotruncanella havanensis zone, Late Campanian–Maastrichtian (695–1829 m); Biozone B, Morozovella velascoensis zone, Early Paleocene (165–695 m). Bara-1: Biozone A, Globotruncanella havanensis zone, Late Campanian (1512 m); Biozone B, Bolivina afra–B. explicata zone, Maastrichtian (634–1204 m); Biozone C, indeterminate (305–634 m). Palynomorphs, Amama-1: A, Ctenolophonidites costatus zone, Early Maastrichtian (1829 m); B, Retidiporites miniporatus zone, Late Maastrichtian (1274 m); C, Constructipollenites ineffectus zone, Early Paleocene (695 m). Bara-1: A, Droseridites senonicus zone, Late Campanian (994–1600 m); B, Ctenolophonidites costatus zone, Early Maastrichtian (713–994 m); C, Retidiporites miniporatus zone, Late Maastrichtian (305–713 m). The paleo-environments of deposition were determined to range from non-marine to outer neritic. A detailed categorization of the palynomorphs into terrestrially derived and marine derived forms, based on the distribution of three broad vegetational types (mangrove, freshwater swamp and hinterland communities), was used to evaluate sea level fluctuations with respect to the sediments deposited in the basin and linked to particular depositional system tracts. Amama-1 recorded four maximum flooding surfaces (MFS) between 165 m and 1829 m, dated between 61 Ma and 76 Ma, and three sequence boundaries (SB) at 1048 m, 1533 m and 1581 m; Bara-1 recorded maximum flooding surfaces between 634 m and 1387 m, dated between 69.5 Ma and 82 Ma, and four sequence boundaries (SB) at 552–876 m, dated between 68 Ma and 77.5 Ma, respectively. The ecostratigraphic description is characterised by the prominent expansion of the hinterland component, consisting of mangrove to lowland rainforest and Afromontane–savannah vegetation. Keywords: foraminifera, palynomorphs, Campanian, Maastrichtian, ecostratigraphic, Anambra
Procedia PDF Downloads 16146 Hydrocarbons and Diamondiferous Structures Formation at Different Depths of the Earth's Crust
Authors: A. V. Harutyunyan
Abstract:
The investigation of rocks at high pressures and temperatures has revealed the intervals of changes in seismic wave velocities and density, as well as some of the processes taking place in the rocks. In the serpentinized rocks, abrupt changes in seismic wave velocities and density have been recorded as a consequence of dehydration. Hydrogen-bearing components are released, which combine with carbon-bearing components; as a result, hydrocarbons are formed, and the investigated samples are melted. The geofluids and hydrocarbons then migrate into the upper horizons of the Earth's crust along the deep faults, where their differentiation and accumulation take place in the jointed rocks of the faults and in layers with collecting properties. Under the majority of hydrocarbon deposits, magmatic centres and deep faults are recorded at a certain depth. The investigation results on the serpentinized rocks, together with numerous geological-geophysical factual data, allow the understanding that hydrocarbons are formed mainly both in the offshore parts of the oceans and at different depths of the continental crust. Experiments have also shown that the dehydration of the serpentinized rocks is accompanied by an explosion, with an instantaneous increase in pressure and temperature and melting of the studied rocks. According to numerous publications, hydrocarbons and diamonds are formed in the upper part of the mantle, at depths of 200–400 km, and, as a consequence of geodynamic processes, they rise to the upper horizons of the Earth's crust through narrow channels. However, the genesis of metamorphogenic diamonds, and of the diamonds found in the lava streams formed within the Earth's crust, remains unclear. Since superhigh pressures and temperatures arise at dehydration, it is assumed that diamond crystals are formed from carbon-containing components present in the dehydration zone. It can also be assumed that, besides the explosion at dehydration, secondary explosions of the released hydrogen take place. The process is naturally accompanied by seismic phenomena, causing earthquakes of different magnitudes at the surface. As for the diamondiferous kimberlites, it is well known that the majority of them are located within the ancient shields and platforms and are not necessarily connected with the deep faults. The kimberlites are formed where dehydrated masses lie at shallow depth in the Earth's crust. The kimberlites are younger than the ancient rocks they contain, which include serpentinized basites and ultrabasites, relicts of the paleo-oceanic crust. Sometimes diamonds containing water and hydrocarbons are found, showing their simultaneous genesis. Thus, according to the new concept put forward, geofluids, hydrocarbons and diamonds are formed simultaneously from serpentinized rocks as a consequence of their dehydration at different depths of the Earth's crust. Based on the proposed concept, we suggest discussing the following: the genesis of gigantic hydrocarbon deposits located in the offshore areas of the oceans (North American, Gulf of Mexico, Cuanza-Cameroonian, East Brazilian, etc.) as well as in the continental parts of different mainlands (Canadian-Arctic, Caspian, East Siberian, etc.), and the genesis of metamorphogenic diamonds and diamonds in the lava streams (Guinea-Liberian, Kokchetav, Canadian, Kamchatka-Tolbachik, etc.). Keywords: dehydration, diamonds, hydrocarbons, serpentinites
Procedia PDF Downloads 341145 Effect of Rolling Shear Modulus and Geometric Make-up on the Out-of-Plane Bending Performance of Cross-Laminated Timber Panel
Authors: Md Tanvir Rahman, Mahbube Subhani, Mahmud Ashraf, Paul Kremer
Abstract:
Cross-laminated timber (CLT) is made from layers of timber boards orthogonally oriented through the thickness, and because of this, CLT can withstand bi-axial bending, in contrast with most other engineered wood products such as laminated veneer lumber (LVL) and glued laminated timber (GLT). Wood is cylindrically anisotropic in nature and is characterized by significantly lower elastic and shear moduli in the planes perpendicular to the fibre direction. It is therefore classified as an orthotropic material and is characterized by nine elastic constants: three elastic moduli (in the longitudinal, tangential and radial directions), three shear moduli (in the longitudinal-tangential, longitudinal-radial and radial-tangential planes) and three Poisson's ratios. For simplification, timber is generally assumed to be transversely isotropic, reducing the number of characterizing elastic properties to five, with the longitudinal and radial planes assumed to be planes of symmetry. The validity of this assumption was investigated through numerical modelling of CLT with both orthotropic and transversely isotropic material properties for three softwood species (Norway spruce, Douglas fir and radiata pine) and three hardwood species (Victorian ash, beech and aspen) subjected to uniformly distributed loading under simply supported boundary conditions. It was concluded that assuming the timber to be transversely isotropic results in a negligible error, of the order of 1 percent. It was also observed that, along with the longitudinal elastic modulus, the ratio of the longitudinal shear modulus (GL) to the rolling shear modulus (GR) has a significant effect on deflection for CLT panels of lower span-to-depth ratio. For softwoods such as Norway spruce and radiata pine, the ratio GL/GR is reported in the literature to be of the order of 12 to 15. This results in shear flexibility in the transverse layers, leading to increased deflection under out-of-plane loading. The rolling shear modulus of hardwoods has been found to be significantly higher than that of softwoods, with GL/GR ratios as low as 4. This has resulted in a significant rise in research into the manufacture of CLT entirely from hardwood, as well as from a combination of softwood and hardwood. The beam theories commonly used to analyze the performance of CLT panels under out-of-plane loads are the shear analogy method, the gamma method and the k-method. The shear analogy method has been found to be the most effective where shear deformation is significant. The effect of the ratio of the longitudinal shear modulus to the rolling shear modulus of the cross-layers on the deflection of CLT under uniformly distributed load, with respect to its span-to-depth ratio, was investigated using the shear analogy method. It was observed that shear deflection is reduced significantly as the ratio of the shear modulus of the longitudinal layers to the rolling shear modulus of the cross-layers decreases. This indicates that there is significant room for improvement of the bending performance of CLT through the development of hybrid CLT from a mix of softwood and hardwood. Keywords: rolling shear modulus, shear deflection, ratio of shear modulus and rolling shear modulus, timber
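As an illustration of the shear analogy calculation described above, the following minimal sketch computes the effective bending stiffness EI_eff, the effective shear stiffness GA_eff, and the midspan deflection w = 5qL^4/(384 EI_eff) + qL^2/(8 GA_eff) of a simply supported CLT strip under uniformly distributed load. The layer properties (an 11 GPa softwood with GL/GR of roughly 13) are placeholder values for illustration, not the species data used in the study.

```python
import numpy as np

def shear_analogy_deflection(E, G, t, b, q, L):
    """Midspan deflection (m) of a simply supported CLT strip under UDL q (N/m)
    via the shear analogy method. E, G, t: per-layer MOE (Pa), shear modulus (Pa)
    and thickness (m), listed top to bottom; cross-layers carry their rolling
    shear modulus G_R in G."""
    E, G, t = map(np.asarray, (E, G, t))
    z_edges = np.concatenate(([0.0], np.cumsum(t)))
    z_c = 0.5 * (z_edges[:-1] + z_edges[1:])          # layer centroids
    z_bar = np.sum(E * t * z_c) / np.sum(E * t)       # elastic neutral axis
    EI_eff = np.sum(E * b * t**3 / 12 + E * b * t * (z_c - z_bar)**2)
    a = z_c[-1] - z_c[0]                              # distance between outer-layer centroids
    # Effective shear stiffness: half thickness of outer layers, full inner layers
    flex = t[0] / (2 * G[0] * b) + t[-1] / (2 * G[-1] * b) \
         + np.sum(t[1:-1] / (G[1:-1] * b))
    GA_eff = a**2 / flex
    return 5 * q * L**4 / (384 * EI_eff) + q * L**2 / (8 * GA_eff)

# Illustrative 5-layer softwood panel: 11 GPa longitudinal layers; cross-layers
# enter with a low perpendicular MOE and a rolling shear modulus G_R ~ G_L / 13
E = [11e9, 0.37e9, 11e9, 0.37e9, 11e9]
G = [690e6, 50e6, 690e6, 50e6, 690e6]   # cross-layers: rolling shear modulus
t = [0.03] * 5                          # 30 mm layers
print(f"w_mid = {shear_analogy_deflection(E, G, t, b=1.0, q=5e3, L=4.0)*1e3:.2f} mm")
```

Running the same function with a higher cross-layer rolling shear modulus (as reported for hardwoods) reduces the qL^2/(8 GA_eff) term, which is the effect the study quantifies.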
Procedia PDF Downloads 128144 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market; all other assets in financial markets derive their value from the bond market. Owing to its over-the-counter nature, the corporate bond market has relatively little publicly available data and is thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTMs have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared with other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, which traditional neural networks fail to capture, owing to their memory function. In this study, a simple LSTM, a stacked LSTM and a masked LSTM-based model are discussed with respect to varying input sequences (three days, seven days and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, an empirical mode decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time-series models (ARIMA), shallow neural networks and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored more within the asset management industry. Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
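To make the modelling pipeline concrete, the following is a minimal sketch of a sliding-window LSTM forecaster in PyTorch. The synthetic random-walk price series, the seven-day lookback and the network sizes are illustrative assumptions; the study's masked-LSTM variant, technical indicators and EMD preprocessing are not reproduced here.

```python
import numpy as np
import torch
import torch.nn as nn

def windows(series, lookback):
    """Slice a 1-D price series into (samples, lookback, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32).unsqueeze(-1))

class BondLSTM(nn.Module):
    def __init__(self, hidden=32, layers=2):      # layers=2 gives a stacked LSTM
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])           # predict next price from last step

prices = np.cumsum(np.random.randn(500) * 0.1) + 100.0     # synthetic bond price series
X, y = windows((prices - prices.mean()) / prices.std(), lookback=7)  # 7-day window

model = BondLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):                          # full-batch training for brevity
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```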
Procedia PDF Downloads 137143 The Importance and Necessity for Acquiring Pedagogical Skills by the Practice Tutors for the Training of the General Nurses
Authors: Maria Luiza Fulga, Georgeta Truca, Mihaela Alexandru, Mariana Andriescu, Crin Marcean
Abstract:
Nursing is a subject of major significance in the post-secondary healthcare curriculum. We aimed to enable our students to assess the patient's risk, to establish prevention measures and to adapt to a specific learning context, in order to acquire the skills and abilities necessary for the nursing profession. To achieve these objectives, during the three years of study, teachers put an emphasis on acquiring communication skills, because in our country, after the first cycle of hospital accreditation concluded in 2016, the National Authority for Quality of Health Management introduced criteria for the implementation and application of the nursing process according to the accreditation standards. According to these requirements, the nurse has to carry out the nursing assessment, based on communication as a distinct component, so that they can identify nursing diagnoses and implement the nursing plan. In this respect, we, the teachers, have refocused, adopting various teaching strategies and preparing students for the real context of learning and applying what they learn. In the educational process, the tutors in the hospitals have an important role to play in the acquisition of professional skills. Students perform their activity in the hospital in accordance with the curriculum, in order to verify the practical applicability of the theoretical knowledge acquired in school classes, and they also have the opportunity to acquire their skills in a real learning context. In clinical education, the student nurse learns within a guidance team which includes a practice tutor, a nurse who takes responsibility for the practical/clinical learning of the students in their field of activity. In achieving this objective, the tutor's abilities involve pedagogical knowledge, knowledge for the good of the individual and nursing theory, in order to guide clinical practice in accordance with current requirements. The aim of this study is to find out the students' level of confidence in the practice tutors in hospitals, the students' degree of satisfaction with the pedagogical skills of the tutors, and the practical applicability of the theoretical knowledge. In this study, we used a student satisfaction questionnaire regarding clinical practice in the hospital; the survey sample consisted of 100 students aged between 20 and 50 years, from the first-, second- and third-year groups of the General Nurse specialty (nurses responsible for general care) at the 'Fundeni' Healthcare Post-Secondary School, Bucharest, Romania. Following the analysis of the data provided, we arrived at the conclusion that the hospital tutor needs to improve his/her pedagogical skills, knowledge of nursing diagnoses, and implementation of the nursing plan, so that the applicability of the theoretical notions is increased. Future plans include the pedagogical training of the medical staff, as well as updating the knowledge needed to implement the nursing process in order to meet current requirements. Keywords: clinical training, nursing process, pedagogical skills, tutor
Procedia PDF Downloads 162142 Redefining Doctors' Role in Terms of Medical Errors and Consumer Protection Act to Be in Line with Medical Ethics
Authors: Manushi Srivastava
Abstract:
Introduction: The doctor's role, and relation to patient care, is at the core of medical ethics. The rapid pace of medical advances, along with increasing consumer awareness of their rights and the rising cost of effective healthcare, demands a robust, transparent and patient-friendly medical care system. However, doctors' role performance is still framed by the activity-passivity model of the doctor-patient relationship (DPR), in which doctors act as parents and instruct their patients without their consent; this approach is not going to work in the 21st century. Thus, the current situation poses a new challenge to the traditional doctor-patient relationship after the introduction of the Consumer Protection Act (CPA) into the medical profession, as evidenced by the increasing number of medical litigation cases. To strengthen this system of medical services, the doctor plays a vital role, and that role should be reviewed in the present context. Objective: To understand the opinion of consultants regarding medical negligence and the effect of the Consumer Protection Act on current practices of patient care. Method: This is a cross-sectional study in which both quantitative and qualitative methods were applied. A total of 69 consultants were selected from multi-specialty hospitals of the densely populated Varanasi city, catering to a population of about 1.8 million. Two-stage sampling was used for the selection of respondents. At the first stage, the major wards (Medicine, Surgery, Ophthalmology, Gynaecology, Orthopaedics, and Paediatrics), which are more susceptible to medical negligence, were selected. At the second stage, consultants were selected from the respective wards. In-depth interviews were conducted with the help of a semi-structured schedule. Two case studies of medical negligence were also carried out as part of the qualitative study. Analysis: Data were analyzed with the help of SPSS software (21.0 trial version). A semi-structured research tool was used to elicit consultants' opinions about the pattern of medical negligence cases, litigations and claims made by the patient community, and the inclusion of government medical services in the CPA. Statistical analysis was done to describe the data, and non-parametric tests were used to observe associations between the variables. Analysis of verbatim responses was used in the case studies. Findings and Conclusion: A majority (92.8%) of consultants perceived changes in the behaviour of the patient community after the implementation of the CPA, as it had increased awareness of their rights. Less than half of the consultants opined that medical negligence is an unintentional act of doctors and generally occurs due to communication gaps and behavioural problems between doctor and patient. Experienced consultants (>10 years) pointed out that unethical practice by doctors and the mal-intention of patients to harass doctors were additional reasons for medical negligence. In-depth interviews revealed that the patient community now expects more transparency and hence demands a cafeteria approach in the diagnosis and management of cases. Thus, based on the study results, we propose an 'Agreement Model' of the DPR to re-ensure ethical practice in the medical profession. Keywords: doctors, communication, consumer protection act (CPA), medical error
Procedia PDF Downloads 159141 Learning from the Positive to Encourage Compliance with Workplace Health and Safety
Authors: Amy Williamson, Kerry Armstrong, Jason Edwards, Patricia Obst
Abstract:
Australian national policy endorses a responsive approach to work health and safety (WHS) regulation, combining positive motivators (education and guidance) with compliance monitoring and enforcement to encourage and secure compliance with legislation. Despite theoretical support for responsive regulation, there is limited evidence regarding how to achieve best results in practice. Using positive psychology as a novel paradigm, this study aims to investigate how non-punitive regulatory interactions can be improved to further encourage regulatory compliance in the construction industry. As part of a larger project, semi-structured interviews were conducted with 35 inspectorate staff and 11 managers in the Australian (Queensland) construction industry. Using an inductive, grounded approach, an in-depth qualitative investigation was conducted to identify the positive psychological principles which underpin effective use of the non-punitive aspects of responsive regulation. Results highlighted the importance of effective engagement between inspectors and industry managers. This involved the need to interact cooperatively and encourage compliance with WHS legislation. Several strategies were identified that assisted regulatory interactions and the ability of inspectors to engage. The importance of communication and interpersonal skills was reported to be critical to any interaction, regardless of the nature of the visit and the regulatory tools used. In particular, the use of clear and open communication fostered trust and rapport, which facilitated more positive interactions. The importance of respect and empathy was also highlighted. The need for provision of guidance and direction on how to achieve compliance was also reported. This related to ensuring companies understand their WHS obligations, providing specific advice regarding how to rectify a breach and meet compliance requirements, and ensuring sufficient follow-up to confirm that compliance is successfully achieved. In the absence of imminent risk, allowing companies the opportunity to comply before further action is taken was also highlighted. Increased proactive engagement with industry to educate and promote the vision of safety at work was also reported. Finally, provision of praise and positive feedback was reported to assist interactions and encourage the continuation of good practices. Evidence from positive psychology and organisational psychology was obtained to support the use of each strategy in practice. In particular, the area of positive leadership provided a useful framework to consider the factors and conditions that drive positive interactions within the context of work health and safety and the specific relationship between inspectors and industry managers. This study provides fresh insight into key psychological principles which support non-punitive regulatory interactions in the area of workplace health and safety. The findings of this research contribute to a better understanding of how inspectors can enhance the efficacy of their regulatory interactions to improve compliance with legislation. Encouraging and assisting compliance through effective non-punitive activity offers a sustainable pathway for promoting safety and preventing fatalities and injuries in the construction industry. Keywords: engagement, non-punitive approaches to compliance, positive interactions in the workplace, work health and safety compliance
Procedia PDF Downloads 152140 Effect of Organics on Radionuclide Partitioning in Nuclear Fuel Storage Ponds
Authors: Hollie Ashworth, Sarah Heath, Nick Bryan, Liam Abrahamsen, Simon Kellet
Abstract:
Sellafield has a number of fuel storage ponds, some of which have been open to the air for a number of decades. This has caused corrosion of the fuel, resulting in a release of some activity into solution, reduced water clarity, and accumulation of sludge at the bottom of the pond consisting of brucite (Mg(OH)2) and other uranium corrosion products. Both of these phases are also present as colloidal material. 90Sr and 137Cs are known to constitute a small volume of the radionuclides present in the pond but a large fraction of the activity; thus they are most at risk of challenging effluent discharge limits. Organic molecules are also present, since the ponds are open to the air, with occasional algal blooms restricting visibility further. The contents of the pond need to be retrieved and safely stored, but dealing with such a complex, undefined inventory poses a unique challenge. This work aims to determine and understand the sorption-desorption interactions of 90Sr and 137Cs with brucite and uranium phases, with and without the presence of organic molecules from chemical degradation and bio-organisms. The influence of organics on these interactions has not been widely studied. Partitioning of these radionuclides and organic molecules has been determined through LSC, ICP-AES/MS, and UV-vis spectrophotometry coupled with ultrafiltration, in both binary and ternary systems. Further detailed analysis of the surface and bonding environment of these components is being carried out through XAS techniques and PHREEQC modelling. Experiments were conducted in a CO2-free or N2 atmosphere across a high pH range in order to best simulate conditions in the pond. Humic acid used in the brucite systems demonstrated strong competition against 90Sr for the brucite surface, regardless of the order of addition of components. Variance of pH did have a small effect; however, this range (10.5-11.5) is close to the pHpzc of brucite, causing the surface to buffer the solution pH towards that value over the course of the experiment. Sorption of 90Sr to UO2 obeyed Ho's rate equation and demonstrated a slow second-order reaction with respect to the sharing of valence electrons from the strontium atom, with the initial rate clearly dependent on pH and the equilibrium concentration calculated at close to 100% sorption. No influence of humic acid was seen when it was introduced to these systems. Sorption of 137Cs to UO3 was significant, with more than 95% sorbed in just over 24 hours. Again, humic acid showed no influence when introduced into this system. Both brucite- and uranium-based systems will be studied with the incorporation of cyanobacterial cultures harvested at different stages of growth. Investigation of these systems provides insight into, and understanding of, the effect of organics on radionuclide partitioning to brucite and uranium phases at high pH. The majority of sorption-desorption work for radionuclides has been conducted at neutral to acidic pH values, and mostly without organics. These studies are particularly important for the characterisation of legacy wastes at Sellafield, with a view to their safe retrieval and storage. Keywords: caesium, legacy wastes, organics, sorption-desorption, strontium, uranium
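Ho's rate equation referred to above is the pseudo-second-order kinetic model, whose integrated form q(t) = k2*qe^2*t / (1 + k2*qe*t) can be fitted directly to uptake data to recover the equilibrium uptake qe and rate constant k2. The following is a minimal sketch assuming synthetic sorption data; the numbers and units are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def ho_pso(t, qe, k2):
    """Integrated form of Ho's pseudo-second-order model."""
    return k2 * qe**2 * t / (1 + k2 * qe * t)

# Synthetic 90Sr uptake on UO2 (illustrative units: q as % sorbed, t in hours)
t = np.array([0.5, 1, 2, 4, 8, 16, 24, 48, 72])
q = np.array([22, 37, 55, 71, 84, 92, 95, 98, 99]) \
    + np.random.default_rng(1).normal(0, 1, 9)     # add measurement noise

(qe_fit, k2_fit), _ = curve_fit(ho_pso, t, q, p0=[100.0, 0.01])
print(f"qe = {qe_fit:.1f}% sorbed, k2 = {k2_fit:.4f} 1/(%·h)")
```

A fitted qe close to 100% is consistent with the equilibrium concentration reported above; the initial rate in this model is k2*qe^2, which is where a pH dependence of the initial rate would appear.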
Procedia PDF Downloads 283139 High Impact Ecostratigraphic and Biostratigraphic Study of Amama-1 and Bara-1 Wells in Parts of Anambra Basin
Authors: J. O. Njoku, G. C. Soronnadi-Ononiwu, E. J. Acrra, C. C. Agoha, T. C. Anyawu
Abstract:
The high impact ecostratigraphic and biostratigraphic study of parts of the Anambra basin was carried out using samples from two exploration wells, Amama-1 (219 m–1829 m) and Bara-1 (317 m–1594 m). Palynological and paleontological analyses were carried out on 100 ditch-cutting samples. The faunal and floral successions were of terrestrial and marine origin, as described and logged. The wells penetrated four stratigraphic units in the Anambra Basin (the Nkporo, Mamu, Ajali and Nsukka) and yielded well-preserved foraminifera and palynomorphs. Amama-1 yielded 53 species of foraminifera and 69 species of palynomorphs, with 12 genera; Bara-1 yielded 25 species of foraminifera and 101 species of palynomorphs. Amama-1 permitted the recognition of 21 genera with 31 foraminiferal assemblage zones, 32 pollen and 37 spore assemblage zones, and dinoflagellate cyst biozonation, ranging from the late Campanian to the early Paleocene. Bara-1 yielded 60 pollen and 41 spore assemblage zones and 18 dinoflagellate cysts. The zones, in stratigraphically ascending order for the foraminifera and palynomorphs, are as follows. Foraminifera, Amama-1: Biozone A, Globotruncanella havanensis zone, Late Campanian–Maastrichtian (695–1829 m); Biozone B, Morozovella velascoensis zone, Early Paleocene (165–695 m). Bara-1: Biozone A, Globotruncanella havanensis zone, Late Campanian (1512 m); Biozone B, Bolivina afra–B. explicata zone, Maastrichtian (634–1204 m); Biozone C, indeterminate (305–634 m). Palynomorphs, Amama-1: A, Ctenolophonidites costatus zone, Early Maastrichtian (1829 m); B, Retidiporites miniporatus zone, Late Maastrichtian (1274 m); C, Constructipollenites ineffectus zone, Early Paleocene (695 m). Bara-1: A, Droseridites senonicus zone, Late Campanian (994–1600 m); B, Ctenolophonidites costatus zone, Early Maastrichtian (713–994 m); C, Retidiporites miniporatus zone, Late Maastrichtian (305–713 m). The paleo-environments of deposition were determined to range from non-marine to outer neritic. A detailed categorization of the palynomorphs into terrestrially derived and marine derived forms, based on the distribution of three broad vegetational types (mangrove, freshwater swamp and hinterland communities), was used to evaluate sea level fluctuations with respect to the sediments deposited in the basin and linked to particular depositional system tracts. Amama-1 recorded four maximum flooding surfaces (MFS) between 165 m and 1829 m, dated between 61 Ma and 76 Ma, and three sequence boundaries (SB) at 1048 m, 1533 m and 1581 m; Bara-1 recorded maximum flooding surfaces between 634 m and 1387 m, dated between 69.5 Ma and 82 Ma, and four sequence boundaries (SB) at 552–876 m, dated between 68 Ma and 77.5 Ma, respectively. The ecostratigraphic description is characterised by the prominent expansion of the hinterland component, consisting of mangrove to lowland rainforest and Afromontane–savannah vegetation. Keywords: foraminifera, palynomorphs, Campanian, Maastrichtian, ecostratigraphic, Anambra
Procedia PDF Downloads 28137 The Influence of Mechanical and Physicochemical Characteristics of Perfume Microcapsules on Their Rupture Behaviour and How This Relates to Performance in Consumer Products
Authors: Andrew Gray, Zhibing Zhang
Abstract:
The ability of consumer products to deliver a sustained perfume response can be a key driver for a variety of applications. Many compounds in perfume oils are highly volatile, meaning they readily evaporate once the product is applied, and the longevity of the scent is poor. Perfume capsules have been introduced as a means of abating this evaporation once the product has been delivered. The impermeable capsules are intended to be stable within the formulation and to remain intact during delivery to the desired substrate, rupturing to release the core perfume oil only when mechanical force is applied by the consumer. This opens up the possibility of obtaining an olfactive response hours, weeks or even months after delivery, depending on the nature of the desired application. Tailoring the properties of the polymeric capsules to better address the needs of the application is not a trivial challenge, and capsule design is currently largely done by trial and error. The aim of this work is to provide more predictive methods for capsule design depending on the consumer application. This means refining formulations such that they rupture at the right time for the specific consumer application: not too early, not too late. Finding the right balance between these extremes is essential if a benefit over the neat addition of perfume to formulations is sought. It is important to understand the forces that influence capsule rupture: first, by quantifying the magnitude of these different forces, and then by assessing bulk rupture in real-world applications to understand how capsules actually respond. Samples were provided by an industrial partner, and the mechanical properties of individual capsules within the samples were characterized via a micromanipulation technique developed by Professor Zhang at the University of Birmingham. The capsules were synthesized so as to change one particular physicochemical property at a time, such as the core:wall material ratio or the average size of the capsules. Analysis of shell thickness via transmission electron microscopy, of size distribution via a Mastersizer, and a variety of other techniques confirmed that only one physicochemical property was altered in each sample. The mechanical analysis was subsequently undertaken, showing the effect that changing particular capsule properties had on the response under compression. It was, however, important to link this fundamental mechanical response to capsule performance in real-world applications. As such, the capsule samples were introduced into a formulation and exposed to full-scale stresses. GC-MS headspace analysis of the perfume oil released from broken capsules enabled quantification of what the relative strengths of capsules truly mean for product performance. Correlations have been found between the mechanical strength of capsule samples and performance in terms of perfume release in consumer applications. A better understanding of the key parameters that drive performance benefits the design of future formulations by offering better guidelines on the parameters that can be adjusted without harming performance, and it singles out those parameters that are essential in finding the sweet spot for capsule performance. Keywords: consumer products, mechanical and physicochemical properties, perfume capsules, rupture behaviour
Procedia PDF Downloads 131136 New Findings on the Plasma Electrolytic Oxidation (PEO) of Aluminium
Authors: J. Martin, A. Nominé, T. Czerwiec, G. Henrion, T. Belmonte
Abstract:
Plasma electrolytic oxidation (PEO) is a particular electrochemical process used to produce protective oxide ceramic coatings on lightweight metals (Al, Mg, Ti). When applied to aluminium alloys, the resulting PEO coatings exhibit improved wear and corrosion resistance because thick, hard, compact and adherent crystalline alumina layers can be achieved. Several investigations have been carried out to improve the efficiency of the PEO process, and one particular way consists in tuning a suitable electrical regime. Despite the considerable interest in this process, there is still no clear understanding of the underlying discharge mechanisms that make metal oxidation possible up to hundreds of µm through the ceramic layer. A key parameter that governs the PEO process is the numerous short-lived micro-discharges (micro-plasmas in liquid) that occur continuously over the processed surface when the high applied voltage exceeds the critical dielectric breakdown value of the growing ceramic layer. By using a bipolar pulsed current to supply the electrodes, we previously observed that micro-discharges are delayed with respect to the rising edge of the anodic current. Nevertheless, the origin of this phenomenon is still not clearly explained and needs more systematic investigation. The aim of the present communication is to identify the relationship between this delay and the mechanisms responsible for the oxide growth. For this purpose, the delay of micro-discharge ignition is investigated as a function of various electrical parameters, such as the current density (J), the current pulse frequency (F) and the anodic-to-cathodic charge quantity ratio (R = Qp/Qn) delivered to the electrodes. The PEO process was conducted on Al2214 aluminium alloy substrates in a solution containing potassium hydroxide (KOH) and sodium silicate diluted in deionized water. The light emitted from the micro-discharges was detected by a photomultiplier, and the micro-discharge parameters (number, size, lifetime) were measured during the process by means of ultra-fast video imaging (125,000 frames per second). SEM observations and roughness measurements were performed to characterize the morphology of the elaborated oxide coatings, while XRD was carried out to evaluate the amount of the corundum α-Al2O3 phase. Results show that, whatever the applied current waveform, the delay of micro-discharge appearance increases as the process goes on. Moreover, the delay is shorter when the current density J (A/dm2), the current pulse frequency F (Hz) and the charge quantity ratio R are high. It also appears that shorter delays are associated with stronger micro-discharges (localized, long and large micro-discharges), which have a detrimental effect on the elaborated oxide layers (thin and porous). On the basis of these results, a model for the growth of the PEO oxide layers will be presented and discussed. The experimental results support a mechanism of electrical charge accumulation at the oxide surface/electrolyte interface until the dielectric breakdown occurs and micro-discharges appear. Keywords: aluminium, micro-discharges, oxidation mechanisms, plasma electrolytic oxidation
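As a sketch of how the ignition delay could be extracted from synchronized current and photomultiplier records, the following detects the rising edge of each anodic pulse and the first subsequent light emission. The thresholds, sampling rate and synthetic waveforms are assumptions for illustration; the authors' actual signal processing chain is not described in the abstract.

```python
import numpy as np

def ignition_delays(t, current, pmt, i_thresh, pmt_thresh):
    """For each anodic current pulse, return the delay (s) between the rising
    edge of the current and the first light detected by the photomultiplier."""
    rising = np.flatnonzero((current[:-1] < i_thresh) & (current[1:] >= i_thresh)) + 1
    delays = []
    for k, start in enumerate(rising):
        end = rising[k + 1] if k + 1 < len(rising) else len(t)
        lit = np.flatnonzero(pmt[start:end] >= pmt_thresh)
        if lit.size:                      # some pulses may show no discharge
            delays.append(t[start + lit[0]] - t[start])
    return np.array(delays)

# Synthetic example: 100 Hz anodic half-cycles sampled at 1 MHz, light ~1.2 ms late
fs, f_pulse = 1_000_000, 100
t = np.arange(0, 0.1, 1 / fs)
current = 10 * np.clip(np.sign(np.sin(2 * np.pi * f_pulse * t)), 0, 1)
pmt = np.zeros_like(t)
for start in np.flatnonzero(np.diff(current) > 0):
    pmt[start + 1200 : start + 3000] = 1.0   # light appears 1.2 ms after pulse foot
print(f"mean delay = {ignition_delays(t, current, pmt, 5.0, 0.5).mean()*1e3:.2f} ms")
```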
Procedia PDF Downloads 264135 Fault Diagnosis and Fault-Tolerant Control of Bilinear Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings
Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun
Abstract:
Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-tolerant control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system; its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behaviour. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible", in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system. However, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, it inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, making it also possible to estimate the magnitude of the fault. Once the fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method. Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zones building
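To illustrate the residual-generation idea, the following sketch simulates a toy bilinear model with an additive actuator fault and runs an observer that inherits the plant's bilinear structure; the residual stays near zero until the fault appears. The matrices, the observer gain and the two-zone interpretation are illustrative assumptions, not the building model of the paper.

```python
import numpy as np

# Toy bilinear thermal model: x' = A x + u*(N x) + B u + F f,  y = C x
# (the bilinear term u*(N x) mimics flow-rate-modulated heat exchange in HVAC)
A = np.array([[-0.02, 0.01], [0.01, -0.03]])
N = np.array([[-0.05, 0.0], [0.0, -0.05]])
B = np.array([[0.5], [0.4]])
F = np.array([[1.0], [0.0]])             # actuator fault enters zone 1
C = np.eye(2)
L = 0.5 * np.eye(2)                      # observer gain (assumed stabilizing)

dt, n = 1.0, 2000
x = np.array([[1.0], [-0.5]])            # plant starts away from the estimate
xh = np.zeros((2, 1))
residual = np.zeros(n)
for k in range(n):
    u = 0.5 + 0.2 * np.sin(2 * np.pi * k / 500)   # known control input
    f = 0.3 if k > 1000 else 0.0                  # fault appears at k = 1000
    x = x + dt * (A @ x + u * (N @ x) + B * u + F * f)
    y = C @ x
    # Bilinear observer: same structure as the plant, driven by the output error
    xh = xh + dt * (A @ xh + u * (N @ xh) + B * u + L @ (y - C @ xh))
    residual[k] = np.linalg.norm(y - C @ xh)
print(f"residual before fault: {residual[900]:.4f}, after fault: {residual[1900]:.4f}")
```

Thresholding this residual gives fault detection, and its steady-state value scales with the fault magnitude, which is what the second (reconfiguration) level of the FTC scheme consumes.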
Procedia PDF Downloads 173