804 Closed Mitral Valvotomy: A Safe and Promising Procedure
Authors: Sushil Kumar Singh, Kumar Rahul, Vivek Tewarson, Sarvesh Kumar, Shobhit Kumar
Abstract:
Objective: Rheumatic mitral stenosis continues to be a major public health problem in developing countries. Diastolic dysfunction occurs when the left atrium (LA) is unable to fill the left ventricle (LV) at normal LA pressures due to impaired relaxation and impaired compliance. The assessment of LV diastolic function and filling pressures is of clinical importance for identifying underlying cardiac disease, guiding its treatment, and assessing prognosis. 2D echocardiography can detect diastolic dysfunction with excellent sensitivity and minimal risk compared to the gold standard of invasive pressure-volume measurements. Material and Method: This was a one-year study of twenty-nine patients with isolated severe rheumatic mitral stenosis. Data were analyzed preoperatively and postoperatively (at one-month follow-up). The transthoracic 2D echocardiographic parameters of diastolic function were transmitral flow, pulmonary venous flow, mitral annular tissue Doppler, and color M-mode Doppler. In our study, mitral valve orifice area, ejection fraction, deceleration time, E/A ratio, E/E’ ratio, myocardial performance index of the left ventricle (Tei index), and mitral inflow propagation velocity were included in the echocardiographic evaluation. Statistical analysis was performed with SPSS Version 15.0. Result: Twenty-nine patients underwent successful closed mitral commissurotomy for isolated mitral stenosis. Outcome measures were recorded preoperatively and at one-month follow-up. The majority of patients were in NYHA grade III (69.0%) preoperatively, improving to NYHA grade I (48.3%) after closed mitral commissurotomy. Post-surgery, mitral valve area increased from 0.77 ± 0.13 to 2.32 ± 0.26 cm², and ejection fraction increased from 61.38 ± 4.61 to 64.79 ± 3.22%.
There was a decrease in deceleration time from 231.55 ± 49.31 to 168.28 ± 14.30 ms, in E/A ratio from 1.70 ± 0.54 to 0.89 ± 0.39, and in E/E’ ratio from 14.59 ± 3.34 to 8.86 ± 3.03. In addition, there was improvement in the Tei index from 0.50 ± 0.03 to 0.39 ± 0.06 and in mitral inflow propagation velocity from 47.28 ± 3.71 to 57.86 ± 3.19 cm/sec. Peri-operatively and at follow-up, there was no incidence of severe mitral regurgitation (MR), no thromboembolic incident, and no mortality.
Keywords: closed mitral valvotomy, mitral stenosis, open mitral commissurotomy, balloon mitral valvotomy
803 Fahr Disease vs Fahr Syndrome in the Field of a Case Report
Authors: Angelis P. Barlampas
Abstract:
Objective: The confusion of terms is common in many situations of everyday life, but in some circumstances, such as in medicine, the precise meaning of a word carries a critical role for the health of the patient. Fahr disease and Fahr syndrome are often falsely used interchangeably, but they are two different conditions with different natural histories, different etiologies, and different medical management. A case of the rare Fahr disease is presented, followed by a comparison with the more common Fahr syndrome. Materials and method: A 72-year-old patient came to the emergency department complaining of nonspecific mental disturbances, such as anxiety, difficulty concentrating, and tremor. The problems had a long course, but he had the impression that they were getting worse lately, so he decided to have them checked. Past history and laboratory tests were unremarkable. A computed tomography examination was then ordered. Results: The CT exam showed bilateral, hyperattenuating areas of heavy, dense calcium-type deposits in the basal ganglia, striatum, pallidum, thalami, the dentate nucleus, and the cerebral white matter of the frontal, parietal, and occipital lobes, as well as small areas of the pons. Taking into account the absence of any known preexisting illness and the fact that the emergency laboratory tests were without findings, the rare Fahr disease was hypothesized. The suspicion was confirmed with further, more specific tests, which excluded any other condition that could share the same radiological image. Differentiating between Fahr disease and Fahr syndrome: Fahr disease is primarily autosomal dominant, with symmetrical and bilateral intracranial calcifications; the patient is healthy until middle age; biochemical abnormalities are absent; and the family history is consistent with autosomal dominant inheritance. Fahr syndrome presents earlier, between 30 and 40 years old, though it can appear at any age; it also shows symmetrical and bilateral intracranial calcifications, but it is accompanied by endocrinopathies (idiopathic hypoparathyroidism, secondary hypoparathyroidism, hyperparathyroidism, pseudohypoparathyroidism, pseudopseudohypoparathyroidism, etc.) and by abnormal laboratory or imaging findings. Conclusion: Fahr disease and Fahr syndrome are not the same illness, although this is not well known to inexperienced doctors. As clinical radiologists, we have to inform our colleagues when a radiological image, along with the patient's history, probably implies a rare condition rather than something more common, and prompt the investigation onto the right route. In our case, a genetic test could have been done earlier to reveal the problem, thus avoiding unnecessary specific tests that cost time and are uncomfortable for the patient.
Keywords: Fahr disease, Fahr syndrome, CT, brain calcifications
802 Reinventing Business Education: Filling the Knowledge Gap on the Verge of the 4th Industrial Revolution
Authors: Elena Perepelova
Abstract:
As the world approaches the 4th industrial revolution, income inequality has become one of the major societal concerns. Displacement of workers by technology is becoming a reality, and in return, new skills and competencies are required. More than ever, education needs to help individuals understand the wider world around them and make global connections. The author argues for the necessity of incorporating business, economics, and finance studies into primary education and of offering access to business education to the general population, with the primary objective of understanding how the world functions. The paper offers a fresh look at existing business theory through an innovative program called 'Usefulnomics'. Realizing that the subjects of economics, finance, and business are perceived as overwhelming by a large part of the population, the author has taken a holistic approach and created a program that simplifies the definitions of existing concepts and shifts from the traditional breakdown into subjects and specialties to a teaching method based exclusively on real-life case studies and group debates, in order to better grasp the concepts and put them into context. The paper's findings are the result of a two-year project and experimental work with students from the UK, the USA, Malaysia, Russia, and Spain. The author conducted extensive research through online and in-person classes and workshops, as well as in-depth interviews with primary and secondary school students, to assess their understanding of what a business is, how businesses operate, and the role businesses play in their communities. The findings clearly indicate that students of all ages often understood business concepts and processes only in an intuitive way, which resulted in misconceptions and gaps in knowledge.
While knowledge gaps were easier to identify and correct in primary school students, as students’ age increased, the learning process became distorted by career choices, political views, and the students’ actual (or perceived) economic status. While secondary school students recognized more concepts, their real understanding was often on par with that of upper primary school students. The research has also shown that a lack of correct vocabulary created a strong barrier to communication and to real-life application or further learning. Based on these findings, each key business concept was practiced and put into context with small groups of students in order to design content and a format that would be well accepted and understood by the target group. As a result, the final learning program package was based on case studies from daily modern life and used a wide range of examples: from popular brands and well-known companies to basic commodities. In the final stage, the content and format were put into practice in larger classrooms. The author would like to share the key findings from the research and the resulting learning program, as well as present new ideas on how the program could be further enriched and adapted so that schools and organizations can deliver it.
Keywords: business, finance, economics, lifelong learning, XXI century skills
801 Modeling Curriculum for High School Students to Learn about Electric Circuits
Authors: Meng-Fei Cheng, Wei-Lun Chen, Han-Chang Ma, Chi-Che Tsai
Abstract:
The recent K–12 Taiwan Science Education Curriculum Guidelines emphasize the essential role of modeling curricula in science learning; however, few modeling curricula have been designed and adopted in current science teaching. Therefore, this study aims to develop a modeling curriculum on electric circuits, to investigate any learning difficulties students have with the modeling curriculum, and to further enhance modeling teaching. This study was conducted with 44 10th-grade students in Central Taiwan. Data collection included the students’ understanding of models in science (SUMS) survey, which explored the students' epistemology of scientific models and modeling, and a complex circuit problem to investigate the students’ modeling abilities. Data analysis included the following: (1) Paired-sample t-tests were used to examine the improvement of students’ modeling abilities and conceptual understanding before and after the curriculum was taught. (2) Paired-sample t-tests were also utilized to determine the students’ modeling abilities before and after the modeling activities, and a Pearson correlation was used to understand the relationship between students’ modeling abilities during the activities and on the posttest. (3) ANOVA was used during different stages of the modeling curriculum to investigate the differences between the students who developed microscopic models and those who developed macroscopic models after the modeling curriculum was taught. (4) Independent-sample t-tests were employed to determine whether the students who changed their models had significantly different understandings of scientific models than the students who did not. The results revealed the following: (1) After the modeling curriculum was taught, the students had made significant progress in both their understanding of the science concepts and their modeling abilities.
In terms of science concepts, the modeling curriculum helped the students overcome the misconception that electric current is reduced after flowing through light bulbs. In terms of modeling abilities, it helped students employ macroscopic or microscopic models to explain the phenomena they observed. (2) Encouraging the students to explain scientific phenomena under different context prompts during the modeling process allowed them to convert their models to microscopic models, but it did not help them employ microscopic models continuously throughout the whole curriculum. The students only consistently employed microscopic models once they had help visualizing them. (3) During the modeling process, the students who revised their own models understood better than those who did not that models can be changed. Also, the students who revised their models to explain different scientific phenomena tended to regard models as explanatory tools. In short, this study explored different strategies to facilitate students’ modeling processes, as well as their difficulties with the modeling process. The findings can be used to design and teach modeling curricula and help students enhance their modeling abilities.
Keywords: electric circuits, modeling curriculum, science learning, scientific model
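The paired analyses described above can be sketched as follows. This is a numpy-only illustration: the scores are simulated stand-ins for the study's data (which are not reproduced here), and the 2.02 threshold is the two-tailed 5% critical value of the t-distribution for df = 43.

```python
import numpy as np

# Simulated pre/post scores for 44 students -- illustrative stand-ins,
# not the study's actual data.
rng = np.random.default_rng(0)
pre = rng.normal(50.0, 10.0, 44)
post = pre + rng.normal(8.0, 5.0, 44)   # assumed average improvement

# Paired-sample t statistic on the score differences.
d = post - pre
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Two-tailed 5% critical value for df = 43 is about 2.02, so
# |t| > 2.02 indicates a significant change.
significant = abs(t_stat) > 2.02

# Pearson correlation between the two sets of scores.
r = np.corrcoef(pre, post)[0, 1]

print(f"t = {t_stat:.2f}, significant = {significant}, r = {r:.2f}")
```

A library routine such as `scipy.stats.ttest_rel` would additionally return the exact p-value; the hand computation above is shown only to make the test's mechanics explicit.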
800 System Devices to Reduce Particulate Matter Concentrations in Railway Metro Systems
Authors: Armando Cartenì
Abstract:
Within sustainable transportation engineering, the problem of reducing particulate matter (PM) concentrations in railway metro systems has not been much discussed. It is well known that PM levels in railway metro systems are mainly produced by mechanical friction at the rail-wheel-brake interfaces and by PM re-suspension caused by the turbulence generated by train passage, which poses dangerous problems for passenger health. Starting from these considerations, the aim of this research was twofold: i) to investigate particulate matter concentrations in a ‘traditional’ railway metro system; ii) to investigate particulate matter concentrations in a ‘high quality’ metro system equipped with design devices useful for reducing PM concentrations: platform screen doors, rubber-tyred rolling stock, and an advanced ventilation system. Two measurement surveys were performed: one in the ‘traditional’ metro system of Naples (Italy) and another in the ‘high quality’ rubber-tyred metro system of Turin (Italy). Experimental results for the ‘traditional’ metro system of Naples show that the average PM10 concentrations measured at the underground station platforms are very high, ranging between 172 and 262 µg/m³, while the average PM2.5 concentrations range between 45 and 60 µg/m³, with dangerous implications for passenger health.
By contrast, the measurement results for the ‘high quality’ metro system of Turin show that: i) the average PM10 (PM2.5) concentration measured at the underground station platform is 22.7 µg/m³ (16.0 µg/m³), with a standard deviation of 9.6 µg/m³ (7.6 µg/m³); ii) the indoor concentrations (both PM10 and PM2.5) are statistically lower than those measured outdoors (with a ratio of 0.9-0.8), meaning that the indoor air quality is better than that of the urban ambient air; iii) PM concentrations in underground stations are correlated with train passages; iv) the inside-train concentrations (both PM10 and PM2.5) are statistically lower than those measured at the station platform (with a ratio of 0.7-0.8), suggesting that inside the trains the air conditioning system promotes a circulation that cleans the air. The comparison between the two case studies shows that a metro system designed with PM reduction devices can reduce PM concentrations by up to 11 times compared with a ‘traditional’ one. From these results, it is possible to conclude that PM concentrations measured in a ‘high quality’ metro system are significantly lower than those measured in a ‘traditional’ railway metro system. This result provides the basis for the design of useful devices for retrofitting metro systems all around the world.
Keywords: air quality, pollutant emission, quality in public transport, underground railway, external cost reduction, transportation planning
799 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance
Authors: George Zhou, Yunchan Chen, Candace Chien
Abstract:
Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope at 6 different locations along each patient’s AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal), with labels validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF. For example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes, in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively.
Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values should not matter, since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum.
Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning
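The scaled ordinal encoding just described can be sketched as follows. The feature vector here is a random stand-in for the flattened ViT spectrogram features, and the dictionary keys are shorthand for the six AVF locations; only the code-times-scale concatenation mirrors the paper's scheme.

```python
import numpy as np

# Location codes following the ordering reported above.
LOCATION_CODES = {"artery": 0, "arch": 1, "proximal": 2,
                  "middle": 3, "distal": 4, "anastomosis": 5}

def encode_with_location(features, location, scale=100):
    """Concatenate a scaled integer location code to a feature vector."""
    code = LOCATION_CODES[location] * scale
    return np.concatenate([features, [float(code)]])

rng = np.random.default_rng(0)
feat = rng.normal(size=8)   # stand-in for flattened spectrogram features
x = encode_with_location(feat, "anastomosis", scale=100)
print(x[-1])   # 500.0 -- the best-performing scheme's encoding
```

With `scale=1` this reduces to plain ordinal encoding; the paper's finding is that larger scales (10, 100) gave progressively better AuROC/AuPRC under limited data.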
798 Detailed Analysis of Mechanism of Crude Oil and Surfactant Emulsion
Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel
Abstract:
A number of surfactants that exhibit ultra-low interfacial tension (IFT) and excellent microemulsion phase behavior with crude oils of low to medium gravity are not sufficiently soluble at optimum salinity to produce stable aqueous solutions. Such solutions often show phase separation after a few days at reservoir temperature, which defeats the purpose, and this time is short compared to the residence time in a reservoir for a surfactant flood. The addition of polymer often exacerbates the problem, although the poor stability of the surfactant at high salinity remains the pivotal issue. Surfactants such as SDS and CTAB with large hydrophobes produce the lowest IFT but are often not sufficiently water soluble at the desired salinity. Hydrophilic co-solvents and/or co-surfactants are needed to make the surfactant-polymer solution stable at the desired salinity. This study focuses on contrasting the effect of the addition of a co-solvent on the stability of a surfactant-oil emulsion. The idea is to use a co-surfactant to increase the stability of the emulsion. Stability is enhanced through the creation of a micro-emulsion, which is verified both visually and with a particle size analyzer at varying concentrations of salinity, surfactant, and co-surfactant. A lab-experimental method is described in detail to permit readers to reproduce all results. The stability of the oil-water emulsion is observed with respect to time, temperature, salinity of the brine, and concentration of the surfactant. The nonionic surfactant TX-100, when used as a co-surfactant, increases the stability of the oil-water emulsion. The stability of the prepared emulsion is checked by observing the particle size distribution: for a stable emulsion, the peak of the volume% vs. particle size curve should occur at a particle size of 5-50 nm, while for an unstable emulsion larger particles are observed.
UV-visible spectroscopy is also used to visualize the fraction of oil that plays an important role in the formation of micelles in a stable emulsion. This is important, as the study will help us decide the applicability of the surfactant-based EOR method for a reservoir containing a specific type of crude. The use of a nonionic surfactant as a co-surfactant would also increase the efficiency of surfactant EOR. With the decline in oil discoveries during recent decades, it is believed that EOR technologies will play a key role in meeting the energy demand in years to come. Taking this into consideration, the work focuses on the optimization of secondary recovery (water flooding) with the help of surfactants and/or co-surfactants by creating the desired conditions in the reservoir.
Keywords: co-surfactant, enhanced oil recovery, micro-emulsion, surfactant flooding
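The stability criterion described above (the volume-% peak of the particle-size distribution falling at 5-50 nm) can be sketched as a simple check. The two distributions below are invented illustrations, not measured data:

```python
import numpy as np

def is_stable_emulsion(sizes_nm, volume_pct):
    """Classify stability by the criterion stated in the abstract:
    the volume-% peak of the particle-size distribution should
    fall at a particle size of 5-50 nm."""
    peak_size = sizes_nm[np.argmax(volume_pct)]
    return bool(5.0 <= peak_size <= 50.0)

sizes = np.array([5, 10, 20, 50, 100, 500, 1000.0])    # nm
stable_dist   = np.array([2, 8, 40, 30, 10, 5, 5.0])   # peak at 20 nm
unstable_dist = np.array([1, 2, 5, 10, 20, 40, 22.0])  # peak at 500 nm

print(is_stable_emulsion(sizes, stable_dist))    # True
print(is_stable_emulsion(sizes, unstable_dist))  # False
```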
797 Reducing the Impact of Pathogenic Fungi on Barley Using Bacteria: Bacterial Biocontrol in the Barley-Malt-Beer Industry
Authors: Eusèbe Gnonlonfoun, Xavier Framboisier, Michel Fick, Emmanuel Rondags
Abstract:
Pathogenic fungi represent a generic problem for cereals, including barley, as they can produce a number of thermostable toxic metabolites, such as mycotoxins, that contaminate plants and food products, leading to serious health issues for humans and animals and causing significant losses in global food production. In addition, mycotoxins represent a significant technological concern for the malting and brewing industries, as they may affect the quality and safety of raw materials (barley and malt) and final products (beer). Moreover, the situation is worsening due to highly variable climatic conditions that favor microbial development and the societal desire to reduce the use of phytosanitary products, including fungicides. In this complex environmental, regulatory, and economic context for the French barley-malt-beer industry, this project aims to develop an innovative biocontrol process using technological bacteria, isolated from infection-resistant barley cultures, that are able to reduce the development of spoilage fungi and the associated mycotoxin production. The experimental approach consists of: i) coculturing bacterial and pathogenic fungal strains in solid and liquid media to assess the growth kinetics of these microorganisms and to evaluate the impact of the bacteria on fungal growth and mycotoxin production; ii) using the results to carry out a micro-malting process in order to develop the aforementioned biocontrol process; and iii) evaluating the technological and sanitary properties of the resulting barley malts in order to validate the process. The process is expected to guarantee, at controlled cost, an irreproachable hygienic and technological quality of the malt, despite increasingly complex and variable conditions for barley production.
Thus, the results will not only make it possible to maintain the dominant world position of the French barley-malt chain but will also allow it to conquer emerging markets, mainly in Africa and Asia. The use of this process will also contribute to reducing the use of phytosanitary products in the field for barley production, while reducing the level of contamination of malting plant effluents. Its environmental impact would therefore be significant, especially considering that barley is the fourth most-produced cereal in the world.
Keywords: barley, pathogenic fungi, mycotoxins, malting, bacterial biocontrol
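The coculture growth kinetics in step i) could be modeled, as one plausible sketch, with a Lotka-Volterra competition model in which the bacterial strain suppresses fungal growth. All parameter values below are illustrative assumptions, not measured rates from the project.

```python
def simulate_coculture(r_f=0.30, r_b=0.50, K=1.0,
                       alpha=1.6, beta=0.4, dt=0.1, t_end=100.0):
    """Euler integration of a Lotka-Volterra competition model:
    F = fungal biomass, B = bacterial biomass (arbitrary units).
    alpha > 1 means the bacteria suppress fungal growth strongly."""
    F, B = 0.01, 0.01
    for _ in range(int(t_end / dt)):
        dF = r_f * F * (1.0 - (F + alpha * B) / K)
        dB = r_b * B * (1.0 - (B + beta * F) / K)
        F, B = max(F + dF * dt, 0.0), max(B + dB * dt, 0.0)
    return F, B

F_co, B_co = simulate_coculture()             # coculture
F_alone, _ = simulate_coculture(alpha=0.0)    # fungus unaffected by bacteria
print(F_co < F_alone)   # True: bacteria reduce the final fungal biomass
```

Fitting such a model to the measured solid- and liquid-media kinetics would be one way to quantify the inhibition strength (`alpha`) of each candidate bacterial strain.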
796 The Interaction of Climate Change and Human Health in Italy
Authors: Vito Telesca, Giuseppina A. Giorgio, M. Ragosta
Abstract:
The effects of extreme heat events have increased in recent years, forcing humans to adjust to adverse climatic conditions. The impact of weather on human health has acquired public health significance, especially in light of climate change and the rising frequency of devastating weather events (e.g., heat waves and floods), and the interest of the scientific community is correspondingly strong. In particular, the associations between temperature and mortality are well studied. Weather conditions are natural factors that affect the human organism. Recent works show that the temperature threshold at which an impact is seen varies by geographic area and season. These results suggest that heat warning criteria should consider local thresholds, to account for acclimatization to the local climatology as well as the seasonal timing of a forecasted heat wave. Hence the importance of the problem known as ‘local warming’, which is preventable with adequate warning tools and effective emergency planning. Since climate change has the potential to increase the frequency of these types of events, improved heat warning systems are urgently needed. This would require better knowledge of the full impact of extreme heat on morbidity and mortality. The majority of researchers who analyze the associations between human health and weather variables investigate the effect of air temperature and of bioclimatic indices. These indices combine air temperature, relative humidity, and wind speed and are very important for determining human thermal comfort. Health impact studies of weather events have shown that prevention is essential to dramatically reduce the impact of heat waves. The Italian summer of 2012 was characterized by high average temperatures (+2.3 °C relative to the 1971-2000 reference period), enough to be considered the second hottest summer since 1800.
Italy was the first country in Europe to adopt tools to predict these phenomena 72 hours in advance (Heat Health Watch Warning System - HHWWS). Furthermore, in Italy the heat alert criteria rely on different indexes, for example the apparent temperature, the Scharlau index, the thermohygrometric index, etc. This study examines the importance of developing public health policies that protect the people most vulnerable to extreme temperatures (such as the elderly), highlighting the factors that confer susceptibility.
Keywords: heat waves, Italy, local warming, temperature
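As an illustration of a bioclimatic index combining temperature, humidity, and wind, the sketch below computes Steadman's apparent temperature in its non-radiation form (the version used, e.g., by the Australian Bureau of Meteorology); this is one of several index formulations in use and is shown here only as an example, not as the specific criterion of the HHWWS.

```python
import math

def apparent_temperature(ta_c, rh_pct, wind_ms):
    """Steadman's apparent temperature (non-radiation form):
        AT = Ta + 0.33*e - 0.70*ws - 4.00
    where Ta is air temperature (degC), ws wind speed (m/s), and e the
    water vapour pressure (hPa) derived from relative humidity."""
    e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * ta_c / (237.7 + ta_c))
    return ta_c + 0.33 * e - 0.70 * wind_ms - 4.00

# A hot, humid, still summer day "feels" hotter than the air temperature:
print(round(apparent_temperature(34.0, 60.0, 1.0), 1))   # ~39.8 degC
```

The example shows why humidity matters for heat warnings: at 34 °C and 60% relative humidity with little wind, the perceived temperature is several degrees above the measured one.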
795 A Geometric Based Hybrid Approach for Facial Feature Localization
Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik
Abstract:
Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications from both security and non-security perspectives. It provides a secure solution for the identification and verification of a person's identity. Although other biometric methods such as fingerprint scans and iris scans are available, FRT has proven an efficient technology for its user-friendliness and contact-free operation. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition, but certain factors make it a challenging task. On the human face, expressions arise from subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations of facial landmarks and in their usual shapes, sometimes creating occlusions in facial feature areas that make face recognition a difficult problem. This paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses thresholding, sequential searching, and other image processing techniques to locate the landmark points on the face. Also, a Graphical User Interface (GUI) based software tool is designed that can automatically detect 16 landmark points around the eyes, nose, and mouth that are most affected by changes in the facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases, as well as on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy.
The method has a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database, and 93.05% on the DeitY-TU database. We have also carried out a comparative study of our proposed method against techniques developed by other researchers. In the future, the located features will put the focus on emotion-oriented systems through action unit (AU) detection.
Keywords: biometrics, face recognition, facial landmarks, image processing
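The threshold-and-sequential-search idea mentioned above can be illustrated with a toy sketch. This is not the authors' algorithm: it simply binarizes dark pixels and scans rows to find blob centroids on a synthetic image, to show the general mechanics.

```python
import numpy as np

def locate_dark_regions(gray, thresh=60, min_pixels=4):
    """Toy threshold-and-sequential-search localization: binarize dark
    pixels, group consecutive dark rows into bands, and return one
    (row, col) centroid per band. A real system would add geometric
    constraints around the eyes, nose, and mouth."""
    mask = gray < thresh
    rows = np.flatnonzero(mask.any(axis=1))
    centroids = []
    if rows.size == 0:
        return centroids
    bands = np.split(rows, np.flatnonzero(np.diff(rows) > 1) + 1)
    for band in bands:
        ys, xs = np.nonzero(mask[band[0]:band[-1] + 1])
        if xs.size >= min_pixels:
            centroids.append((band[0] + ys.mean(), xs.mean()))
    return centroids

# Synthetic 20x20 "face": a dark eye blob and a dark mouth blob
# on a bright background.
img = np.full((20, 20), 200, dtype=np.uint8)
img[5:8, 4:7] = 30     # eye region
img[14:17, 7:13] = 30  # mouth region
centers = locate_dark_regions(img)
print(centers)   # two centroids, one per blob
```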
794 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants
Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver
Abstract:
This paper briefly describes three main actions on a Waste Water Treatment Plant (WWTP) to reduce its energy consumption: optimization of the biological reactor in the aeration stage, by including new control algorithms and introducing new efficient equipment; installation of an innovative hybrid system with zero grid injection (formed by 100 kW of PV generation and 5 kW of mini-wind generation); and an intelligent management system that controls load consumption and energy generation in the most optimal way. This project, called RENEWAT and funded under the European Commission LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes that take place in a WWTP and of introducing renewable energies into these treatment plants, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO2 emissions. Waste water treatment is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough to make the price of treated water exceed that of drinkable water. This makes it very difficult for any policy to encourage the reuse of treated water, with a great impact on the water cycle, particularly in areas suffering hydric stress or deficiency. The cost of treating waste water involves another climate-change-related burden: the energy necessary for the process is obtained mainly from the electric grid, which in most cases in Europe means energy obtained from the burning of fossil fuels. The innovative part of this project is based on the implementation, adaptation, and integration of solutions to this problem, together with a new concept of integrating energy input and operative energy demand.
Moreover, there is an important qualitative jump between the technologies previously used and the technologies proposed in the project, which gives it an innovative character: there are no similar previous experiences of a WWTP including intelligent discrimination of energy sources, integrating renewable ones (PV and wind) with the grid.
Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, waste water treatment plants
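The zero-grid-injection rule behind the hybrid system can be sketched as a simple dispatch function. This is an illustrative reading of the scheme described above, not the project's actual control algorithm: renewables serve the load first, the grid covers any deficit, and surplus renewable power is curtailed rather than exported.

```python
def dispatch(load_kw, pv_kw, wind_kw):
    """Toy zero-grid-injection dispatch rule: renewables serve the
    load first; the grid imports any deficit; surplus renewable
    power is curtailed (never exported to the grid)."""
    renewable = pv_kw + wind_kw
    used_renewable = min(renewable, load_kw)
    grid_import = load_kw - used_renewable
    curtailed = renewable - used_renewable
    return used_renewable, grid_import, curtailed

print(dispatch(80.0, 100.0, 5.0))   # surplus hour: 25 kW curtailed
print(dispatch(150.0, 100.0, 5.0))  # deficit hour: 45 kW imported
```

In practice the intelligent management system would also shift flexible loads (e.g., aeration) toward hours of renewable surplus, reducing the curtailed term.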
Procedia PDF Downloads 352
793 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions
Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen
Abstract:
Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment, because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm3 have been identified from PCCC plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in Matlab. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as functions of position in the absorber. The Matlab model is based on a weighted-residuals subclass method for boundary value problems known as the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations that describe the droplet internal profiles for all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time.
Below, some preliminary simulation results are given for an aerosol droplet's composition and temperature profiles. Results: As an example, a droplet of initial size 3 microns, initially containing a 5M MEA solution, is exposed to an atmosphere free of MEA. The composition of the gas phase and the temperature change with respect to time throughout the absorber.
Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation
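The grow-or-shrink behaviour described above can be illustrated with a far simpler toy model than the full collocation scheme: a single droplet condensing water vapour from a supersaturated gas. The sketch below is illustrative only (it is not the authors' Matlab implementation), and every parameter value is an assumed, order-of-magnitude input.

```python
# Toy model (assumed parameters, not the paper's Matlab code): a droplet
# grows or evaporates by vapour diffusion, dr/dt = D*M/(rho*R*T*r) * (p_inf - p_sat).
D = 2.5e-5        # diffusivity of water vapour in the gas, m^2/s (assumed)
M = 0.018         # molar mass of water, kg/mol
RHO_L = 1000.0    # liquid density, kg/m^3
R_GAS = 8.314     # universal gas constant, J/(mol K)
T = 313.0         # gas temperature, K (assumed)
P_SAT = 7.4e3     # saturation vapour pressure at T, Pa (assumed)

def grow_droplet(r0, saturation, t_end, dt=1e-6):
    """Explicit-Euler integration of the condensation growth law."""
    p_inf = saturation * P_SAT          # far-field water vapour pressure
    r, t = r0, 0.0
    while t < t_end:
        drdt = D * M / (RHO_L * R_GAS * T * r) * (p_inf - P_SAT)
        r = max(r + drdt * dt, 1e-9)    # clamp radius to stay physical
        t += dt
    return r

r_start = 1.5e-6  # radius of a 3-micron-diameter droplet, as in the example
r_end = grow_droplet(r_start, saturation=1.05, t_end=0.01)
print(f"radius changed from {r_start * 1e6:.2f} to {r_end * 1e6:.2f} microns")
```

With a gas saturation above 1 the droplet grows; below 1 it evaporates, mirroring the grow-or-shrink behaviour the paper simulates along the absorber.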
Procedia PDF Downloads 265
792 (Mis)Communication across the Borders: Politics, Media, and Public Opinion in Turkey
Authors: Banu Baybars Hawks
Abstract:
To date, academic attention in the social sciences to research and analysis of public opinion in Turkey remains inadequate. Most of the existing research has assessed public opinion during election periods. Therefore, it is of great interest to find out what the public thinks about current issues in Turkey, and how to interpret the results so as to reveal whether they reflect the social, political, and cultural structure of the country. Accordingly, the current study seeks to fill the gap in the English-language social sciences literature regarding Turkey's social and political stance, which may be perceived very differently by other nations. Without timely feedback from public surveys, various programs for improving services and institutions functioning in the country might not achieve their expected goals, nor can decisions about which programs to implement be made rationally. Additionally, the information gathered may yield important insights not only into the public's opinion regarding the current agenda in Turkey, but also into the correlates shaping public policies. Agenda-setting studies, including agenda-building, agenda-melding, reversed agenda-setting, and information diffusion studies, will be used to explain the roles of factors and actors in the formation of public opinion in Turkey. Knowing the importance of the public agenda in the agenda-setting and agenda-building process, this paper aims to reveal the social and political tendencies of the Turkish public. For that purpose, a survey will be carried out in December 2014 to determine the social and political trends in Turkey for that year. The subjects for the study, which utilizes a questionnaire in one-on-one interviews, will include 1,000 individuals aged 18 years and older from 26 cities, representing the general population. A stratified random sampling frame will be used.
The topics covered by the survey include: the most important current problem in Turkey; the economy; terror; approaches to the Kurdish issue; evaluations of the government and opposition parties; evaluations of institutional efficiency; foreign policy; the judicial system/constitution; democracy and the media; and social relations/life in Turkey. Since the beginning of the 21st century, Turkey has been undergoing a rapid transformation. The reflections of these changes can be seen in all areas, from economics to politics. It is my hope that the findings of this study may shed light on important aspects of institutions, the variables setting the agenda, and the formation process of public opinion in Turkey.
Keywords: public opinion, media, agenda setting, information diffusion, government, freedom, Turkey
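For a stratified frame like the one described, a common practical step is allocating the fixed sample of 1,000 respondents across the 26 city strata in proportion to population. The survey's actual allocation rule and city populations are not given, so the sketch below is only a hedged illustration of one standard approach, largest-remainder proportional allocation:

```python
def proportional_allocation(populations, total_n):
    """Largest-remainder proportional allocation of a fixed sample size
    across strata (illustrative; the survey's actual frame is not stated)."""
    total = sum(populations)
    quotas = [p * total_n / total for p in populations]   # exact proportional shares
    counts = [int(q) for q in quotas]                     # round each share down
    remainder = total_n - sum(counts)
    # hand the leftover units to the largest fractional remainders
    order = sorted(range(len(quotas)),
                   key=lambda i: quotas[i] - counts[i], reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts

# Hypothetical three-city example (populations are made-up numbers):
print(proportional_allocation([500_000, 300_000, 200_000], 10))
```

The same function applied to 26 population figures and `total_n=1000` would yield per-city quotas that sum exactly to the sample size.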
Procedia PDF Downloads 467
791 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards
Authors: Nadezhda Kvatashidze, Elena Kharabadze
Abstract:
It is broadly known that leasing is a flexible means of funding enterprises. Leasing reduces the risk related to access to and possession of assets, as well as to obtaining funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the applicable standard (International Accounting Standard 17) make concealment of liabilities possible. As a result, information users get inaccurate and incomplete information and have to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 'Leases') aims at supplying appropriate and fair lease-related information to users. Save for certain exclusions, the lessee is obliged to recognize all lease agreements in its financial statements. The approach was determined by the fact that, under a lease agreement, rights and obligations arise in the form of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes an asset into its disposal and assumes the obligation to make the lease-related payments, thereby meeting the recognition criteria defined by the Conceptual Framework for Financial Reporting. The payments are to be entered into the financial statements. The new lease accounting standard secures the supply of quality and comparable information to financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15 'Revenue from Contracts with Customers'.
The standard establishes detailed practical criteria for revenue recognition, such as identification of the performance obligations in the contract, determination of the transaction price and its components (especially variable consideration and other important components), and the passage of control over the asset to the customer. IFRS 15 'Revenue from Contracts with Customers' is very similar to the relevant US standards and includes requirements that are more specific and consistent than those of the standards previously in place. The new standard is going to change the recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value
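The recognition principle described above for IFRS 16 — the lessee initially measures a lease liability at the present value of the lease payments — can be illustrated numerically. The sketch below is a deliberately simplified, hypothetical example (fixed annual payments in arrears, a single discount rate); real measurement also deals with payment timing, variable payments, and the incremental borrowing rate.

```python
def lease_liability(annual_payment, rate, years):
    """Initial lease liability under the IFRS 16 principle: present value of
    the fixed lease payments, discounted at the rate implicit in the lease.
    Assumes equal annual payments made in arrears (a simplification)."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical figures: 10,000 per year for 5 years, discounted at 5%.
liability = lease_liability(10_000, 0.05, 5)
print(f"initial lease liability: {liability:,.2f}")   # about 43,294.77
```

The lessee would recognise a right-of-use asset of the same initial amount, then unwind interest on the liability and depreciate the asset over the lease term.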
Procedia PDF Downloads 320
790 Effects of Irregular Migration from Different Aspects of Security
Authors: Muzaffer Topgul, Hasan Atac
Abstract:
Although migration is not a new phenomenon, communities have migrated for a variety of reasons, such as natural disasters, famine, wars, and economic problems, and several theories have been put forth to define migration and find solutions for it within its changing nature. Examination of migration theories shows that the circumstances under which they appear reflect the political, social, and economic conditions of their age. In this day and age, security is considered not only from a military perspective but also along economic, political, and sociological dimensions. Based on the changing security environment, new impacts of migration have emerged: migration has come to be regarded as a type of war, qualified as a transnational crime because of its outcomes, and interpreted in a different dimension owing to its effects on health and education. The social security dimension, in the context of the expanding concept of security, deals with the safety of people and social groups; on the assumption that national unity and identity are threatened, it sees immigrants as a source of threat. Human security assesses the safety of individuals in terms of survival and quality of life. Changes in the standard of living under the influence of immigrants, and possible terrorist acts, can be seen as threat sources in this type of security. The economic security of individuals and the regional changes created at the micro level by immigrants are issues covered by economic security. Due to factors such as terrorism and civil war, the increasing numbers of displaced people who have taken refugee status affect countries both near and far from the crisis areas, in new and different dimensions of security, day by day.
In this study, the term immigration will be evaluated through the eyes of national and international law, the place of irregular and illegal immigration in the changing security sphere will be revealed, and the effects of irregular migration on short-term, mid-term, and long-term security issues will be assessed through the human and social security aspects. In order to analyze the threats to human security, parameters such as the living conditions of the immigrants, the gender ratio, birth rates, the educational circumstances of immigrant children, and the effects of illegal crossings on public order will be evaluated. The outcomes of the problem areas for human security and the demographic alteration resulting from the flow of displaced people will be discussed within the social security dimension. The faltering economic diversity brought about by irregular migration will be presented within the scope of the economic dimension of security.
Keywords: irregular migration, the changing dimensions of security, human security, social security
Procedia PDF Downloads 336
789 iPSCs More Effectively Differentiate into Neurons on PLA Scaffolds with High Adhesive Properties for Primary Neuronal Cells
Authors: Azieva A. M., Yastremsky E. V., Kirillova D. A., Patsaev T. D., Sharikov R. V., Kamyshinsky R. A., Lukanina K. I., Sharikova N. A., Grigoriev T. E., Vasiliev A. L.
Abstract:
The adhesive properties of scaffolds, which depend predominantly on the chemical and structural features of their surface, play the most important role in tissue engineering. The basic requirements for such scaffolds are biocompatibility, biodegradability, and high cell adhesion, which promotes cell proliferation and differentiation. In many cases, synthetic polymer scaffolds have proven advantageous because they are easy to shape, tough, and have high tensile properties. The regeneration of nerve tissue still remains a big challenge for medicine, and neural stem cells offer promising therapeutic potential for cell replacement therapy. However, experiments with stem cells have their limitations, such as low cell viability and poor control of cell differentiation, whereas the study of already differentiated neuronal cell cultures obtained from newborn mouse brain is limited mainly by cell adhesion. The growth and implantation of neuronal cultures require proper scaffolds. Moreover, polymer scaffold implants with neuronal cells may demand specific morphology. To date, numerous synthetic polymers have been proposed for these purposes, including polystyrene, polylactic acid (PLA), polyglycolic acid, and polylactide-glycolic acid. Tissue regeneration experiments have demonstrated good biocompatibility of PLA scaffolds, despite the hydrophobic nature of the compound. The problem of poor wettability of the PLA scaffold surface can be overcome in several ways: the surface can be pre-treated with poly-D-lysine or polyethyleneimine peptides; the roughness and hydrophilicity of the PLA surface can be increased by plasma treatment; or PLA can be combined with natural fibers, such as collagen or chitosan.
This work presents a study of the adhesion of both induced pluripotent stem cells (iPSCs) and mouse primary neuronal cell cultures on polylactide scaffolds of various types: oriented and non-oriented fibrous nonwoven materials and sponges, with and without plasma treatment, and composites with collagen and chitosan. To evaluate the effect of different types of PLA scaffolds on the neuronal differentiation of iPSCs, we assessed the expression of NeuN in differentiated cells through immunostaining. iPSCs differentiated into neurons more effectively on PLA scaffolds with high adhesive properties for primary neuronal cells.
Keywords: PLA scaffold, neurons, neuronal differentiation, stem cells, polylactide
Procedia PDF Downloads 84
788 Environmental Management Accounting Practices and Policies within the Higher Education Sector: An Exploratory Study of the University of KwaZulu Natal
Authors: Kiran Baldavoo, Mishelle Doorasamy
Abstract:
Universities have a role to play in the preservation of the environment, and this study evaluated the environmental management accounting (EMA) processes at the University of KwaZulu-Natal (UKZN). UKZN, a South African university, generates the same direct and indirect environmental impacts as the higher education sector worldwide. This is significant within the context of the South African environment, which is constantly challenged to manage effectively the already scarce resources of water and energy, as evidenced by the water and energy restrictions imposed over recent years. The study's aim is to increase awareness of having a structured approach to environmental management in order to achieve the strategic environmental goals of the university. The research studied the experiences of key managers within UKZN, with the purpose of exploring the potential factors which influence the decision to adopt and apply EMA within the higher education sector. The study comprised two objectives, namely understanding the current state of accounting practices for managing major environmental costs and identifying factors influencing EMA adoption within the university. The study adopted a case study approach, comprising semi-structured interviews of key personnel involved in management accounting, environmental management, and academic schools within the university. Content analysis was performed on the transcribed interview data. A theoretical framework derived from the literature was adopted to guide data collection and focus the study; contingency and institutional theory formed the basis of this framework. The findings of the first objective revealed that there was a distinct lack of EMA utilization within the university. There was no distinct policy on EMA, resulting in minimal environmental cost information being brought to the attention of senior management.
The university embraced the principles of environmental sustainability; however, efforts to improve internal environmental accountability, primarily from an accounting perspective, were absent. The findings of the second objective revealed that five key barriers contributed to the lack of EMA utilization within the university: attitudinal, informational, institutional, technological, and a lack of (financial) incentives. The results and findings of this study supported the use and application of EMA within the higher education sector. Participants concurred that EMA was underutilized and, if implemented, would realize significant benefits for both the university and the environment. Environmental management accounting is widely acknowledged as a key management tool that can facilitate improved financial and environmental performance via the concept of enhanced environmental accountability. Historically, research has concentrated primarily on the manufacturing industry, as it generates the greatest proportion of environmental impacts. Service industries are also an integral component of environmental management, as they contribute significant environmental impacts, both direct and indirect. Educational institutions such as universities form part of the service sector and directly impact the environment through the consumption of paper, energy, and water and the solid waste generated, with the associated demands.
Keywords: environmental management accounting, environmental impacts, higher education, Southern Africa
Procedia PDF Downloads 124
787 A Randomised Controlled Trial and Process Evaluation of the Lifestart Parenting Programme
Authors: Sharon Millen, Sarah Miller, Laura Dunne, Clare McGeady, Laura Neeson
Abstract:
This paper presents the findings from a randomised controlled trial (RCT) and process evaluation of the Lifestart parenting programme. Lifestart is a structured child-centred programme of information and practical activity for parents of children aged from birth to five years of age. It is delivered to parents in their own homes by trained, paid family visitors and it is offered to parents regardless of their social, economic or other circumstances. The RCT evaluated the effectiveness of the programme and the process evaluation documented programme delivery and included a qualitative exploration of parent and child outcomes. 424 parents and children participated in the RCT: 216 in the intervention group and 208 in the control group across the island of Ireland. Parent outcomes included: parental knowledge of child development, parental efficacy, stress, social support, parenting skills and embeddedness in the community. Child outcomes included cognitive, language and motor development and social-emotional and behavioural development. Both groups were tested at baseline (when children were less than 1 year old), mid-point (aged 3) and at post-test (aged 5). Data were collected during a home visit, which took two hours. The process evaluation consisted of interviews with parents (n=16 at baseline and end-point), and focus groups with Lifestart Coordinators (n=9) and Family Visitors (n=24). Quantitative findings from the RCT indicated that, compared to the control group, parents who received the Lifestart programme reported reduced parenting-related stress, increased knowledge of their child’s development, and improved confidence in their parenting role. These changes were statistically significant and consistent with the hypothesised pathway of change depicted in the logic model. There was no evidence of any change in parents’ embeddedness in the community. 
Although four of the five child outcomes showed small positive changes for children who took part in the programme, these were not statistically significant, and there is no evidence that the programme improves child cognitive and non-cognitive skills by immediate post-test. The qualitative process evaluation highlighted important challenges related to conducting trials of this magnitude and design in the general population. Parents reported that a key incentive to take part in the study was receiving feedback from the developmental assessment, which formed part of the data collection. This highlights the potential importance of appropriate incentives for the recruitment and retention of participants. The interviews with intervention parents indicated that one of the first changes they experienced as a result of the Lifestart programme was increased knowledge and confidence in their parenting ability. The outcomes and pathways perceived by parents and described in the interviews are also consistent with the findings of the RCT and the theory of change underpinning the programme, which hypothesises that improvements in parental outcomes, arising as a consequence of the programme, mediate the change in child outcomes. Parents receiving the Lifestart programme reported great satisfaction with, and commitment to, the programme, with the role of the Family Visitor identified as one of its key components.
Keywords: parent-child relationship, parental self-efficacy, parental stress, school readiness
Procedia PDF Downloads 444
786 Analysis of Waterjet Propulsion System for an Amphibious Vehicle
Authors: Nafsi K. Ashraf, C. V. Vipin, V. Anantha Subramanian
Abstract:
This paper reports the design of a waterjet propulsion system for an amphibious vehicle based on the circulation distribution over the camber line for the sections of the impeller and stator. In contrast with conventional waterjet designs, the inlet duct is straight, so water enters parallel to and in line with the nozzle exit. The extended nozzle after the stator bowl makes the flow more axial, further improving thrust delivery. A waterjet works on the principle of volume flow rate through the system; unlike the propeller, it is an internal-flow system. The major difference between the propeller and the waterjet occurs in the flow passing the actuator. Though a ducted propeller could constitute the equivalent of waterjet propulsion, in a realistic situation the waterjet's nozzle area would be proportionately larger relative to the inlet area and propeller disc area. Moreover, the flow rate through the impeller disk is controlled by the nozzle area. For these reasons, waterjet design is based on pump systems rather than propellers, and it is therefore important to bring out the characteristics of the flow from this point of view. The analysis is carried out using computational fluid dynamics. The design of the waterjet propulsion adapts axial-flow pump design, and performance analysis was done with a three-dimensional computational fluid dynamics (CFD) code. Given the varying environmental conditions, the need for high discharge and low head, and the space confinement of the given amphibious vehicle, an axial pump design is suitable. The major problem with the inlet velocity distribution is the large variation of velocity in the circumferential direction, which gives rise to heavy, time-varying blade loading. The cavitation criteria have also been taken into account as per hydrodynamic pump design practice. Generally, a waterjet propulsion system can be divided into the inlet, the pump, the nozzle, and the steering device.
The pump further comprises an impeller and a stator. Analytical and numerical approaches, such as a RANSE solver, have been undertaken to understand the performance of the designed waterjet propulsion system. Unlike in the case of propellers, the analysis was based on the head-flow curve together with efficiency and power curves. The modeling of the impeller is performed using a rigid body motion approach. The realizable k-ϵ model has been used for turbulence modeling. Appropriate boundary conditions are applied to the domain, and domain-size and grid-dependence studies are carried out.
Keywords: amphibious vehicle, CFD, impeller design, waterjet propulsion
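The volume-flow-rate principle mentioned above can be made concrete with a one-line momentum estimate: thrust equals the mass flow rate times the change in flow velocity, with the jet velocity fixed by the nozzle area. This is only a back-of-the-envelope sketch, not the paper's CFD analysis, and the numbers in the usage example are assumptions.

```python
RHO = 1025.0  # seawater density, kg/m^3

def waterjet_thrust(q, a_nozzle, v_inflow):
    """Momentum-flux estimate of waterjet thrust: T = rho * Q * (Vj - Vi),
    where the jet velocity is set by the nozzle area (Vj = Q / A_nozzle)."""
    v_jet = q / a_nozzle
    return RHO * q * (v_jet - v_inflow)

# Assumed example figures: 0.5 m^3/s through a 0.05 m^2 nozzle at 4 m/s inflow.
print(f"thrust ~ {waterjet_thrust(0.5, 0.05, 4.0):.0f} N")
```

The same relation shows why the nozzle area controls the flow rate: for a fixed pump head, shrinking the nozzle raises the jet velocity and lowers the discharge the pump can deliver.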
Procedia PDF Downloads 228
785 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions
Authors: Erva Akin
Abstract:
The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe on their intellectual property rights. In order to overcome the copyright hurdle to the sharing, access, and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such 'copy-reliant technologies' is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense operates within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides for a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a 'reproduction' in the first place. Nevertheless, the use of machine learning with copyrighted material is difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data.
The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes, or to permit the reproduction of works for non-expressive purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes outright, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to provide more space for AI developers to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance of general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.
Keywords: artificial intelligence, copyright, data governance, machine learning
Procedia PDF Downloads 83
784 Longitudinal Impact on Empowerment for Ugandan Women with Post-Primary Education
Authors: Shelley Jones
Abstract:
Assumptions abound that education for girls will, as a matter of course, lead to their economic empowerment as women; yet little is known about the ways in which schooling for girls who traditionally/historically would not have had opportunities for post-primary, or perhaps even primary, education (such as the participants in this study, based in rural Uganda) actually affects their economic situations. There is a need for longitudinal studies in which women share experiences, understandings, and reflections on their lives that can inform our knowledge of this. In response, this paper reports on stage four of a longitudinal case study (2004-2018) focused on education and empowerment for girls and women in rural Uganda, in which 13 of the 15 participants from the original study took part. This paper understands empowerment as not simply increased opportunities (e.g., employment) but also real gains in power, freedoms that enable agentive action, and authentic and viable choices/alternatives that offer 'exit options' from unsatisfactory situations. As with the other stages, this study used a critical, postmodernist, global feminist ethnographic methodology with multimodal and qualitative data collection. Participants took part in interviews, focus group discussions, and a two-day workshop, which explored their understandings of how, and whether, post-primary education had contributed to their economic empowerment. A constructivist grounded theory approach was used for data analysis to capture major themes. Findings indicate that although all participants believe that post-primary education provided them with economic opportunities they would not have had otherwise, the parameters of their economic empowerment were severely constrained by historic and extant sociocultural, economic, political, and institutional structures that continue to disempower girls and women, as well as by additional financial responsibilities that they assumed to support others.
Even though the participants had post-primary education and were able to obtain employment or operate their own businesses, which they would not likely have been able to do without it, the majority of the participants' incomes were not sufficient to lift them above the extreme poverty level, especially as many were single mothers and the sole income earners in their households. Furthermore, most deemed their working conditions unsatisfactory and their positions precarious; they also experienced sexual harassment and abuse in the labour force. Additionally, employment for the participants resulted in a double work burden: long days at work, followed by many hours of domestic work at home (which, even if they had spousal partners, still fell almost exclusively to women). In conclusion, although the participants seem to have experienced some increase in economic empowerment, largely due to skills, knowledge, and qualifications gained at the post-primary level, numerous barriers prevented them from maximizing their capabilities and making significant gains in empowerment. There is a need, in addition to providing education (primary, secondary, and tertiary) to girls, to address the systemic gender inequalities that militate against women's empowerment, and to provide opportunities and freedom for women to come together and demand fair pay, reasonable working conditions and benefits, and freedom from gender-based harassment and assault in the workplace, as well as to advocate for the equal distribution of domestic work as a cultural change.
Keywords: girls' post-primary education, women's empowerment, Uganda, employment
Procedia PDF Downloads 146
783 Modern Technology-Based Methods in Neurorehabilitation for Social Competence Deficit in Children with Acquired Brain Injury
Authors: M. Saard, A. Kolk, K. Sepp, L. Pertens, L. Reinart, C. Kööp
Abstract:
Introduction: Social competence is often impaired in children with acquired brain injury (ABI), but evidence-based rehabilitation for social skills has remained undeveloped. Modern technology-based methods create effective and safe learning environments for pediatric social skills remediation. The aim of the study was to implement our structured model of neurorehabilitation for socio-cognitive deficit using multitouch-multiuser tabletop (MMT) computer-based platforms and virtual reality (VR) technology. Methods: 40 children aged 8-13 years (yrs) participated in the pilot study: 30 with ABI (epilepsy, traumatic brain injury, and/or tic disorder) and 10 healthy age-matched controls. Of the patients, 12 have completed the training (M = 11.10 yrs, SD = 1.543) and 20 are still in training or in the waiting-list group (M = 10.69 yrs, SD = 1.704). All children performed the first individual and paired assessments. For patients, second evaluations were performed after the intervention period. Two interactive applications were implemented into the rehabilitation design: Snowflake software on an MMT tabletop and NoProblem on a DiamondTouch Table (DTT), which allowed paired training (two children at once). Also, in individual training sessions, an HTC Vive VR device was used with VR metaphors of difficult social situations to treat social anxiety and train social skills. Results: At baseline (B) evaluations, patients had higher deficits in executive functions on the BRIEF parent questionnaire (M = 117, SD = 23.594) compared to healthy controls (M = 22, SD = 18.385). The most impaired components of social competence were emotion recognition, Theory of Mind (ToM) skills, cooperation, verbal/non-verbal communication, and pragmatics (Friendship Observation Scale scores only 25-50% out of 100% for patients). In the Sentence Completion Task and the Spence Anxiety Scale, the patients reported a lack of friends, behavioral problems, bullying in school, and social anxiety.
Outcome evaluations: Snowflake on the MMT improved executive and cooperation skills, and the DTT developed communication skills, metacognitive skills, and coping. VR, video modelling, and role-plays improved social attention, emotional attitude, and gestural behaviors, and decreased social anxiety. NEPSY-II showed improvement in Affect Recognition [B = 7, SD = 5.01 vs. outcome (O) = 10, SD = 5.85], Verbal ToM (B = 8, SD = 3.06 vs. O = 10, SD = 4.08), and Contextual ToM (B = 8, SD = 3.15 vs. O = 11, SD = 2.87). The ToM Stories test showed an improved understanding of Intentional Lying (B = 7, SD = 2.20 vs. O = 10, SD = 0.50) and Sarcasm (B = 6, SD = 2.20 vs. O = 7, SD = 2.50). Conclusion: Neurorehabilitation based on the Structured Model of Neurorehab for Socio-Cognitive Deficit in children with ABI was effective in social skills remediation. The model helps to understand theoretical connections between components of social competence and modern interactive computerized platforms. We encourage therapists to implement these next-generation devices into the rehabilitation process, as MMT and VR interfaces are motivating for children, thus ensuring good compliance. Improving children’s social skills is important for their and their families’ quality of life and social capital.
Keywords: acquired brain injury, children, social skills deficit, technology-based neurorehabilitation
Procedia PDF Downloads 120
782 Antioxidant and Anti-Lipid Peroxidation Activities of Some Thai Medicinal Plants Traditionally Used for the Treatment of Benign Prostatic Hyperplasia
Authors: Wararut Buncharoen, Kanokporn Saenphet, Supap Saenphet
Abstract:
Benign prostatic hyperplasia (BPH) is a reproductive disorder affecting elderly men worldwide. Several factors, particularly free radical reactions and oxidative damage, are considered key contributors to the development of BPH. A number of medicinal plants with high antioxidant properties are included in the Thai herbal pharmacopoeia for treating BPH. These plants may prevent or delay the progression of BPH through an antioxidant mechanism. Thus, this study aimed to assess the antioxidant and anti-lipid peroxidation potential of medicinal plants traditionally used for the treatment of BPH, namely Artabotrys harmandii Finet & Gagnep. Miq., Uvaria rufa Blume, Anomianthus dulcis (Dunal) J. Sinclair, and Caesalpinia sappan Linn. Antioxidant parameters, including free radical (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS•+), 2,2-diphenyl-1-picrylhydrazyl (DPPH•), and superoxide) scavenging, ferric reducing power, and anti-lipid peroxidation activity, were determined in different crude extracts from the stems of these four plants. Total phenolic and ascorbic contents were also investigated. The highest total phenolic content was found in the ethyl acetate crude extract of A. dulcis (510 ± 26.927 µg GAE/g extract), while the highest ascorbic content was found in the ethanolic extract of U. rufa (234.727 ± 30.356 µg AAE/g extract). The strongest scavenging activity toward ABTS•+ and DPPH• was found in the ethyl acetate extract of C. sappan, with IC50 values of 0.469 and 0.255 mg/ml, respectively. The petroleum ether extracts of C. sappan and U. rufa at a concentration of 1 mg/ml exhibited high scavenging activity toward superoxide radicals, with inhibition of 37.264 ± 8.672% and 34.434 ± 6.377%, respectively. The ethyl acetate crude extract of C. sappan displayed the greatest reducing power. The IC50 value of the water extract of A. dulcis was 1.326 mg/ml, indicating the strongest inhibition of lipid peroxidation among all plant extracts, whereas the IC50 value of the standard, butylated hydroxytoluene, was 1.472 µg/ml. From all the obtained results, it can be concluded that the stems of A. dulcis, U. rufa, and C. sappan are potential natural antioxidants and could be of value as therapeutic agents in preventing diseases related to free radicals and oxidative damage, including BPH.
Keywords: anti-lipid peroxidation, antioxidant, benign prostatic hyperplasia, Thai medicinal plants
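The scavenging percentages and IC50 values reported above follow the standard dose-response calculation (% inhibition relative to a control absorbance, with IC50 taken where inhibition crosses 50%). As an illustrative sketch only, with hypothetical absorbance readings rather than the study's data:

```python
import numpy as np

def percent_scavenging(a_control, a_sample):
    """Radical scavenging (DPPH/ABTS style): % inhibition vs. a blank control."""
    return (a_control - a_sample) / a_control * 100.0

def ic50(concs_mg_ml, inhibitions):
    """Estimate IC50 by linear interpolation between dose-response points."""
    return float(np.interp(50.0, inhibitions, concs_mg_ml))

# Hypothetical readings, for illustration only
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8])        # extract concentration, mg/ml
a_ctrl = 0.90                                       # control absorbance
a_smp = np.array([0.81, 0.70, 0.52, 0.30, 0.10])    # sample absorbances

inh = percent_scavenging(a_ctrl, a_smp)
print(ic50(conc, inh))
```

Linear interpolation assumes the response is roughly linear between the two concentrations bracketing 50% inhibition; curve fitting (e.g. a four-parameter logistic) would be used for a full dose-response analysis.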
Procedia PDF Downloads 480
781 Crisis Management and Corporate Political Activism: A Qualitative Analysis of Online Reactions toward Tesla
Authors: Roxana D. Maiorescu-Murphy
Abstract:
In the US, corporations have recently embraced political stances in an attempt to respond to the external pressure exerted by activist groups. To date, research in this area remains in its infancy, and few studies have been conducted on the way stakeholder groups respond to corporate political advocacy in general, and in the immediacy of such a corporate announcement in particular. The current study aims to fill this research void. In addition, the study contributes to an emerging trajectory in the field of crisis management by focusing on the delineation between crises (unexpected events related to products and services) and scandals (crises that spur moral outrage). The present study looked at online reactions in the aftermath of Elon Musk’s endorsement of the Republican party on Twitter. Two data sets were collected from Twitter following two political endorsements made by Elon Musk on May 18, 2022, and June 15, 2022, respectively. The total sample of analysis stemming from the two data sets consisted of N = 1,374 user comments written in response to Musk’s initial tweets. Given the paucity of studies in the preceding research areas, the analysis employed a case study methodology, which is used in circumstances in which the phenomena to be studied have not been researched before. Following the case study methodology, which answers the questions of how and why a phenomenon occurs, this study responded to the research questions of how online users perceived Tesla and why they did so. The data were analyzed in NVivo using grounded theory methodology, which implied multiple exposures to the text and an inductive-deductive approach. Through multiple exposures to the data, the researcher ascertained the common themes and subthemes in the online discussion. Each theme and subtheme was later defined and labeled. Additional exposures to the text ensured that these were exhaustive.
The results revealed that the CEO’s political endorsements triggered moral outrage, leading to Tesla’s facing a scandal as opposed to a crisis. The moral outrage revolved around the stakeholders’ predominant rejection of a perceived intrusion by an influential figure into a domain reserved for voters. As expected, Musk’s political endorsements led to polarizing opinions, and those who opposed his views engaged in online activism aimed at boycotting the Tesla brand. These findings reveal that the moral outrage that characterizes a scandal requires communication practices that differ from those that practitioners currently borrow from the field of crisis management. Specifically, because scandals flourish in online settings, practitioners should regularly monitor stakeholder perceptions and address them in real time. While promptness is essential when managing crises, it becomes crucial to respond immediately as a scandal is unfolding online. Finally, attempts should be made to distance a brand, its products, and its CEO from the latter’s political views.
Keywords: crisis management, communication management, Tesla, corporate political activism, Elon Musk
Procedia PDF Downloads 91
780 Assessment of Physical Learning Environments in ECE: Interdisciplinary and Multivocal Innovation for Chilean Kindergartens
Authors: Cynthia Adlerstein
Abstract:
Physical learning environment (PLE) has been considered, after family and educators, the third teacher. There have been conflicting and converging viewpoints on the role of the physical dimensions of places to learn in facilitating educational innovation and quality. Despite the different approaches, PLE has been widely recognized as a key factor in the quality of the learning experience and in the levels of learning achievement in ECE. The conceptual frameworks of the field assume that PLE consists of a complex web of factors that shape the overall conditions for learning, and that much more interdisciplinary and complementary methodologies of research and development are required. Although the relevance of PLE attracts broad international consensus, in Chile it remains under-researched and weakly regulated by public policy. Gaining deeper contextual understanding and more thoughtfully designed recommendations requires the use of innovative assessment tools that cross cultural and disciplinary boundaries to produce new hybrid approaches and improvements. When considering a PLE-based change process for ECE improvement, a central question is what dimensions, variables, and indicators could allow a comprehensive assessment of PLE in Chilean kindergartens. Based on a grounded theory social justice inquiry, we adopted a mixed-method design that enabled a multivocal and interdisciplinary construction of data. Using in-depth interviews, discussion groups, questionnaires, and documentary analysis, we elicited the PLE discourses of politicians, early childhood practitioners, experts in architectural design and ergonomics, ECE stakeholders, and 3- to 5-year-olds. A constant comparison method enabled the construction of the dimensions, variables, and indicators through which PLE assessment is possible.
Subsequently, the instrument was applied in a sample of 125 early childhood classrooms to test reliability (internal consistency) and validity (content and construct). As a result, an interdisciplinary and multivocal tool for assessing physical learning environments was constructed and validated for Chilean kindergartens. The tool is structured around 7 dimensions (wellbeing, flexible, empowerment, inclusiveness, symbolically meaningful, pedagogically intentioned, institutional management), 19 variables, and 105 indicators that are assessed through observation and registration on a mobile app. The overall reliability of the instrument is .938, while the consistency of each dimension varies between .773 (inclusive) and .946 (symbolically meaningful). The validation process, through expert opinion and factorial analysis (chi-square test), has shown that the dimensions of the assessment tool reflect the factors of physical learning environments. The constructed assessment tool for kindergartens highlights the significance of the physical environment in early childhood educational settings. The relevance of the instrument lies in its interdisciplinary approach to PLE and in its capability to guide innovative learning environments based on educational habitability. Though further analyses are required for concurrent validation and standardization, the tool has been considered by practitioners and ECE stakeholders as an intuitive, accessible, and remarkable instrument for raising awareness of PLE and of the equitable distribution of learning opportunities.
Keywords: Chilean kindergartens, early childhood education, physical learning environment, third teacher
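The internal-consistency figures above (.938 overall, .773 to .946 per dimension) are of the Cronbach's alpha form. A minimal sketch of that computation, on a hypothetical classrooms-by-indicators score matrix rather than the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (indicators)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item across units
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 4 classrooms scored on 3 indicators of one dimension
ratings = [[3, 4, 3], [2, 2, 3], [4, 5, 5], [1, 2, 1]]
print(round(cronbach_alpha(ratings), 3))
```

Values near 1 indicate that the indicators of a dimension move together across classrooms; perfectly correlated items yield exactly 1.0.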
Procedia PDF Downloads 357
779 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method
Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat
Abstract:
Electrical discharge machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals, between an electrode tool and the part to be machined, immersed in dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time, and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions; however, RSM is not free from problems when applied to multi-factor, multi-response situations. A design of experiments (DOE) technique was used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the EDM process and to investigate the feasibility of design of experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of optimized settings of key machining factors, such as pulse-on time, gap voltage, flushing pressure, input current, and duty cycle, on material removal and surface roughness is carried out using a central composite design (CCD). The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms. Insignificant coefficients are eliminated from these models using the Student's t-test, and the F-test is used to check goodness of fit. CCD is first used to determine the optimal factors of the EDM process for maximizing the MRR.
The responses are further treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the EDM process. The results demonstrate the good performance of CCD-data-based RSM for optimizing the EDM process.
Keywords: electric discharge machining (EDM), modeling, optimization, CCRD
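The second-order polynomial model with interaction terms described above can be sketched as an ordinary least-squares fit over a rotatable two-factor CCD (4 factorial points, 4 axial points at ±1.414, 1 center point). The coded factor settings and MRR responses below are hypothetical placeholders, not the paper's experimental data:

```python
import numpy as np
from itertools import combinations

def design_matrix(X):
    """Second-order RSM model: intercept, linear, interaction, and squared terms."""
    n, k = X.shape
    cols = [np.ones(n)]                                            # intercept
    cols += [X[:, i] for i in range(k)]                            # linear terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]  # interactions
    cols += [X[:, i] ** 2 for i in range(k)]                       # quadratic terms
    return np.column_stack(cols)

# Hypothetical coded settings (e.g. pulse-on time, peak current) and MRR responses
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0],
              [1.414, 0], [-1.414, 0], [0, 1.414], [0, -1.414]])
y = np.array([5.1, 7.8, 6.0, 9.5, 8.2, 9.0, 4.7, 7.5, 6.3])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
mrr_hat = design_matrix(X) @ beta   # fitted MRR at the design points
```

In the actual workflow, each coefficient in `beta` would then be screened with a t-test and the retained model checked with an F-test before optimizing the fitted surface.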
Procedia PDF Downloads 341
778 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting
Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero
Abstract:
In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (fused deposition modeling) printer or a DLP (digital light processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of ceramic shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, obtaining an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, the sculptures created with this technique have a size limit: when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer allows obtaining more detailed and smaller pieces than the FDM. Such small models are quite difficult and complex to melt using the lost-wax technique of ceramic shell casting. As alternatives, there are microcasting and electroforming, which are specialized in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and cure the refractory.
Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to properly distribute all the metal. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for using 3D printers, both with PLA and resin, and first tests are being carried out to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.
Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling
Procedia PDF Downloads 135
777 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to improving people’s quality of life. To realize these advantages, the city of Seoul has constructed an integrated transit system including both subway and buses; as a result, approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the critical objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating subway ridership at the station level, many previous studies relied on ordinary least squares (OLS) regression, but there was a lack of studies considering the endogeneity issues that might arise in a subway ridership prediction model. This study focused on discovering both the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership, ultimately contributing to more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, with the temporal scope set to twenty-four hours divided into one-hour panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory.
The results of the integrated transit network topology analysis were compared to those for the subway-only network. Also, a non-recursive approach, three-stage least squares, was applied to develop the daily subway ridership model while capturing the endogeneity between bus and subway demands. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that the network topology measures had significant effects. In particular, for the centrality measures, the elasticity was 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner, as predicted bus ridership and predicted subway ridership are statistically significant in the OLS regression models. Therefore, the three-stage least squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership
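The centrality measures used above come from complex network theory. As a minimal pure-Python sketch, closeness centrality of each station is the number of reachable stations divided by the sum of shortest-path distances to them, computed here by BFS on an unweighted toy graph (the stations and links are invented for illustration, not Seoul's network):

```python
from collections import deque

def closeness_centrality(adj):
    """Closeness centrality via BFS shortest paths on an unweighted station graph."""
    scores = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                      # breadth-first search from station s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total = sum(dist.values())    # sum of shortest-path lengths from s
        scores[s] = (len(dist) - 1) / total if total else 0.0
    return scores

# Toy integrated network: stations A-E with hypothetical links
network = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "D"],
           "D": ["B", "C", "E"], "E": ["D"]}
print(closeness_centrality(network))
```

Transfer hubs ("B" and "D" here) score highest, which is the intuition behind relating centrality to station-level ridership.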
Procedia PDF Downloads 177
776 Valorization of Mineralogical Byproduct TiO₂ Using Photocatalytic Degradation of Organo-Sulfur Industrial Effluent
Authors: Harish Kuruva, Vedasri Bai Khavala, Tiju Thomas, K. Murugan, B. S. Murty
Abstract:
Industries are growing day by day to boost the economy of the country, and the biggest problem with industries is wastewater treatment. Releasing this wastewater directly into rivers is harmful to human life and a threat to aquatic life. Industrial effluents contain many dissolved solids, organic and inorganic compounds, salts, toxic metals, etc. Phenols, pesticides, dioxins, herbicides, pharmaceuticals, and textile dyes are typical industrial effluents, and they are challenging to degrade in an eco-friendly manner. Many advanced techniques, such as electrochemical treatment, oxidation processes, and valorization, have been applied to industrial wastewater treatment, but these are not cost-effective. Degrading industrial effluent is complicated compared to commercially available pollutants (dyes) like methylene blue, methyl orange, rhodamine B, etc. TiO₂ is one of the most widely used photocatalysts; it can degrade organic compounds using sunlight and the moisture available in the environment (organic compounds are converted to CO₂ and H₂O). TiO₂ is widely studied in photocatalysis because of its low cost, non-toxicity, high availability, and chemical and physical stability in the atmosphere. This study mainly focused on valorizing the mineralogical product TiO₂ (IREL, India). This mineralogical-grade TiO₂ was characterized, and its structural and photocatalytic properties (industrial effluent degradation) were compared with those of the commercially available Degussa P-25 TiO₂. It was found that this mineralogical TiO₂ has good photocatalytic properties (particle shape - spherical, size - 30±5 nm, surface area - 98.19 m²/g, bandgap - 3.2 eV, phase - 95% anatase and 5% rutile). The industrial effluent was characterized by TDS (total dissolved solids), ICP-OES (inductively coupled plasma - optical emission spectroscopy), a CHNS (carbon, hydrogen, nitrogen, and sulfur) analyzer, and FT-IR (Fourier-transform infrared spectroscopy).
It was observed that the effluent contains high sulfur (S = 11.37±0.15%), organic compounds (C = 4±0.1%, H = 70.25±0.1%, N = 10±0.1%), heavy metals, and other dissolved solids (60 g/L). The organo-sulfur industrial effluent was then degraded by photocatalysis with the industrial mineralogical product TiO₂. In this study, the industrial effluent pH value (2.5 to 10) and catalyst concentration (50 to 150 mg) were varied, while the effluent concentration (0.5 Abs) and light exposure time (2 h) were kept constant. The best degradation, about 80% of the industrial effluent, was achieved at pH 5 with 150 mg of TiO₂. The FT-IR results and the CHNS analyzer confirmed that the sulfur and organic compounds were degraded.
Keywords: wastewater treatment, industrial mineralogical product TiO₂, photocatalysis, organo-sulfur industrial effluent
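Degradation efficiency in photocatalysis experiments like this is typically computed from the absorbance of the effluent before and after irradiation. A minimal sketch, with illustrative readings chosen to reproduce the ~80% figure (not the study's actual measurements):

```python
def degradation_efficiency(a_initial, a_final):
    """Photocatalytic degradation (%) from initial and final absorbance readings."""
    return (a_initial - a_final) / a_initial * 100.0

# Illustrative: effluent at 0.5 Abs reduced to 0.1 Abs after 2 h of exposure
print(degradation_efficiency(0.5, 0.1))  # 80.0
```

This assumes absorbance is proportional to pollutant concentration (Beer-Lambert regime), which is why the constant 0.5 Abs starting concentration matters for comparing runs.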
Procedia PDF Downloads 116
775 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. This algorithm is non-parametric: the transformation from the high- to the low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped to the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with each newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
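The reuse idea can be illustrated with a simplified initialization step: seed each new high-dimensional point at the embedded position of its nearest neighbor in the support set, then let a standard t-SNE optimization refine from there so cluster positions stay coherent across runs. This sketch (with invented toy data) covers only that seeding step, not the paper's two-cost objective:

```python
import numpy as np

def init_from_support(X_new, X_support, Y_support):
    """Seed new points at their nearest support point's embedding coordinates.

    X_new, X_support: high-dimensional points; Y_support: the support's 2-D embedding.
    """
    # Pairwise squared distances in the high-dimensional space
    d2 = ((X_new[:, None, :] - X_support[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)          # index of closest support point per new point
    return Y_support[nearest].copy()     # initial low-dimensional coordinates

# Toy data: two well-separated support clusters with a known 2-D embedding
X_support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
Y_support = np.array([[-10.0, 0.0], [-9.0, 0.0], [10.0, 0.0], [11.0, 0.0]])
X_new = np.array([[0.02, 0.0], [5.08, 5.0]])   # one new point near each cluster

print(init_from_support(X_new, X_support, Y_support))
```

Each new point lands inside the cluster it belongs to, so the subsequent optimization only has to adjust local spacing rather than rediscover the global layout.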
Procedia PDF Downloads 144