Search results for: coordinate measuring machines (CMM)
315 A Prediction Method of Pollutants Distribution Pattern: Flare Motion Using Computational Fluid Dynamics (CFD) Fluent Model with Weather Research Forecast Input Model during Transition Season
Authors: Benedictus Asriparusa, Lathifah Al Hakimi, Aulia Husada
Abstract:
A large amount of energy is wasted through the release of natural gas associated with the oil industry. This release disturbs the environment, particularly the state of the atmosphere on a global scale, and contributes to global warming. This research presents an overview of the methods employed by researchers at PT. Chevron Pacific Indonesia in the Minas area to determine a new prediction method for measuring and reducing gas flaring and its emissions. The method emphasizes advanced research involving analytical studies, numerical studies, modeling, and computer simulations, amongst other techniques. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations. This burning occurs at the end of a flare stack or boom. The combustion process releases emissions of greenhouse and pollutant gases such as NO2, CO2, and SO2. This condition affects the chemical composition of the air and the environment around the boundary layer, mainly during the transition season. The transition season in Indonesia is very difficult to predict, owing to the meeting of two different air masses. This research focused on the 2013 transition season. A simulation to establish the new pattern of pollutant distribution is needed. This paper outlines trends in gas flaring modeling and current developments to predict the dominant variables in pollutant distribution. A Fluent model is used to simulate the distribution of pollutant gases coming out of the stack, whereas WRF model output is used to overcome the limitations of the analysis of meteorological data and atmospheric conditions in the study area. Based on the model runs, the most influential factor was wind speed. The goal of the simulation is to predict the new pollutant distribution pattern based on the times at which the fastest and slowest winds occur. According to the simulation results, the fastest wind (end of March) moves pollutants in a horizontal direction, while the slowest wind (middle of May) moves pollutants vertically. Besides, a flare stack designed in compliance with EPA Oil and Gas Facility Stack Parameters keeps pollutant concentrations below the NAAQS (National Ambient Air Quality Standards) thresholds.
Keywords: flare motion, new prediction, pollutants distribution, transition season, WRF model
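As a rough illustration of why wind speed dominates dispersion in such simulations, the minimal Gaussian plume sketch below (not the authors' Fluent/WRF workflow; the source rate, stack height, and dispersion coefficients are hypothetical) shows ground-level concentration dropping as mean wind speed u rises, with slow winds leaving higher concentrations near the stack:

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h_stack, a=0.08, b=0.06):
    """Ground-level concentration of a continuous point source.

    q: emission rate (g/s), u: mean wind speed (m/s),
    h_stack: effective stack height (m). Dispersion widths
    sigma_y, sigma_z grow with downwind distance x (simple power laws).
    """
    sigma_y = a * x**0.9
    sigma_z = b * x**0.85
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h_stack)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h_stack)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Fast wind spreads the plume downwind (horizontal transport dominates);
# slow wind lets concentrations build and loft near the stack.
for u in (1.0, 8.0):  # hypothetical slow vs fast wind speeds
    c = gaussian_plume(q=100.0, u=u, x=500.0, y=0.0, z=2.0, h_stack=30.0)
    print(f"u = {u} m/s -> C = {c:.2e} g/m^3")
```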
314 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach
Authors: Nina Ponikvar, Katja Zajc Kejžar
Abstract:
While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied based on one of two mainstream approaches, i.e. either the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approaches in the literature. Furthermore, it is often unclear on which criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to the valuation of the economic effects of cultural heritage at the macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts (though it requires more data and analysis), the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted, while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complements. Consequently, a direct comparison of the estimated impacts is not possible and should not be made, due to the different scope. To illustrate the difference in the impact assessment of cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added, and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on the value chain approach capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the "narrow" heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia
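The quoted figures translate into simple multiplier arithmetic; the sketch below works through it using the numbers from the abstract (the Type I/Type II labels are standard input-output terminology, not the authors' wording):

```python
# Figures quoted in the abstract (Slovenia, 2015-2022).
direct = 1.00          # EUR of output in the cultural heritage sector
indirect = 0.14        # multiplier approach: supply-chain effects
induced = 0.44         # multiplier approach: household-spending effects

type_i = (direct + indirect) / direct             # = 1.14
type_ii = (direct + indirect + induced) / direct  # = 1.58
print(f"Type I multiplier:  {type_i:.2f}")
print(f"Type II multiplier: {type_ii:.2f}")

# Value chain approach: each EUR of heritage sales is linked to ~6 EUR
# of sales elsewhere, so the implied footprint per EUR is far larger.
value_chain_sales = direct * 6.0
print(f"Linked sales in other sectors: {value_chain_sales:.2f} EUR")
```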
313 Corrosion Protective Coatings in Machines Design
Authors: Cristina Diaz, Lucia Perez, Simone Visigalli, Giuseppe Di Florio, Gonzalo Fuentes, Roberto Canziani, Paolo Gronchi
Abstract:
During the last 50 years, the selection of materials has been one of the main decisions in machine design for different industrial applications, owing to the numerous physical, chemical, mechanical, and technological factors to consider. Corrosion effects are related to all of these factors and impact the life cycle, machine incidents, and lifetime costs of the machine. Corrosion is the deterioration or destruction of metals due to reaction with the environment, generally a wet one. In the food industry, dewatering industry, concrete industry, paper industry, etc., corrosion is an unsolved problem, and it might introduce alterations of some characteristics in the final product. Nowadays, depending on the selected metal, its surface, and its working environment, corrosion prevention might mean a change of metal, use of a coating, cathodic protection, use of corrosion inhibitors, etc. In the vast majority of situations, the solution is the use of a corrosion-resistant material or, failing that, a corrosion protection coating. Stainless steels are widely used in machine design because of their strength, ease of cleaning, corrosion resistance, and appearance. Typical grades are AISI 304 and AISI 316. However, their benefits do not fit every application, and some coatings are required against corrosion, such as paints, galvanizing, chrome plating, SiO₂, TiO₂ or ZrO₂ coatings, etc. In this work, coatings based on a bilayer made of Titanium-Tantalum, Titanium-Niobium, Titanium-Hafnium, or Titanium-Zirconium have been developed using a magnetron sputtering configuration by PVD (Physical Vapor Deposition) technology, in order to reduce corrosion effects on AISI 304 and AISI 316 and to compare them with Titanium alloy substrates. Ti alloy displays exceptional corrosion resistance to chlorides, sour and oxidising acidic media, and seawater. In this study, Ti alloy (99%) has been included for comparison with coated AISI 304 and AISI 316 stainless steel. Corrosion tests were conducted with a Gamry instrument under the ASTM G5-94 standard, using different electrolytes such as tomato salsa, wine, olive oil, wet compost, a mix of sand and concrete with water, and NaCl, to test corrosion in different industrial environments. In general, in all tested environments, the results showed an improvement in the corrosion resistance of all coated AISI 304 and AISI 316 stainless steel substrates compared to uncoated stainless steel substrates. Comparing these results with corrosion studies on the uncoated Ti alloy substrate, it was observed that in some cases coated stainless steel substrates reached current densities similar to the uncoated Ti alloy. Moreover, Titanium-Zirconium and Titanium-Tantalum coatings showed, for all substrates in the study including coated Ti alloy substrates, a reduction in current density of more than two orders of magnitude. In conclusion, Ti-Ta, Ti-Zr, Ti-Nb, and Ti-Hf coatings have been developed to improve the corrosion resistance of AISI 304 and AISI 316 materials. After corrosion tests in several industrial environments, the substrates showed improvements in corrosion resistance. Similar processes have been carried out on Ti alloy (99%) substrates. Coated AISI 304 and AISI 316 stainless steel might reach surface corrosion protection similar to that of uncoated Ti alloy (99%). Moreover, coated Ti alloy (99%) might increase its corrosion resistance using these coatings.
Keywords: coatings, corrosion, PVD, stainless steel
312 Integrating Data Mining with Case-Based Reasoning for Diagnosing Sorghum Anthracnose
Authors: Mariamawit T. Belete
Abstract:
Cereal production and marketing are the means of livelihood for millions of households in Ethiopia. However, cereal production is constrained by technical and socio-economic factors; among the technical factors, cereal crop diseases are major contributors to the low yield. The aim of this research is to develop an integrated data mining and knowledge-based system for sorghum anthracnose disease diagnosis that assists agriculture experts and development agents in making timely decisions. The anthracnose diagnosis system gathers information from the Melkassa Agricultural Research Center and scores anthracnose on a severity scale. The empirical research is designed for data exploration, modeling, and confirmatory procedures for testing hypotheses and prediction, in order to draw a sound conclusion. WEKA (Waikato Environment for Knowledge Analysis) was employed for the modeling. Knowledge-based systems encompass a variety of approaches depending on the knowledge representation method; case-based reasoning (CBR) is one of the popular approaches used in knowledge-based systems. CBR is a problem-solving strategy that uses previous cases to solve new problems. The system utilizes hidden knowledge extracted by employing clustering algorithms, specifically K-means clustering, from a sampled anthracnose dataset. Clustered cases with centroid values are mapped to jCOLIBRI, and the integrator application is then created using NetBeans with JDK 8.0.2. The important parts of a case-based reasoning model include retrieval, the similarity-measuring stage; reuse, which allows the domain expert to adapt the retrieved case solution to suit the current case; revision, to test the solution; and retention, to store the confirmed solution in the case base for future use. The system was evaluated for both performance and user acceptance. For testing the prototype, seven test cases were used. Experimental results show that the system achieves average precision and recall values of 70% and 83%, respectively. User acceptance testing was also performed involving five domain experts, and an average acceptance of 83% was achieved. Although the results of this study are promising, further investigation of hybrid approaches such as rule-based reasoning, and of a pictorial retrieval process, is recommended.
Keywords: sorghum anthracnose, data mining, case based reasoning, integration
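A minimal sketch of the retrieval idea described here, using scikit-learn's K-means in place of the WEKA/jCOLIBRI stack: cluster the case base, then search only the query's cluster for the most similar past case. The feature names and values are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# hypothetical case base: [leaf_lesion_pct, humidity_pct, temperature_C]
cases = rng.uniform([0, 40, 15], [80, 95, 35], size=(60, 3))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(cases)

def retrieve(query, cases, kmeans):
    """Retrieve step: restrict search to the query's cluster, then
    return the nearest stored case (simple Euclidean similarity)."""
    label = kmeans.predict(query.reshape(1, -1))[0]
    members = cases[kmeans.labels_ == label]
    dists = np.linalg.norm(members - query, axis=1)
    return members[np.argmin(dists)]

new_case = np.array([35.0, 80.0, 28.0])
print("Most similar past case:", retrieve(new_case, cases, kmeans))
```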
311 Effects of Foreign-Language Learning on Bilinguals' Production in Both Their Languages
Authors: Natalia Kartushina
Abstract:
Foreign (second) language (L2) learning is highly promoted in modern society. Students are encouraged to study abroad (SA) to achieve the most effective learning outcomes. However, L2 learning has side effects on native language (L1) production, as L1 sounds might show a drift from the L1 norms towards those of the L2, even after a short period of L2 learning. L1 assimilatory drift has been attributed to a strong perceptual association between similar L1 and L2 sounds in the mind of L2 learners; thus, a change in the production of an L2 target leads to a change in the production of the related L1 sound. However, nowadays it is quite common for speakers to acquire two languages from birth, as is the case, for example, in many bilingual communities (e.g., Basque and Spanish in the Basque Country). Yet it remains to be established how FL learning affects native production in individuals who have two native languages, i.e., in simultaneous or very early bilinguals. Does FL learning (here a third language, L3) affect both of the bilinguals' languages or only one? What factors determine which of the bilinguals' languages is more susceptible to change? The current study examines the effects of L3 (English) learning on the production of vowels in the two native languages of simultaneous Spanish-Basque bilingual adolescents enrolled in the Erasmus SA English program. Ten bilingual speakers read five Spanish and five Basque consonant-vowel-consonant-vowel words two months before their SA and the day after their arrival back in Spain. Each word contained the target vowel in the stressed syllable and was repeated five times. Acoustic analyses measuring vowel openness (F1) and backness (F2) were performed. Two possible outcomes were considered. First, we predicted that L3 learning would affect the production of only one language, namely the language used the most in contact with English during the SA period. This prediction stems from the results of recent studies showing that early bilinguals have separate phonological systems for each of their languages, and that late FL learners (as is the case for our participants), who tend to use their L1 in language-mixing contexts, have more L2-accented L1 speech. The second possibility was that L3 learning would affect both of the bilinguals' languages, in line with studies showing that bilinguals' L1 and L2 phonologies interact and constantly co-influence each other. The results revealed that speakers who used both languages equally often (balanced users) showed an F1 drift in both languages toward the F1 of the English vowel space. Unbalanced speakers, however, showed a drift only in the less used language. The results are discussed in light of recent studies suggesting that the amount of language use is a strong predictor of authenticity in speech production, with less language use leading to more foreign-accented speech and, eventually, to language attrition.
Keywords: language-contact, multilingualism, phonetic drift, bilinguals' production
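The drift measure implied here reduces to comparing mean formant values before and after the stay abroad; a toy sketch with invented F1/F2 values (real work would extract formants per token, e.g. in Praat, and normalize per speaker):

```python
import numpy as np

# hypothetical [F1, F2] measurements (Hz) for one vowel, one speaker
pre  = np.array([[640, 1210], [655, 1190], [648, 1225]])
post = np.array([[615, 1230], [628, 1218], [620, 1241]])

drift = post.mean(axis=0) - pre.mean(axis=0)
print(f"Mean F1 drift: {drift[0]:+.1f} Hz, mean F2 drift: {drift[1]:+.1f} Hz")
print(f"Euclidean drift in the vowel space: {np.linalg.norm(drift):.1f} Hz")
```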
310 Hansen Solubility Parameter from Surface Measurements
Authors: Neveen AlQasas, Daniel Johnson
Abstract:
Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost-effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane separation properties. Understanding the mechanism of the initiation phase of biofouling is key to eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials for each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two, and therefore the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film. The results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents used are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out via the training of a neural network model. The trained neural network model has three inputs: the contact angle value, and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent used and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
Keywords: surface characterization, Hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements
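A minimal sketch of the mapping described: the standard Hansen distance formula plus a small neural regressor from (contact angle, surface tension, viscosity) to Ra. The training data are synthetic stand-ins generated from a hypothetical smooth relation, not the study's fourteen-solvent measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def hsp_distance(p1, p2):
    """Standard Hansen distance: Ra^2 = 4*(dD)^2 + (dP)^2 + (dH)^2,
    with p = (dispersion, polar, hydrogen-bonding) parameters."""
    dd, dp, dh = np.asarray(p1) - np.asarray(p2)
    return np.sqrt(4 * dd**2 + dp**2 + dh**2)

rng = np.random.default_rng(1)
# columns: contact angle (deg), surface tension (mN/m), viscosity (mPa*s)
X = rng.uniform([10, 20, 0.3], [110, 73, 20], size=(200, 3))
# hypothetical smooth relation standing in for measured Ra targets
y = 0.08 * X[:, 0] - 0.04 * X[:, 1] + 0.15 * X[:, 2] + rng.normal(0, 0.3, 200)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                     random_state=0).fit(X, y)
print("Predicted Ra for a new solvent/film pair:",
      round(model.predict([[72.0, 48.0, 1.2]])[0], 2))
print("Reference Ra from known HSPs:",
      round(hsp_distance((18.0, 9.3, 7.7), (15.8, 8.8, 19.4)), 2))
```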
309 Complaint Management Mechanism: A Workplace Solution in Development Sector of Bangladesh
Authors: Nusrat Zabeen Islam
Abstract:
Partnerships between local non-government organizations (NGOs) and international development organizations have become an important feature of the development sector in Bangladesh. It is an important challenge for international development organizations to work with local NGOs that have proper HR practices. Local NGOs often lack a quality working environment, and this affects employees' work experiences and overall performance at the individual, partnership, and organizational levels. Many local development organizations, due to their size and scope, do not have a human resource (HR) unit. Inadequate human resource policies, skills, and leadership, and the lack of an effective strategy, are now a common scenario in the non-government organization sector of Bangladesh. As a result, corruption, nepotism, fraud, the risk of political contributions in the office/workspace, sexual/gender-based abuse, and insecurity occur in the workplaces of the development sector. A Complaint Management Mechanism (CMM) in human resource management could be one way to improve human resource competence in these organizations. The responsibility of the Complaint Management Unit (CMU) of an international development organization is to keep the workplace free of maltreatment and discrimination. Information on the impact of the CMM was collected through a case study of an international organization and some of its partner national organizations in Bangladesh that are engaged in different projects/programs. In this mechanism, international development organizations collect complaints from beneficiaries/staff through the complaint management unit, investigate by segregating the type and mode of the complaint, and find a solution to improve the situation within a very short period. A complaint management committee is formed jointly by HR and management personnel. Concerned focal points collect complaints and share them with the CM unit. Through investigation, review of findings, replies back to the CM unit, and implementation of resolutions, this mechanism establishes a successful bridge of communication and feedback among beneficiaries, staff, and upper management. The overall results of applying the complaint management mechanism indicate that it can significantly increase the accountability and transparency of the workplace and workforce in development organizations. Evaluations based on outcomes, measuring indicators such as productivity, satisfaction, retention, gender equity, and proper judgment, will guide organizations in building a healthy workforce and will also clearly articulate the return on investment and justify any need for further funding.
Keywords: human resource management in NGOs, challenges in human resource, workplace environment, complaint management mechanism
308 Predicting Personality and Psychological Distress Using Natural Language Processing
Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi
Abstract:
Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop the interview (semi-structured) and open-ended questions for FFM-based personality assessments, specifically designed with experts in the field of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions and self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions, measures of the FFM personality constructs, and psychological distress, to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from the interview (semi-structured) and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals' FFM of personality and level of psychological distress (phase 2).
Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality
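A minimal sketch, not the authors' pipeline, of the core prediction step: TF-IDF features from interview text feeding a regressor for one trait score. The texts and scores below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

texts = [
    "I love meeting new people and trying new things.",
    "I prefer quiet evenings and carefully planned routines.",
    "Deadlines stress me out and I worry about small details.",
    "I stay calm under pressure and enjoy leading groups.",
]
extraversion = [4.5, 2.0, 2.5, 4.0]  # hypothetical self-report scores

# unigram+bigram TF-IDF features, ridge regression on the trait score
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(texts, extraversion)
print(model.predict(["I get energized by big social gatherings."]))
```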
307 From Over-Tourism to Over-Mobility: Understanding the Mobility of Incoming City Users in Barcelona
Authors: José Antonio Donaire Benito, Konstantina Zerva
Abstract:
Historically, cities have been places where people from many nations and cultures have met and settled together, while population flows and density have had a significant impact on urban dynamics. Cities' high density of social, cultural, and business offerings, everyday services, and other amenities not intended for tourists draws not only tourists but a wide range of city users as well. With the coordination of city rhythms and the porosity of the community, city users order and frame their urban experience. On the one hand, recent literature focuses on the shift in the urban tourist experience from 'having' a holiday through 'doing' activities to 'becoming' a local by experiencing a part of daily life. On the other hand, there is a debate on the 'touristification of everyday life', whereby middle- and upper-class urban dwellers display attitudes and behaviors that are virtually indistinguishable from those of visitors. With the advent of globalization and technological advances, modern society has undergone a radical transformation that has altered mobility patterns within it, blurring the boundaries between tourism and everyday life, work and leisure, and 'hosts' and 'guests'. Additionally, the presence of other 'temporary city' users, such as commuters, digital nomads, second home owners, and migrants, contributes to a more complex transformation of tourist cities. Moving away from the traditional clear distinction between 'hosts' and 'guests', which represents a more static view of tourism, and towards a more liquid narrative of mobility, academics in tourism development are embracing the New Mobilities Paradigm. The latter moves beyond the static structures of the modern world and focuses on the ways in which social entities are made up of people, machines, information, and images in a moving system. In light of this fluid interdependence between tourists and guests, a question arises as to whether overtourism, which is considered the underlying cause of citizens' perception of a lower urban quality of life, is a fair representation of perceived mobility excessiveness, place consumption disruptiveness, and resident displacement. As a representative example of an overtourism narrative, Barcelona was chosen as the study area, focusing on incoming city users to reflect in depth the variety of people who contribute to mobility flows beyond the resident population. Several statistical datasets were analyzed to determine the number of national and international visitors present in Barcelona at some point during the day in 2019. Specifically, tracking data gathered from mobile phone users within the city were combined with tourist surveys, urban mobility data, zenithal data capture, and information about the city's attractions. The paper shows that tourists are only a small part of the different incoming city users that enter Barcelona daily; excursionists, commuters, and metropolitan users also contribute to a high mobility flow. Based on the diversity of incoming city users and their place consumption, it seems that the city's urban experience is more likely to be impacted by over-mobility than by over-tourism.
Keywords: city users, density, new mobilities paradigm, over-tourism
306 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, machine learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data is widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc delivery of triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. In this section, we compare the performance of randomly generated regression trees and support vector machines, and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
Keywords: artificial neural networks, data-mining, machine learning, medical informatics
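A minimal Keras sketch of the kind of recurrent classifier compared above: an embedding layer, an LSTM (swappable for a GRU), and a sigmoid output for the frequent-attender label. The vocabulary size, sequence length, and placeholder data are assumptions, not the authors' configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, max_len = 5000, 200
model = keras.Sequential([
    layers.Embedding(vocab_size, 64),      # integer-encoded note tokens
    layers.LSTM(64),                       # swap for layers.GRU(64) to compare
    layers.Dense(1, activation="sigmoid")  # P(frequent attender)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# placeholder data standing in for padded, integer-encoded clinical notes
x = np.random.randint(1, vocab_size, size=(32, max_len))
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:2], verbose=0))
```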
305 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter
Authors: Zhu Xinxin, Wang Hui, Yang Kai
Abstract:
Heat flux is one of the most important test parameters in ground thermal protection tests. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and the disadvantages of the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreased lateral heat transfer, and the slug with its hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. Meanwhile, a simulation model of the slug calorimeter was built, and heat flux values in different temperature-rise time periods were calculated with it. The results show that extracting the temperature rise rate data as early as possible results in a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed with the simulation model, and the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct comparison calibration correction method was proposed based on heat flux calibration alone, and a numerical calculation correction method was proposed based on both the heat flux calibration and the simulation model, once the contact resistance between the slug and the insulating sleeve had been solved for. The simulation and test results show that the two methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability accuracy of the improved slug calorimeter is less than 3%, the deviation of measured values between different slug calorimeters is less than 3% in the same flow field, and the deviation between the slug calorimeter and a Gardon gauge is less than 4% in the same flow field.
Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter
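The energy balance underlying the method reduces to q = (m·c/A)·dT/dt for the slug; a small worked sketch with hypothetical copper-slug values, fitting the early temperature rise where lateral losses are smallest:

```python
import numpy as np

m = 0.010       # slug mass, kg (hypothetical copper slug)
c = 385.0       # specific heat of copper, J/(kg*K)
area = 7.85e-5  # exposed face of a 10 mm diameter slug, m^2

t = np.linspace(0.0, 2.0, 50)                                 # s
temp = 300.0 + 120.0 * t + np.random.normal(0, 0.5, t.size)   # K, noisy rise

# fit dT/dt over the early, near-linear portion of the rise
slope = np.polyfit(t, temp, 1)[0]
q = m * c * slope / area
print(f"dT/dt = {slope:.1f} K/s -> q = {q / 1e6:.2f} MW/m^2")
```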
304 An Econometric Analysis of the Flat Tax Revolution
Authors: Wayne Tarrant, Ethan Petersen
Abstract:
The concept of a flat tax goes back at least to the Biblical tithe. A progressive income tax was first vociferously espoused in a small but famous pamphlet in 1848 (although England had an emergency progressive tax for war costs prior to this). Within a few years, many countries had adopted the progressive structure. The flat tax was retained only in some small countries and British protectorates until Mart Laar was elected Prime Minister of Estonia in 1992. Since Estonia's adoption of the flat tax in 1993, many other formerly Communist countries have likewise abandoned progressive income taxes. Economists had expectations of what would happen when a flat tax was enacted, but very little work has been done on actually measuring the effect. With a testbed of 21 countries in this region that currently have a flat tax, much comparison is possible. Several countries have retained progressive taxes, giving an opportunity for contrast. There are also the cases of the Czech Republic and Slovakia, which adopted and later abandoned the flat tax. Further, with over 20 years' worth of economic history in some flat tax countries, we can begin to do serious longitudinal study. In this paper, we consider many economic variables to determine whether there are statistically significant differences before and after the adoption of a flat tax. We consider unemployment rates, tax receipts, GDP growth, Gini coefficients, and market data where available. Comparisons are made through the use of event studies and time series methods. The results are mixed, but we draw statistically significant conclusions about some effects. We also look at the different implementations of the flat tax. In some countries, income and corporate tax rates are equal; in others, the income tax has a lower rate, while in still others the reverse is true. Each of these sends a clear message to individuals and corporations, and the policy makers surely have a desired effect in mind. We group countries with similar policies, try to determine whether the intended effect actually occurred, and report the results. This is a work in progress, and we welcome suggestions of variables to consider. Further, some of the data from before the fall of the Iron Curtain are suspect: since there are new ruling regimes in these countries, the methods of computing different statistical measures have changed. Although we first look at the raw data as reported, we also attempt to account for these changes. We show which data seem to be fictional and suggest ways to infer the needed statistics from other data; these results are reported beside those on the reported data. Since there is debate about taxation structure, this paper can help inform policymakers of the changes the flat tax has caused in other countries. The work shows some strengths and weaknesses of a flat tax structure. Moreover, it provides the beginnings of a scientific analysis of the flat tax in practice, rather than discussion based solely upon theory and conjecture.
Keywords: flat tax, financial markets, GDP, unemployment rate, Gini coefficient
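The simplest version of the before/after comparison described is a two-sample test on a macro series around the adoption date; a hedged sketch with invented growth figures (a full event study would add control countries and more robust inference):

```python
import numpy as np
from scipy import stats

# hypothetical annual GDP growth (%) before and after flat tax adoption
growth_pre  = np.array([1.8, 2.1, 0.9, 1.5, 2.4])
growth_post = np.array([3.2, 4.0, 2.8, 3.6, 3.1])

t_stat, p_value = stats.ttest_ind(growth_pre, growth_post, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value indicates a statistically significant shift, though
# causality still needs careful identification (event windows, controls
# for concurrent reforms, data-quality adjustments, etc.).
```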
303 Assessing the Competence of Oral Surgery Trainees: A Systematic Review
Authors: Chana Pavneet
Abstract:
Background: In recent years, a greater emphasis has been placed on competency-based education (CBE) programmes in dentistry. Undergraduate and postgraduate curriculums have been reformed to reflect these changes, and adopting a CBE approach has been shown to benefit trainees and to place an emphasis on continuous lifelong learning. The literature is vast; however, very little work has been done specifically on the assessment of competence in dentistry, and even less so in oral surgery, with the majority of the literature tending towards opinion pieces. Some small-scale studies have been undertaken in this area researching assessment tools which can be used to assess competence in oral surgery. However, there is a lack of general consensus on the preferable assessment methods. The aim of this review is to identify the available assessment methods and their usefulness. Methods: Electronic databases (Medline, Embase, and the Cochrane Database of Systematic Reviews) were searched. PRISMA guidelines were followed to identify relevant papers. Abstracts of studies were reviewed and, if they met the inclusion criteria, included in the review. Papers were reviewed against the Critical Appraisal Skills Programme (CASP) checklist and the Medical Education Research Study Quality Instrument (MERSQI) to assess their quality and identify any bias in a systematic manner. The validity and reliability of each assessment method or tool were assessed. Results: A number of assessment methods were identified, including self-assessment, peer assessment, and direct observation of skills by someone senior. Senior assessment tended to be the preferred method, followed by self-assessment and, finally, peer assessment. The level of training was shown to affect the preferred assessment method, with one study finding peer assessment more useful for postgraduate trainees than for undergraduate trainees. Numerous tools for assessment were identified, including a checklist scale and a global rating scale. Both had their strengths and weaknesses, but the evidence was more favourable for global rating scales in terms of reliability, applicability to more clinical situations, and ease of use for examiners. Studies also looked into trainees' opinions on assessment tools. Logbooks were not found to be significant in measuring the competence of trainees. Conclusion: There is limited literature exploring the methods and tools which assess the competence of oral surgery trainees. Current evidence shows that the most favourable assessment method and tool may differ depending on the stage of training. More research is required in this area to streamline assessment methods and tools.
Keywords: competence, oral surgery, assessment, trainees, education
302 An Efficient Process Analysis and Control Method for Tire Mixing Operation
Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park
Abstract:
Since the tire production process is very complicated, company-wide management of it is difficult, requiring considerable capital and labor. Productivity should therefore be enhanced and kept competitive by developing and applying effective production plans. Among the major tire manufacturing processes, consisting of mixing, component preparation, building, and curing, the mixing process is an essential and important step, because the main component of the tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role required for the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required may differ between operations due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this kind of feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate which evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend our current work by considering other performance measures, such as weighted makespan or processing times affected by aging or learning effects.
Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process
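A minimal PSO sketch (a random-key encoding, simpler than the paper's scheme) for a toy single-machine sequencing problem with sequence-dependent setup times: sorting each particle's keys decodes a job order, and the swarm minimizes the resulting makespan. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
proc = np.array([4.0, 3.0, 6.0, 2.0, 5.0])   # processing times per job
setup = rng.uniform(0.5, 2.0, size=(5, 5))   # SDST matrix setup[a, b]

def makespan(keys):
    seq = np.argsort(keys)                   # decode particle -> job sequence
    total = proc[seq[0]]
    for a, b in zip(seq, seq[1:]):
        total += setup[a, b] + proc[b]
    return total

n, dim, iters = 20, 5, 100
x = rng.random((n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([makespan(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([makespan(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("Best sequence:", np.argsort(gbest),
      "makespan:", round(makespan(gbest), 2))
```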
301 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is an original medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI's development, spurred by the confluence of factors including the rise of big data, advancements in computing infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI's versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is a growing concern about its ethical and legal aspects. Frequently, the two are mixed and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, abstract, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behavior, but they are different in scope and nature. The juridical analysis is based on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step towards the definition of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the main legal framework for the regulation of AI.
Keywords: artificial intelligence, ethics & human rights issues, laws, international laws
300 Bioleaching of Metals Contained in Spent Catalysts by Acidithiobacillus thiooxidans DSM 26636
Authors: Andrea M. Rivas-Castillo, Marlenne Gómez-Ramirez, Isela Rodríguez-Pozos, Norma G. Rojas-Avelizapa
Abstract:
Spent catalysts are considered hazardous residues of major concern, mainly due to the simultaneous presence of several metals in elevated concentrations. Although hydrometallurgical, pyrometallurgical, and chelating agent methods are available to remove and recover some of the metals contained in spent catalysts, these procedures generate potentially hazardous wastes and emit harmful gases. Thus, biotechnological treatments are currently gaining importance as a way to avoid the negative impacts of chemical technologies. To this end, diverse microorganisms have been used to assess the removal of metals from spent catalysts, comprising bacteria, archaea, and fungi, whose resistance and metal uptake capabilities differ depending on the microorganism tested. Acidophilic sulfur-oxidizing bacteria, namely Acidithiobacillus thiooxidans and Acidithiobacillus ferrooxidans, have been used to investigate the biotreatment and extraction of valuable metals from spent catalysts, as they are able to produce leaching agents such as sulfuric acid and sulfur oxidation intermediates. In the present work, the ability of A. thiooxidans DSM 26636 to bioleach the metals contained in five different spent catalysts was assessed by growing the culture in modified Starkey mineral medium (with elemental sulfur at 1%, w/v) and 1% (w/v) pulp density of each residue for up to 21 days at 30 °C and 150 rpm. Sulfur-oxidizing activity was periodically evaluated by determining the sulfate concentration in the supernatants according to the NMX-K-436-1977 method. The production of sulfuric acid was assessed in the supernatants as well, by a titration procedure using 0.5 M NaOH with bromothymol blue as the acid-base indicator, and by measuring pH with a digital potentiometer. In addition, Inductively Coupled Plasma - Optical Emission Spectrometry was used to analyze metal removal from the five different spent catalysts by A. thiooxidans DSM 26636. The results show that, as could be expected, sulfuric acid production is directly related to the decrease in pH and to the highest metal removal efficiencies. It was observed that Al and Fe are recurrently removed from refinery spent catalysts regardless of their origin and previous usage, although these removals may vary from 9.5 ± 2.2 to 439 ± 3.9 mg/kg for Al, and from 7.13 ± 0.31 to 368.4 ± 47.8 mg/kg for Fe, depending on the spent catalyst tested. Besides, bioleaching of metals like Mg, Ni, and Si was also obtained from automotive spent catalysts, with removals of up to 66 ± 2.2, 6.2 ± 0.07, and 100 ± 2.4, respectively. Hence, the data presented here exhibit the potential of A. thiooxidans DSM 26636 for the simultaneous bioleaching of metals contained in spent catalysts of diverse provenance.
Keywords: bioleaching, metal removal, spent catalysts, Acidithiobacillus thiooxidans
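The acid titration mentioned reduces to simple stoichiometry (H2SO4 + 2 NaOH → Na2SO4 + 2 H2O, so moles of H2SO4 = moles of NaOH / 2); a small worked sketch with hypothetical endpoint volumes:

```python
def h2so4_concentration(v_naoh_ml, c_naoh_m, v_sample_ml):
    """Return H2SO4 molarity from the NaOH volume at the endpoint."""
    mol_naoh = (v_naoh_ml / 1000.0) * c_naoh_m
    mol_h2so4 = mol_naoh / 2.0            # diprotic acid, 2:1 NaOH:H2SO4
    return mol_h2so4 / (v_sample_ml / 1000.0)

# hypothetical endpoint: 8.4 mL of 0.5 M NaOH for a 10 mL supernatant
c = h2so4_concentration(v_naoh_ml=8.4, c_naoh_m=0.5, v_sample_ml=10.0)
print(f"[H2SO4] = {c:.3f} M")  # -> 0.210 M for this example
```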
299 Environmental Resilience in Sustainability Outcomes of Spatial-Economic Model Structure on the Topology of Construction Ecology
Authors: Moustafa Osman Mohammed
Abstract:
The resilience and sustainability of construction ecology are essential to the world's socio-economic development. Environmental resilience is crucial in relating construction ecology to the topology of the spatial-economic model. The sustainability of the spatial-economic model gives attention to green business that complies with the Earth's system of naturally exchanging patterns among ecosystems. Systems ecology has consistent and periodic cycles that preserve the flow of energy and materials in the Earth's system. When the model structure influences the communication of internal and external features in system networks, it postulates the valence of the first-level spatial outcomes (i.e., project compatibility success). These instrumentalities are dependent on second-level outcomes (i.e., participant security satisfaction). The model outcomes are based on measuring database efficiency from 2015 to 2025. The model topology reflects the state of the art in value-orientation impact and corresponds to the complexity of sustainability issues (e.g., building a consistent database necessary to approach the spatial structure; constructing the spatial-economic model; developing a set of sustainability indicators associated with the model; allowing quantification of social, economic, and environmental impact; using value-orientation as a set of important sustainability policy measures), and demonstrates environmental resilience. The model manages and develops schemes from the perspective of multiple pollutant sources through input-output criteria. These criteria evaluate the effects of external insertions by conducting Monte Carlo simulations and analyses using matrices in a unique spatial structure. Balanced 'equilibrium patterns', such as collective biosphere features, have a composite index of distributed feedback flows. These feedback flows have a dynamic structure with physical and chemical properties allowing the gradual prolongation of incremental patterns. While these structures argue from systems ecology, static loads are not decisive from an artistic/architectural perspective. The popularity of system resilience in systems structures related to ecology has not been achieved without generating confusion and vagueness. However, this topic is relevant for forecasting future scenarios in which industrial regions will need to keep dealing with the impact of relative environmental deviations. The model attempts to unify the analytic and analogical structure of urban environments, using database software to integrate sustainability outcomes in a process based on the systems topology of construction ecology.
Keywords: system ecology, construction ecology, industrial ecology, spatial-economic model, systems topology
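As a hedged illustration of the input-output plus Monte Carlo machinery the abstract invokes, the sketch below perturbs a small Leontief technical-coefficients matrix and propagates a final-demand vector to total output; the three-sector matrix, demand vector, and 5% uncertainty level are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
A = np.array([[0.10, 0.20, 0.05],   # technical coefficients (3 sectors)
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
demand = np.array([100.0, 80.0, 60.0])

totals = []
for _ in range(10_000):
    A_draw = A * rng.normal(1.0, 0.05, size=A.shape)  # 5% input uncertainty
    x = np.linalg.solve(np.eye(3) - A_draw, demand)   # Leontief inverse
    totals.append(x)

totals = np.array(totals)
print("Mean total output per sector:", totals.mean(axis=0).round(1))
print("95% interval, sector 1:",
      np.percentile(totals[:, 0], [2.5, 97.5]).round(1))
```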
298 Effects of Vertimax Training on Agility, Quickness and Acceleration
Authors: Dede Basturk, Metin Kaya, Halil Taskin, Nurtekin Erkmen
Abstract:
In total, 29 recreationally active students studying at the Selçuk University Physical Training and Sports School participated voluntarily in this study, which was carried out in order to examine the effects of Vertimax training on agility, quickness, and acceleration. Three groups took part in the study: a Vertimax training group (N=10), an ordinary training group (N=10), and a control group (N=9). Measurements were carried out in the performance laboratory of the Selçuk University Physical Training and Sports School. A training program for quickness and agility was followed three days a week (Monday, Wednesday, Friday) for 8 weeks. Subjects in the Vertimax training group and the ordinary training group participated in the quickness and agility training program; measurements were applied as a pre-test and a post-test. Subjects in the Vertimax training group followed the training program with the Vertimax device, and subjects in the ordinary training group followed the same program without the device. The control group, although recreationally active, did not participate in any program. Four gate photocells were used for the measurements, with distances measured in meters; a single gate photocell and cones were used for the agility test. Measurements started with 15 minutes of warm-up, after which the acceleration, quickness, and agility tests were applied. Three measurements were made for each subject at 3-minute rest intervals, and the best of the three was recorded. The 5 m quickness pre-test value of the Vertimax training group was 1.11 ± 0.06 s and the post-test value was 1.06 ± 0.08 s (P<0.05); the pre-test value of the ordinary training group was 1.11 ± 0.06 s and the post-test value was 1.07 ± 0.07 s (P<0.05); the pre-test value of the control group was 1.13 ± 0.08 s and the post-test value was 1.10 ± 0.07 s (P>0.05). For the 10 m acceleration values, the pre-test value of the Vertimax training group was 1.82 ± 0.07 s and the post-test value was 1.76 ± 0.83 s (P>0.05); the pre-test value of the ordinary training group was 1.83 ± 0.05 s and the post-test value was 1.78 ± 0.08 s (P>0.05); the pre-test value of the control group was 1.87 ± 0.11 s and the post-test value was 1.83 ± 0.09 s (P>0.05). For the 15 m acceleration values, the pre-test value of the Vertimax training group was 2.52 ± 0.10 s and the post-test value was 2.46 ± 0.11 s (P>0.05); the pre-test value of the ordinary training group was 2.52 ± 0.05 s and the post-test value was 2.48 ± 0.06 s (P>0.05); the pre-test value of the control group was 2.55 ± 0.11 s and the post-test value was 2.54 ± 0.08 s (P>0.05). For agility performance, the pre-test value of the Vertimax training group was 9.50 ± 0.47 s and the post-test value was 9.66 ± 0.47 s (P>0.05); the pre-test value of the ordinary training group was 9.99 ± 0.05 s and the post-test value was 9.86 ± 0.40 s (P>0.05); the pre-test value of the control group was 9.74 ± 0.45 s and the post-test value was 9.92 ± 0.49 s (P>0.05). Consequently, it was observed that quickness developed significantly following the 8-week training programs, whereas acceleration and agility did not develop significantly. It is suggested that the training practices used in this study may be useful for situations requiring sudden moves and the attainment of maximum speed in a short time, but that they do not contribute to the development of moves requiring sudden changes of direction. It is also suggested that productivity and innovation may be achieved in training by using various Vertimax practices.
Keywords: vertimax, training, quickness, agility, acceleration
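The P-values quoted come from simple pre/post significance testing; a minimal paired t-test sketch with invented individual times chosen to match the reported 5 m quickness group means (1.11 s pre, 1.06 s post):

```python
import numpy as np
from scipy import stats

pre  = np.array([1.05, 1.10, 1.18, 1.08, 1.12, 1.15, 1.04, 1.13, 1.11, 1.14])
post = np.array([1.00, 1.06, 1.12, 1.02, 1.08, 1.10, 0.99, 1.09, 1.05, 1.09])

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```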
Procedia PDF Downloads 493297 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection
Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng
Abstract:
Nitrite (NO2-) ion is used prevalently as a preservative in processed meat, and elevated levels of nitrite are also found in edible bird's nests (EBNs). Consumption of NO2- ion at levels above the health-based risk may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO2- ion detection; however, it requires careful control of the pH of each reaction step and is susceptible to interference from strong oxidants and dyes. Other traditional methods rely on laboratory-scale instruments such as GC-MS, HPLC and ion chromatography, which cannot give a real-time response. There is therefore a significant need for devices capable of measuring nitrite concentration in situ, rapidly and without reagents, sample pretreatment or extraction steps. Herein, we constructed a microspheres-based microbial optode for visual quantitation of NO2- ion. Raoultella planticola, a bacterium expressing the NAD(P)H nitrite reductase (NiR) enzyme, was successfully isolated by microbial techniques from EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. When the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)+, NO2- ion is reduced to ammonium hydroxide, and a colour change of the immobilized Nile Blue chromoionophore from blue to pink is perceived as a result of the deprotonation reaction raising the local pH in the microspheres membrane. The microspheres-based optosensor was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify NO2- ion down to 0.1 ppm and had a linear response up to 400 ppm. Owing to the large surface-area-to-mass ratio of the acrylic microspheres, solid-state diffusional mass transfer of the substrate to the bio-recognition phase is efficient, and a steady-state response is achieved in as little as 5 min. The proposed optical microbial biosensor requires no sample pre-treatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence it is suitable for measurements in contaminated samples. Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric
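To make the quantitation step concrete, here is a minimal sketch of a linear calibration for an optical sensor of this kind over the reported 0.1-400 ppm range; the response values and the single straight-line fit are illustrative assumptions, not the authors' calibration data.

```python
# Hedged illustration: fitting a linear calibration curve of sensor response
# vs. nitrite concentration, then inverting it to quantify an unknown.
# All values below are invented.
import numpy as np

conc = np.array([0.1, 1, 10, 50, 100, 200, 400])               # ppm NO2- standards
response = np.array([0.02, 0.05, 0.21, 0.94, 1.83, 3.7, 7.4])  # relative reflectance change

slope, intercept = np.polyfit(conc, response, 1)

def quantify(signal):
    """Invert the calibration line to estimate concentration (ppm)."""
    return (signal - intercept) / slope

print(f"calibration: response = {slope:.4f} * ppm + {intercept:.4f}")
print(f"unknown with signal 1.0 -> {quantify(1.0):.1f} ppm")
```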
Procedia PDF Downloads 448296 Bed Evolution under One-Episode Flushing in a Truck Sewer in Paris, France
Authors: Gashin Shahsavari, Gilles Arnaud-Fassetta, Alberto Campisano, Roberto Bertilotti, Fabien Riou
Abstract:
Sewer deposits have been identified as a major cause of dysfunction in combined sewer systems, inducing negative consequences such as poor hydraulic conveyance, environmental damage and risks to workers' health. To overcome the problems of sedimentation, flushing is considered the most practical and cost-effective way to minimize sediment impacts and prevent such problems. Flushing, by inducing turbulent wave effects, can modify the bed form depending on the hydraulic properties and geometrical characteristics of the conduit. So far, the dynamics of bed-load during high-flow events in combined sewer systems, a complex environment, are not well understood, mostly for lack of measuring devices able to work correctly in the 'hostile' conditions of a combined sewer. In this regard, a one-episode flush released from an opening gate valve with weir function was carried out in a trunk sewer in Paris to understand its cleansing efficiency on the sediments (thickness: 0-30 cm). During more than 1 h of flushing, a maximum flow rate of 4.1 m3/s and a maximum water level of 2.1 m were recorded 5 m downstream of the gate. This paper aims to evaluate the efficiency of this type of gate over around 1.1 km (from 50 m upstream to 1050 m downstream of the gate) by (i) determining the bed grain-size distribution and sediment evolution along the sewer channel, as well as their organic matter content, and (ii) identifying sections that exhibit the greatest changes in texture after the flush. For the first, two series of samples were taken along the sewer and analyzed in the laboratory, one before the flush and one after, at the same points along the channel; a non-intrusive sampling instrument was used to extract the sediments finer than fine gravel. Comparison of the sediment texture after the flush operation with the initial state revealed the zones most modified by the flush, with respect to the sewer invert slope and the hydraulic parameters, in the zone up to 400 m from the gate. At this distance, despite the widening of the grain-size range, D50 (median grain size) varies between 0.6 mm and 1.1 mm before flushing, against 0.8 mm to 10 mm after. Overall, considering the channel invert slope, the results indicate that grains finer than sand (< 2 mm) were preferentially transported downstream along roughly 400 m from the gate: on average 69% before against 38% after the flush, with greater dispersion of the grain-size distributions. Furthermore, a strong effect of channel-bed irregularities on the evolution of the bed material was observed after the flush. Keywords: bed-load evolution, combined sewer systems, flushing efficiency, sediments transport
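For readers unfamiliar with the D50 metric used above, the following sketch shows how a median grain size is interpolated from a cumulative grain-size distribution; the sieve data are invented and log-linear interpolation is an assumed (though common) convention.

```python
# Sketch (with made-up sieve data) of how a median grain size D50 is read
# off a cumulative grain-size distribution, as used to compare bed texture
# before and after the flush.
import numpy as np

sieve_mm = np.array([0.063, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])  # sieve openings
pct_passing = np.array([5, 12, 25, 41, 58, 74, 88, 100])            # cumulative % finer

def d_percentile(p, sizes, passing):
    """Interpolate the grain size at which p% of the sample is finer.
    Interpolation is done in log(size), the usual convention."""
    return np.exp(np.interp(p, passing, np.log(sizes)))

print(f"D50 = {d_percentile(50, sieve_mm, pct_passing):.2f} mm")
```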
Procedia PDF Downloads 403295 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles
Authors: Paulo Sérgio Ribeiro de Araújo Bogas
Abstract:
Although several studies have assumed (at least implicitly) that learners' approaches to learning develop into deeper approaches over the course of higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing using cases based on real brand challenges, business simulation, and customer projects. Both primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary ones, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, within the same context, students' responses fall into three types: students who reinforce their initial deep approach, students who maintain their initial deep-approach level, and others who shift from an emphasis on the deep approach to one closer to the surface approach. This typology did not always confirm studies reported in the literature, namely on whether the initial level of deep processing influences surface processing and vice versa. The results point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since statistically significant differences were revealed in the change of deep/surface approach scores and in the experiential level. In the case of the real challenges, the categories of 'attribution of meaning to what is studied' and the possibility of 'contact with an aspirational context' for the students' future profession stand out. Within this category, the dimensions of autonomy that will be required of students were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of decisions taken and an awareness of oneself; on the other, the additional effort this practice required of some students. Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing
Procedia PDF Downloads 83294 Functional Neurocognitive Imaging (fNCI): A Diagnostic Tool for Assessing Concussion Neuromarker Abnormalities and Treating Post-Concussion Syndrome in Mild Traumatic Brain Injury Patients
Authors: Parker Murray, Marci Johnson, Tyson S. Burnham, Alina K. Fong, Mark D. Allen, Bruce McIff
Abstract:
Purpose: Pathological dysregulation of neurovascular coupling (NVC) caused by mild traumatic brain injury (mTBI) is the predominant source of chronic post-concussion syndrome (PCS) symptomology. fNCI can localize dysregulation in NVC by measuring blood-oxygen-level-dependent (BOLD) signaling during the performance of fMRI-adapted neuropsychological evaluations. With fNCI, 57 brain areas consistently affected by concussion were identified as PCS neural markers and validated on large samples of concussion patients and healthy controls. These neuromarkers provide the basis for a computation of PCS severity referred to as the Severity Index Score (SIS). The SIS has proven valuable in making pre-treatment decisions, monitoring treatment efficiency, and assessing the long-term stability of outcomes. Methods and Materials: After being scanned while performing various cognitive tasks, 476 concussed patients received an SIS score based on the neural dysregulation of the 57 previously identified brain regions. These scans provide an objective measurement of attentional, subcortical, visual-processing, language-processing, and executive-functioning abilities, which were used as biomarkers for post-concussive neural dysregulation. Initial SIS scores were used to develop individualized therapy incorporating cognitive, occupational, and neuromuscular modalities, and to establish pre-treatment benchmarks against which post-treatment improvement was measured. Results: Changes in SIS were calculated as percent change from pre- to post-treatment. Patients showed a mean improvement of 76.5 percent (σ = 23.3), and 75.7 percent of patients showed at least 60 percent improvement. Longitudinal reassessment of 24 of the patients, an average of 7.6 months post-treatment, shows that the SIS improvement is maintained and extended, with an average improvement of 90.6 percent relative to the original scan. Conclusions: fNCI provides a reliable measurement of NVC, allowing identification of concussion pathology. Additionally, fNCI-derived SIS scores direct tailored therapy to restore NVC, subsequently resolving chronic PCS resulting from mTBI. Keywords: concussion, functional magnetic resonance imaging (fMRI), neurovascular coupling (NVC), post-concussion syndrome (PCS)
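A minimal sketch of the percent-change summary reported in the Results, computed on simulated SIS scores; the scoring scale and the direction "lower SIS = less dysregulation" are assumptions for illustration, not the study's definition.

```python
# Hedged illustration: per-patient percent improvement from pre- to
# post-treatment SIS, the cohort mean, and the share of patients improving
# by at least 60%. All scores below are simulated.
import numpy as np

rng = np.random.default_rng(0)
sis_pre = rng.uniform(40, 100, size=476)               # hypothetical pre-treatment SIS
sis_post = sis_pre * rng.uniform(0.05, 0.6, size=476)  # assumed: lower SIS = less dysregulation

improvement = 100 * (sis_pre - sis_post) / sis_pre     # percent change
print(f"mean improvement: {improvement.mean():.1f}% (SD {improvement.std(ddof=1):.1f})")
print(f"share >= 60% improvement: {100 * np.mean(improvement >= 60):.1f}%")
```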
Procedia PDF Downloads 355293 Patient Satisfaction Measurement Using Face-Q for Non-Incisional Double-Eyelid Blepharoplasty with Modified Single-Knot Continuous Buried Suture Technique
Authors: Kwei Huan Liw, Sashi B. Darshan
Abstract:
Background: Double-eyelid surgery has become one of the most sought-after aesthetic procedures among Asians. Many surgeons perform surgical blepharoplasty as well as various methods of non-incisional blepharoplasty. Face-Q is a validated instrument for measuring patient satisfaction with facial aesthetic procedures. Here we have analyzed the overall eye satisfaction score, the appraisal of upper eyelids score, and the adverse effects on eyes score. Methods: 274 patients (548 eyes), aged between 18 and 40 years, were recruited from 2015 to 2018. Each patient underwent non-incisional double-eyelid blepharoplasty using a single-knotted continuous buried suture. Three to five stab incisions were made, depending on the upper eyelid size. A needle loaded with 7-0 nylon is passed from the lateral-most wound through the dermis and the conjunctiva in alternating fashion into the remaining stab wounds. The suture is then tunneled back laterally in the deeper dermis and knotted securely with the suture end, and the knot is buried within the orbicularis oculi muscle. Each patient was required to complete the Face-Q questionnaire before the procedure and 2 weeks post-procedure. Results are described as the percentage of the maximum achievable score. Patients were reviewed after 12 to 18 months to assess the long-term outcome. Results: The overall eye satisfaction score demonstrated a high level of post-operative satisfaction (97.85%), compared with 27.32% pre-operatively. The appraisal of upper eyelids score showed drastic improvement in perception post-operatively (95.31% versus 21.44% pre-operatively). The adverse effects on eyes score showed a very low post-operative complication rate (0.4%). Long-term follow-up identified 6 cases that had developed asymmetrical folds; only 1 patient agreed to revision surgery, while the other 5 remained satisfied with the outcome and declined revision. None of the cases had loosening of knots. Conclusion: The modified single-knot continuous buried suture technique is a simple, non-invasive method of creating aesthetically pleasing non-surgical double eyelids with long-term effect. Proper patient selection is crucial, and good surgical technique is required to achieve a desirable outcome. Keywords: blepharoplasty, double-eyelid, face-Q, non-incisional
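As a hedged illustration of reporting results "as the percentage of the maximum achievable score", the sketch below rescales a summed Likert subscale to 0-100%; the 10-item count and 1-4 response range are assumptions for illustration, not the actual Face-Q structure.

```python
# Illustrative rescaling of a questionnaire subscale to percent-of-maximum.
def percent_of_max(item_scores, item_min=1, item_max=4):
    """Rescale a summed Likert score to 0-100% of the achievable range."""
    raw = sum(item_scores)
    n = len(item_scores)
    return 100 * (raw - n * item_min) / (n * (item_max - item_min))

# A hypothetical 10-item "satisfaction with eyes" response sheet:
print(f"{percent_of_max([4, 4, 3, 4, 4, 4, 4, 4, 4, 4]):.1f}% of maximum")
```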
Procedia PDF Downloads 120292 Applying Sociometer Theory to Different Age Groups and Groups Differences regarding State Self-Esteem Sensitivity
Authors: Yun Yu Stephanie Law
Abstract:
Sociometer theory is well tested among young adults in Western populations; however, limited research exists for other age groups, such as adolescence and middle adulthood, in Asian populations. Thus, one main purpose of this study is to verify the validity of sociometer theory across age groups among Asians. Specifically, we hypothesized that an increase in one's perceived social rejection is associated with a decrease in his/her state self-esteem in all age groups in an Asian population, and in our first study we expected this association to hold across adolescents, young adults and middle adults. In this way, we can verify the validity of sociometer theory across different age groups as well as its significance in an Asian population. Furthermore, participants who received rejection concerning the 'mate role' also received negative feedback regarding their current or future capacity to be a good mate. Results suggested that participants' state self-esteem sensitivity to mating-capacity rejection is higher than to friend-capacity rejection, i.e., a greater drop in state self-esteem when receiving mating-capacity feedback than when receiving friend-capacity feedback. These results, however, apply only to young adults. Thus, the main purpose of study two is to test state self-esteem sensitivity towards social rejection in different domains among the three age groups; we hypothesized group differences among the three age groups regarding state self-esteem sensitivity. Research question 1: the association between perceived social rejection and decreased state self-esteem is applicable across different age groups in an Asian population. Research question 2: there are significant group differences among the three age groups regarding state self-esteem sensitivity. Methods: 300 subjects are divided into three age groups (adolescents, young adults and middle adults), with 100 subjects in each group. Two questionnaires were used to test this fundamental concept. Subjects were first asked to rate themselves on a questionnaire measuring their current state self-esteem in order to obtain baseline measurements for later comparison. To avoid demand characteristics, unrelated filler tasks such as word matching were also given after the first test. Results: a positive correlation between scores on questionnaire 1 and questionnaire 2 was found in all age groups. Conclusion: state self-esteem decreases in response to both imagined social rejection (study 1) and experienced social rejection (study 2), and the magnitude of the decrease varies with the domain of social rejection. Implications: a better understanding of self-esteem development across age groups may offer insights for education systems and policies on teaching approaches and learning methods for different age groups. Keywords: state self-esteem, social rejection, stage theory, self-feelings
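The concluding correlation could be computed as a Pearson coefficient between the two questionnaire scores within one age group; the sketch below shows this on simulated data (the score ranges and effect size are invented, not the study's results).

```python
# Illustrative sketch: Pearson correlation between questionnaire 1
# (perceived rejection) and questionnaire 2 (drop in state self-esteem)
# within one simulated 100-subject age group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rejection = rng.normal(50, 10, size=100)                   # questionnaire 1 scores
esteem_drop = 0.6 * rejection + rng.normal(0, 8, size=100) # questionnaire 2 scores

r, p = stats.pearsonr(rejection, esteem_drop)
print(f"r = {r:.2f}, p = {p:.3g}")  # positive r supports the sociometer prediction
```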
Procedia PDF Downloads 230291 Collaboration with Governmental Stakeholders in Positioning Reputation on Value
Authors: Zeynep Genel
Abstract:
The concept of reputation in corporate development has come to the fore as one of the most frequently discussed topics in recent years. Many organizations that invest worldwide make an effort to adapt themselves to the topics within the scope of this concept and to promote the organization's name through the values that may become prominent. Stakeholder groups are considered the most important actors determining reputation: even where the effect of stakeholders is not evaluated as a direct factor, the indirect effects of their perceptions on ultimate reputation are very strong. It is foreseen that the parallelism between the projected reputation and the perceived reputation, which is established as a result of the communication experiences perceived by the stakeholders, has an important effect on achieving these objectives. In assessing the efficiency of these efforts, the opinions of stakeholders are widely utilized. In other words, the projected reputation, in which the positive and/or negative reflections of corporate communication play an effective role, is measured through how the stakeholders perceptively position the organization. From this perspective, the interaction and cooperation of corporate communication professionals with different stakeholder groups during reputation-positioning efforts are thought to play a significant role in achieving the targeted reputation and in sustaining this value. Governmental stakeholders, who communicate intensively with mass stakeholder groups, are among an organization's most influential stakeholders, chiefly because organizations of which governmental stakeholders hold a positive perception inspire more confidence in mass stakeholders. At this point, organizations carrying out joint projects with governmental stakeholders in line with a sustainable communication approach come to the fore as organizations with strong reputations, whereas the reputation of organizations that fall behind in this regard, or that cannot establish such efficiency, is thought to be perceived as weak. Similarly, social responsibility campaigns in which governmental stakeholders are involved, and which play an efficient role in strengthening reputation, are thought to draw more attention. From this perspective, this study discusses the role and effect of governmental stakeholders on reputation positioning. In line with this objective, it aims to reveal the perspectives of seven governmental stakeholders on cooperation in reputation positioning. The sample group representing the governmental stakeholders is examined in light of the results of in-depth interviews with executives of different ministries. This study, which aims to express the importance of stakeholder participation in corporate reputation positioning, especially in Turkey, and the effective role of governmental stakeholders in strong reputation, may provide a new perspective on measuring corporate reputation, as well as an important source for studies in both academic and practical domains. Keywords: collaborative communications, reputation management, stakeholder engagement, ultimate reputation
Procedia PDF Downloads 225290 Laser Paint Stripping on Large Zones on AA 2024 Based Substrates
Authors: Selen Unaldi, Emmanuel Richaud, Matthieu Gervais, Laurent Berthe
Abstract:
Aircraft are painted with several layers to guarantee protection from external attack. For aluminum AA 2024-T3, the structural metal of the airframe, a protective primer is applied to ensure corrosion protection, and the top coat is applied over this layer for aesthetic purposes. Over the lifetime of an aircraft, top-coat stripping plays an essential role and should be performed on average every four years. However, since conventional stripping processes create hazardous waste and require long hours of labor, alternative methods have been investigated. Among them, laser stripping appears to be one of the most promising techniques, not only for the reasons mentioned above but also for its controllable and monitorable character. Applying the laser beam from the coated side provides stripping, but the depth of the process must be well controlled to prevent damage to the substrate and the anticorrosion primer; thermal effects on the painted layers must also be taken into account. As an alternative, we developed a process that uses shock-wave propagation to achieve stripping by mechanical effects, with the beam applied from the substrate (back face) side of the samples. Laser stripping was applied to thickness-specified samples with a thickness deviation of 10-20%. First, the stripping threshold is determined as a function of power density, i.e. the power density at which the top coat first flies off. After obtaining threshold values, the same power densities were applied to specimens to create large stripped zones with a spot overlap of 10-40%. Layer characteristics were determined in terms of physicochemical properties and thickness range, both before and after laser stripping, to validate the health of the substrate material and the coating properties. Substrate health is monitored by measuring the roughness of the laser-impacted zones and by free-surface-energy tests (both before and after laser stripping); the Hugoniot elastic limit (HEL) is also determined from VISAR diagnostics on AA 2024-T3 substrates (from the back-face surface deformations). In addition, the coating properties are investigated in terms of adhesion levels and anticorrosion properties (neutral salt spray test). The influence of the polyurethane top-coat thickness is studied in order to verify the laser-stripping process window for industrial aircraft applications. Keywords: aircraft coatings, laser stripping, laser adhesion tests, epoxy, polyurethane
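Two of the process quantities mentioned above, power density and spot overlap, can be sketched with back-of-the-envelope arithmetic; all numerical values below are assumptions for illustration, not the study's laser parameters.

```python
# Hedged sketch of two working quantities in laser stripping: the power
# density delivered to a circular spot, and the centre-to-centre spot
# spacing implied by a chosen fractional overlap.
import math

def power_density_gw_cm2(energy_j, pulse_ns, spot_mm):
    """Power density (GW/cm^2) for a circular spot of diameter spot_mm."""
    area_cm2 = math.pi * (spot_mm / 20.0) ** 2   # mm diameter -> cm radius
    return energy_j / (pulse_ns * 1e-9) / area_cm2 / 1e9

def spot_spacing_mm(spot_mm, overlap):
    """Centre-to-centre spacing for a given fractional spot overlap."""
    return spot_mm * (1.0 - overlap)

print(f"{power_density_gw_cm2(energy_j=10, pulse_ns=10, spot_mm=4):.2f} GW/cm^2")
print(f"spacing at 30% overlap: {spot_spacing_mm(4.0, 0.30):.1f} mm")
```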
Procedia PDF Downloads 78289 Human’s Sensitive Reactions during Different Geomagnetic Activity: An Experimental Study in Natural and Simulated Conditions
Authors: Ketevan Janashia, Tamar Tsibadze, Levan Tvildiani, Nikoloz Invia, Elguja Kubaneishvili, Vasili Kukhianidze, George Ramishvili
Abstract:
This study considers the possible effects of geomagnetic activity (GMA) on humans on Earth through experiments on specific sensitive reactions, both under natural conditions during different levels of GMA and with different GMA simulated in the laboratory. Autonomic nervous system (ANS) responses to different GMA were measured via heart rate variability (HRV) indices and a stress index (SI) and compared with the K-index of GMA. The results indicate an intensification of the sympathetic part of the ANS, a stress reaction of the human organism, when it is exposed to high levels of GMA in natural as well as simulated conditions. Aim: We tested the hypothesis that a disturbed geomagnetic field can affect the human ANS, causing specific sensitive stress reactions that depend on the initial type of ANS regulation. Methods: The study focuses on the effects of different GMA on the ANS, comparing the HRV indices and stress index (SI) of n = 78 healthy male volunteers aged 18-24. Experiments were performed under natural conditions on days of low (K = 1-3) and high (K = 5-7) GMA, as well as in the laboratory, where different GMA were simulated using a device for geomagnetic storm (GMS) compensation and simulation. Results: Compared with days of low GMA (K = 1-3), the initial HRV values shifted towards intensification of the sympathetic part (SP) of the ANS during days of GMSs (K = 5-7), with statistically significant p-values: HR (heart rate, p = 0.001), SDNN (standard deviation of all normal-to-normal intervals, p = 0.0001), RMSSD (the square root of the arithmetic mean of the sum of the squares of differences between adjacent NN intervals, p = 0.0001). Compared with the GMS compensation mode (K = 0, B = 0-5 nT), the ANS balance shifted during exposure to simulated GMSs with intensities in the range of natural GMSs (K = 7, B = 200 nT). However, the dynamics of the variation differed depending on the initial state of ANS regulation. For the initially balanced regulation type (HR > 80), significant intensification of the SP was observed, with p-values: HR (p = 0.0001), SDNN (p = 0.047), RMSSD (p = 0.28), LF/HF (p = 0.03), SI (p = 0.02); for the initial parasympathetic regulation type (HR < 80), an insignificant shift towards intensification of the parasympathetic part (PP) was observed. Conclusions: The results indicate an intensification of the SP as a stress reaction of the human organism when exposed to high levels of GMA, in both natural and simulated conditions. Keywords: autonomic nervous system, device of magneto compensation/simulation, geomagnetic storms, heart rate variability
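A minimal sketch, not the study's software, of the time-domain HRV indices compared above (HR, SDNN, RMSSD), computed from a toy series of NN intervals in milliseconds:

```python
# Time-domain HRV indices from NN (normal beat-to-beat) intervals.
import numpy as np

nn_ms = np.array([820, 810, 835, 790, 805, 815, 798, 842, 801, 819])  # toy data

hr = 60000.0 / nn_ms.mean()                    # mean heart rate, bpm
sdnn = nn_ms.std(ddof=1)                       # SD of all NN intervals
rmssd = np.sqrt(np.mean(np.diff(nn_ms) ** 2))  # RMS of successive differences

print(f"HR = {hr:.1f} bpm, SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```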
Procedia PDF Downloads 141288 Finite Element Analysis of Human Tarsals, Meta Tarsals and Phalanges for Predicting probable location of Fractures
Authors: Irfan Anjum Manarvi, Fawzi Aljassir
Abstract:
Human bones have long been a keen area of research in biomechanical engineering. Medical professionals as well as engineering academics and researchers have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately to develop processes and tools either to prevent fracture or to repair damage. The literature shows that mechanical researchers have conducted a variety of tests for hardness, deformation, and strain-field measurement to arrive at their findings; however, the accuracy of these results was considered insufficient owing to limitations of the tools and test equipment and the difficulty of obtaining human bones. Further studies were proposed, first to overcome inaccuracies in measurement methods, testing machines, and experimental errors, and then to carry out experimental or theoretical studies. Finite element analysis is a technique originally developed for the aerospace industry because of the complexity of its designs and materials, but over time it has found applications in many other industries thanks to its accuracy and the flexibility with which materials and types of loading can be applied theoretically to the object under study. In the past few decades it has also begun to be applied in biomechanical engineering; however, the work done on the tarsals, metatarsals and phalanges using this technique is very limited. The present research has therefore focused on applying this technique to these critical bones of the human body. The technique requires a 3-dimensional geometric computer model of the object to be analyzed. In this research, a 3D laser scanner was used to obtain accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot and build computer geometric models from them. These were imported into finite element analysis software, and a length-refinement process was carried out before analysis to ensure the computer models were true representatives of the actual bones. Each bone was then analyzed individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive and tensile loads and their combination. Deformations along the various axes were collected, and the stress and strain distributions were examined to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of them could fail earlier, which is presented in this research. The results of this investigation could be used for further experimental studies by academics, researchers and industrial engineers in the development of foot-protection devices or tools for surgical operations and recovery treatment of these bones. Researchers could build on these models to analyze a complete human foot through finite element analysis under various loading conditions such as walking, marching, running, and landing after a jump. Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis
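To illustrate the finite element workflow in its simplest form, the sketch below solves a 1D bar under axial compression: element stiffness assembly, a fixed-end constraint, and recovery of displacement and stress. Real tarsal models are 3D meshes from laser scans; the material and geometry values here are invented for illustration.

```python
# Toy 1D finite-element model: a bar under axial compression.
import numpy as np

E = 17e9          # Young's modulus (Pa), order of cortical bone (assumed)
A = 1e-4          # cross-sectional area (m^2), assumed constant
L = 0.05          # bar length (m)
n_el = 10         # number of elements
load = -500.0     # compressive load at the free end (N)

le = L / n_el
n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
ke = (E * A / le) * np.array([[1, -1], [-1, 1]])  # element stiffness matrix

for e in range(n_el):                              # assemble global stiffness
    K[e:e + 2, e:e + 2] += ke

f = np.zeros(n_nodes)
f[-1] = load

# Fix node 0 (built-in support) and solve K u = f on the free nodes.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

strain = np.diff(u) / le
stress = E * strain
print(f"tip displacement: {u[-1]*1e6:.2f} um, element stress: {stress[0]/1e6:.2f} MPa")
```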
Procedia PDF Downloads 329287 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The presented material concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, here called the 'hot part.' The second is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators); its accepted designation is the 'cold part.' Designing the exhaust system from the acoustic point of view, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the 'cold part' with high accuracy in a given frequency range, but only on condition that the input parameters are specified accurately, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source, i.e., the engine together with the 'hot part.' Obtaining these data is a difficult problem: high temperatures, high exhaust-gas velocities (turbulent flows), and high sound-pressure levels (a non-linear regime) prevent calculated results from being applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a 'hot part' on the basis of a set of computational and experimental studies. The presented methodology includes several parts. The first is a finite element simulation of the 'cold part' of the exhaust system (taking into account the radiation impedance of the outlet pipe into open space), yielding the input impedance of the 'cold part.' The second is a finite element simulation of the 'hot part' (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), yielding the input impedance of the 'hot part.' The third part of the technique consists of mathematical processing of the results according to the proposed formula for the convergent series that sums the multiple reflections of the acoustic signal between the 'cold part' and the 'hot part.' This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the pipe between the 'hot part' and the 'cold part' of the exhaust system, and subsequent processing of the test results by a well-known technique to separate the 'incident' and 'reflected' waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain the amplitude spectrum of the engine noise and its acoustic impedance. Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
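The 'incident'/'reflected' separation from two pressure sensors is the classic two-sensor wave decomposition; a single-frequency, no-flow sketch on synthetic sensor spectra is given below (the sound speed, frequency and sensor spacing are assumptions, not the test-stand values). The multiple-reflection series mentioned above converges like a geometric series when the product of the two reflection coefficients has magnitude below one.

```python
# Hedged sketch of two-sensor wave decomposition: given complex pressure
# spectra p1, p2 at two duct positions, solve for the incident (A) and
# reflected (B) plane-wave amplitudes. Sensor data here are synthetic.
import numpy as np

c = 560.0            # speed of sound in hot exhaust gas (m/s), assumed
f = 200.0            # analysis frequency (Hz)
k = 2 * np.pi * f / c
x1, x2 = 0.0, 0.10   # sensor positions along the duct (m), assumed

# Synthesize sensor spectra from known A, B to verify the inversion:
A_true, B_true = 1.0 + 0j, 0.4 * np.exp(1j * 0.8)
p1 = A_true * np.exp(-1j * k * x1) + B_true * np.exp(1j * k * x1)
p2 = A_true * np.exp(-1j * k * x2) + B_true * np.exp(1j * k * x2)

# Invert the 2x2 system for the incident and reflected amplitudes.
M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
              [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
A_est, B_est = np.linalg.solve(M, np.array([p1, p2]))

print(f"|A| = {abs(A_est):.3f}, |B| = {abs(B_est):.3f}, "
      f"reflection coefficient |B/A| = {abs(B_est / A_est):.3f}")
```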
Procedia PDF Downloads 59286 Defining and Measuring the Success of the Hospitality-Based Social Enterprise Ringelblum Café
Authors: Nitzan Winograd, Nada Kakabadse
Abstract:
This study examines whether the hospitality-based social enterprise Ringelblum Café is achieving its stated social goals: developing a sense of self-efficacy among the at-risk youth who work in the enterprise and raising their levels of recruitment to the Israel Defence Forces (IDF) and National Service (NS). Ringelblum Café was founded in 2009 in Be'er-Sheva to provide employment solutions for at-risk youth in the southern district of Israel. Each year, 10 at-risk young adults aged 16-18 are referred to the programme by various welfare agencies. The training programme lasts approximately a year and includes professional training in the art of cooking; each young adult is also supported by a social worker. This study is based on the participation of 31 youths who graduated from the Ringelblum Café training programme, recruited through a convenience sampling model with the assistance of the programme's social worker. The study is quantitative in approach. Data were collected by means of three separate self-report questionnaires: a personal information questionnaire collecting general demographic data; a self-efficacy questionnaire consisting of two parts, general self-efficacy and social self-efficacy; and an IDF/NS recruitment questionnaire. The study uses the theory of change to examine whether at-risk youth in the Ringelblum Café programme are taught a profession with future prospects, develop a sense of self-efficacy, and improve their chances of recruitment into the IDF/NS. The study found that the graduates' sense of self-efficacy is relatively high. In addition, there was a significant difference between the importance these youths attached to recruitment to the IDF/NS before the programme and after its completion, indicating that the training programme had a positive effect on motivation for recruitment. The study also found that the percentage of recruits to the IDF/NS among graduates of the training programme was not significantly higher than the general recruitment figures in Israel. In conclusion, Ringelblum Café is making sound progress towards achieving its social goals regarding recruitment to the IDF/NS. Moreover, the graduates' sense of self-efficacy is relatively high, and it can be assumed that the training programme has a positive effect on these young adults, although no clear causal connection between the two was established. This study is among the few conducted in the field of hospitality-based social enterprises in Israel and can serve as a basis for further research. The results may also help improve the perception of at-risk youth and their contribution to society, and could increase awareness of the growing trend of social enterprises promoting social goals. Keywords: at-risk youth, Israel Defence Forces (IDF), national service, recruitment, self-efficacy, social enterprise
Procedia PDF Downloads 215