Search results for: degree of operating leverage (DOL)
4660 Efficacy of TiO₂ in the Removal of an Acid Dye by Photo Catalytic Degradation
Authors: Laila Mahtout, Kerami Ahmed, Rabhi Souhila
Abstract:
The objective of this work is to reduce the environmental impact of an acid dye (Eriochrome Black T) using photocatalytic degradation in the presence of a previously characterized semiconductor powder (TiO₂). A series of tests was carried out to demonstrate the influence of certain parameters, such as contact time, powder mass and solution pH, on the degree of dye degradation by titanium dioxide under UV rays. X-ray diffraction analysis of the powder showed that the anatase structure is predominant, while the rutile phase is represented by peaks of low intensity. Fourier Transform Infrared spectroscopy detected the bands corresponding to the anatase and rutile forms as well as other chemical functions. The photodegradation of Eriochrome Black T (NET) by TiO₂ gives encouraging results. The study of photodegradation at different dye concentrations showed that lower concentrations give better removal rates. The degree of degradation of the dye increases with increasing pH and reaches its maximum value at pH = 9. The TiO₂ dosage giving the highest removal rate is 1.2 g/l. Thermal treatment of TiO₂ with the addition of CuO at contents of 5%, 10%, and 15%, respectively, gives better degradation of the NET dye, with the highest percentage of elimination observed at a CuO content of 15%.
Keywords: acid dye, ultraviolet rays, degradation, photocatalysis
Procedia PDF Downloads 194
4659 Thermal Performance of Dual Flame Impinging Normally on to a Flat Surface
Authors: Satpal Singh, Subhash Chander
Abstract:
An experimental study has been conducted to evaluate the thermal performance of a CNG/air dual flame impinging normally on to a flat surface. The stability limits of the dual flame under both impinging and free conditions have been evaluated to select the experimental operating range. Dual flame shape and structure have been explained with direct flame images and a schematic diagram indicating the modification of the recirculation zone in the presence of the inner flame. Effects of various operating parameters, such as H/Dh, Re(o), Φ(o), and θ(o), on heat transfer characteristics have been discussed. The inner non-swirling flame Reynolds number (Re(i)) and equivalence ratio (Φ(i)) were kept constant. Heating patterns in the impingement region around the stagnation point were altered significantly with changes in H/Dh, Re(o), Φ(o), and θ(o). The axial flow of the inner flame was notably affected by an increase in Re(o). Heating was most favorable near stoichiometric conditions of the outer swirling flame. However, the effect of a change in swirl intensity (expressed in terms of θ(o)) on overall heat transfer efficiency was not as significant as that of the other parameters. It is inferred that the best performance (higher uniformity and efficiency) of the dual flame impinging on a flat surface can be achieved at a moderate separation distance (H/Dh of 2-3) and outer swirling flame Reynolds number (Re(o) of 7000-9000) under stoichiometric conditions.
Keywords: dual flame, heat transfer, impingement, swirling insert, transmission efficiency
Procedia PDF Downloads 298
4658 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for each condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlation. The model is developed based on steady-state data collected over the entire operating region of the engine and a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases is tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea level and altitude under different ambient conditions. Advantages such as high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration establish a platform where the model-based approach can be used for engine calibration and development. Moreover, this work aims at establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
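As an illustration of the ensemble-learning step described above, the following minimal sketch fits a combined gradient-boosting/random-forest regressor to synthetic in-cylinder features (burned-zone temperature, burned-zone O2, trapped fuel mass, EGR rate). The feature set, data and model configuration are assumptions for demonstration only, not the authors' GT-Power-coupled model.

```python
# Minimal sketch: semi-empirical NOx prediction from in-cylinder combustion
# parameters using an ensemble of regressors (scikit-learn).
# Feature names and data are hypothetical placeholders, not the authors' dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, VotingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical steady-state features: burned-zone temperature [K],
# burned-zone O2 mass fraction [-], trapped fuel mass [mg], EGR rate [%]
X = np.column_stack([
    rng.uniform(1800, 2600, n),   # T_burned
    rng.uniform(0.02, 0.12, n),   # O2_burned
    rng.uniform(10, 60, n),       # m_fuel_trapped
    rng.uniform(0, 40, n),        # EGR
])
# Synthetic target loosely mimicking thermal (Zeldovich-like) NOx sensitivity
y = np.exp(-38000 / X[:, 0]) * X[:, 1] ** 0.5 * X[:, 2] * (1 - X[:, 3] / 100) * 1e8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = VotingRegressor([
    ("gbr", GradientBoostingRegressor(random_state=0)),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
])
model.fit(X_tr, y_tr)
print("R^2 on held-out points:", round(r2_score(y_te, model.predict(X_te)), 3))
```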
Procedia PDF Downloads 114
4657 Developing Dynamic Capabilities: The Case of Western Subsidiaries in Emerging Market
Authors: O. A. Adeyemi, M. O. Idris, W. A. Oke, O. T. Olorode, S. O. Alayande, A. E. Adeoye
Abstract:
The purpose of this paper is to investigate the process of capability building at the subsidiary level and the challenges to that process. The relevance of external factors for capability development has not been explicitly addressed in empirical studies, whereas internal factors, acting as enablers, have been studied more extensively. With reference to external factors, subsidiaries are actively influenced by specific characteristics of the host country, implying a need to become fully immersed in local culture and practices. Specifically, in MNCs there has been a widespread trend in management practice to increase subsidiary autonomy, with subsidiary managers being encouraged to act entrepreneurially and to take advantage of host country specificity. As such, it could be proposed that: P1: The degree to which subsidiary management is connected to the host country will positively influence the capability development process. Dynamic capabilities reside to a large measure with the subsidiary management team but are impacted by the organizational processes, systems and structures that the MNC headquarters has designed to manage its business. At the subsidiary level, the weight of the subsidiary in the network, its initiative-taking and its profile building increase the supportive attention of the HQ and are relevant to the success of the capability-building process. Therefore, our second proposition is that: P2: Subsidiary role and HQ support are relevant elements in capability development at the subsidiary level. Design/Methodology/Approach: This study adopts a multiple case study approach, because case study research is relevant when addressing issues without known empirical evidence or with little developed prior theory. The key definitions and literature sources directly connected with the operations of western subsidiaries in emerging markets, such as China, are well established. A qualitative approach, i.e., case studies of three western subsidiaries, will be adopted. The companies have similar products, they have operations in China, and all of them are mature in their internationalization process. Interviews with key informants, annual reports, press releases, media materials, presentation material to customers and stakeholders, and other company documents will be used as data sources. Findings: Western subsidiaries in emerging markets operate in a way substantially different from those in the West. What are the conditions initiating the outsourcing of operations? The paper will discuss and present two relevant propositions guiding that process. Practical Implications: MNC headquarters should be aware of the potential for capability development at the subsidiary level. This increased awareness could induce consideration in headquarters about possible ways of encouraging such capability development and how to leverage these capabilities for better MNC headquarters and/or subsidiary performance. Originality/Value: The paper is expected to contribute to the theme of drivers of subsidiary performance with a focus on emerging markets. In particular, it will show how some external conditions could promote a capability-building process within subsidiaries.
Keywords: case studies, dynamic capability, emerging market, subsidiary
Procedia PDF Downloads 122
4656 Development of Doctoral Education in Armenia (1990 - 2023)
Authors: Atom Mkhitaryan, Astghik Avetisyan
Abstract:
We analyze the development of doctoral education in Armenia since 1990 and its management process. Education and training of highly qualified personnel are increasingly seen as a fundamental platform that ensures the development of the state. Reforming the national institute for doctoral studies (aspirantura) is aimed at improving the quality of human resources in science, optimizing research topics in accordance with the priority areas of development of science and technology, increasing publication and innovative activities, bringing national science and research closer to the world level, and achieving international recognition. We present the number of dissertations defended in Armenia during the last 30 years, the dynamics and the main trends of the development of the academic degree awarding system. We discuss the possible impact of reforming the system of training and certification of highly qualified personnel on the organization of third-level doctoral education (doctoral schools) and specialized/dissertation councils in Armenia. The results of a SWOT analysis of doctoral education and academic degree awarding processes in Armenia are shown. The article presents the main activities and projects aimed at using the advantages and strong points of the National Academy network in order to improve the quality of doctoral education and training. The paper explores the mechanisms of organizational, methodological and infrastructural support for research and innovation activities of doctoral students and young scientists. Approaches to the organization of strong networking between research institutes and foreign universities for the training and certification of highly qualified personnel are also suggested. The authors define the role of ISEC in the management of doctoral studies and the establishment of a competitive third-level education for the sphere of research and development in Armenia.
Keywords: doctoral studies, academic degree, PhD, certification, highly qualified personnel, dissertation, research and development, innovation, networking, management of doctoral school
Procedia PDF Downloads 63
4655 Democratic Information Behavior of Social Scientists and Policy Makers in India
Authors: Mallikarjun Vaddenkeri, Suresh Jange
Abstract:
This study reports the results of an information behaviour survey of faculty members and research scholars from various social science departments at universities (sample of 300) and of Members of the Legislative Assembly and Council (sample of 216) in Karnataka State, India. The results reveal that 29.3% and 20.3% of social scientists indicated medium and high levels of awareness of primary sources, with primary journals found at scale levels 5 and 9. The usage of primary journals by social scientists is 28% at level 4, and 24% of the respondents reported use of primary conference proceedings at level 5, a medium level of use. Similarly, the use of secondary information sources at scales 8 and 9 was reported particularly for dictionaries (31.0% and 5.0%), encyclopaedias (22.3% and 6.3%), indexing periodicals (7.0% and 15.3%) and abstracting periodicals (5.7% and 20.7%). For searching information from journal literature available on CD-ROM, keywords (43.7%) followed by keywords with logical operators (39.7%) were used to find the required information. Statistical inference reveals rejection of the null hypothesis 'there is no association between designation of the respondents and awareness of primary information resources'. Regarding the educational qualifications of legislative members, more than half possess a graduate degree as their academic qualification (57.4%), just 16.7% of the respondents possess a post-graduate degree, only 26.8% possess a degree in law, and just 1.8% possess a post-graduate degree in law. About 42.6% rated the importance of information required to discharge their duties and responsibilities as a policy maker at scale 8, as a scholar (27.8%) at scale 6, as a politician (64.8%) at scale 10 and as a councillor (51.9%) at scale 8. The most preferred information agencies/sources contacted very often for obtaining useful information were the Karnataka State Legislative Library, radio programmes, television programmes and newspapers. The methods adopted for obtaining needed information were quite often sending assistants to libraries to gather information (35.2%) and personally visiting the information source (64.8%). The null hypothesis 'there is no association between Members of Legislature and opinion on the usefulness of the resources of the Karnataka State Legislature Library' is accepted using the F ANOVA test. The study concludes with a note to revamp the existing library system in its structure, adopt the latest technologies, and educate and train social scientists and legislators in using these resources in the interest of academic work, government policies and decision making of the country.
Keywords: information use behaviour, government information, searching behaviour, policy makers
Procedia PDF Downloads 139
4654 Thermoelectric Cooler As A Heat Transfer Device For Thermal Conductivity Test
Authors: Abdul Murad Zainal Abidin, Azahar Mohd, Nor Idayu Arifin, Siti Nor Azila Khalid, Mohd Julzaha Zahari Mohamad Yusof
Abstract:
A thermoelectric cooler (TEC) is an electronic component that uses the Peltier effect to create a temperature difference by transferring heat between two electrical junctions of two different types of materials. A TEC can also be used for heating, by reversing the electric current flow, and even for power generation. A heat flow meter (HFM) is an instrument for measuring the thermal conductivity of building materials. During the test, water is used as the heat transfer medium to cool the HFM. The existing re-circulating coolers on the market are very costly, and the alternative is to use piped tap water to extract heat from the HFM. However, the tap water temperature is not low enough to enable heat transfer to take place. The operating temperature for the isothermal plates in the HFM is 40°C with a range of ±0.02°C. When the temperature exceeds the operating range, the HFM stops working and the test cannot be conducted. The aim of the research is to develop a low-cost but energy-efficient TEC prototype that enables heat transfer without compromising the function of the HFM. The objectives of the research are a) to identify the potential of the TEC as a cooling device by evaluating its cooling rate and b) to determine the amount of water saved using the TEC compared to normal tap water. Four (4) Peltier sets were used, with two (2) sets used as pre-coolers. The cooling water is re-circulated from the reservoir into the HFM using a water pump. The thermal conductivity readings, the water flow rate, and the power consumption were measured while the HFM was operating. The measured data showed a decrease in the average cooling temperature difference (ΔTave) of 2.42°C and an average cooling rate of 0.031°C/min. The water savings accrued from using the TEC are projected to be 8,332.8 litres/year with the application of water re-circulation. The results suggest the prototype has achieved the required objectives. Further research will compare the cooling rate of the TEC prototype against conventional tap water and optimize its design and performance in terms of size and portability. Possible applications of the prototype could also be expanded to portable storage for medicine and beverages.
Keywords: energy efficiency, thermoelectric cooling, pre-cooling device, heat flow meter, sustainable technology, thermal conductivity
Procedia PDF Downloads 155
4653 The Use of Bleomycin and Analogues to Probe the Chromatin Structure of Human Genes
Authors: Vincent Murray
Abstract:
The chromatin structure at the transcription start sites (TSSs) of genes is very important in the control of gene expression. For gene expression to occur, the chromatin structure at the TSS has to be altered so that the transcriptional machinery can be assembled and RNA transcripts can be produced. In particular, the nucleosome structure and positioning around the TSS have to be changed. Bleomycin is utilized as an anti-tumor agent to treat Hodgkin's lymphoma, squamous cell carcinoma, and testicular cancer. Bleomycin produces DNA damage in human cells, and DNA strand breaks, especially double-strand breaks, are thought to be responsible for the cancer chemotherapeutic activity of bleomycin. Bleomycin is a large glycopeptide with a molecular weight of approximately 1500 Daltons, and hence its DNA strand cleavage activity can be utilized as a probe of chromatin structure. In this project, Illumina next-generation DNA sequencing technology was used to determine the position of DNA double-strand breaks at the TSSs of genes in intact cells. In this genome-wide study, it was found that bleomycin cleavage preferentially occurred at the TSSs of actively transcribed human genes in comparison with non-transcribed genes. There was a correlation between the level of enhanced bleomycin cleavage at TSSs and the degree of transcriptional activity. In addition, bleomycin was able to determine the position of nucleosomes at the TSSs of human genes. Bleomycin analogues were also utilized as probes of chromatin structure at the TSSs of human genes. In a similar manner to bleomycin, the bleomycin analogues 6′-deoxy-BLM Z and zorbamycin preferentially cleaved at the TSSs of human genes. Interestingly, this degree of enhanced TSS cleavage inversely correlated with the cytotoxicity (IC50 values) of the BLM analogues. This indicated that the degree of cleavage by bleomycin analogues at the TSSs of human genes is very important in the cytotoxicity of bleomycin and its analogues. It also provided a deeper insight into the mechanism of action of this cancer chemotherapeutic agent, since actively transcribed genes were preferentially targeted.
Keywords: anti-cancer activity, chromatin structure, cytotoxicity, gene expression, next-generation DNA sequencing
Procedia PDF Downloads 116
4652 Microwave-Assisted Torrefaction of Teakwood Biomass Residues: The Effect of Power Level and Fluid Flows
Authors: Lukas Kano Mangalla, Raden Rinova Sisworo, Luther Pagiling
Abstract:
Torrefaction is an emerging thermo-chemical treatment process that aims to improve the quality of biomass fuels. This study focused on upgrading waste teakwood through microwave torrefaction processes and investigating the key operating parameters to improve energy density and the quality of biochar production. The experiments were carried out in a 250 mL reactor placed in a microwave cavity under two different media, inert and non-inert. The microwave was operated at a frequency of 2.45 GHz with power level variations of 540 W, 720 W, and 900 W, respectively. During the torrefaction processes, nitrogen gas flows into the reactor at a rate of 0.125 mL/min, while air flows naturally. The temperature inside the reactor was recorded every 0.5 minutes for 20 minutes using a K-type thermocouple. Changes in the mass and the properties of the torrefied products were analyzed to establish the correlation between calorific value, mass yield, and microwave power level. The results showed that with an increase in the operating power of microwave torrefaction, the calorific value and energy density of the product increased significantly, while mass and energy yields tended to decrease. Air can be a potential medium for substituting expensive nitrogen in the microwave torrefaction of teakwood biomass.
Keywords: torrefaction, microwave heating, energy enhancement, mass and energy yield
Procedia PDF Downloads 92
4651 Hemostasis Poly Vinyl Alcohol Gauze Coated with Chitosan Encapsulated with Polymer and Drug
Authors: Abhishekkumar Ramasamy, Parameshwari
Abstract:
Chitosan is the deacetylated derivative of chitin, the second most abundant biopolymer after cellulose. Its biomedical uses have gained importance among the vast variety of chitosan applications owing to its good biocompatibility and biodegradability. In recent years, particular interest has been devoted to chitosan hydrogels as a promising alternative to conventional sutures or bioadhesives. Different parameters, such as acid type and concentration and degree of deacetylation (DD%) of chitosan, were altered to modify hydrogel properties including viscosity, pH, cohesive strength, and tissue bioadhesiveness. In the current work, we have investigated the effectiveness of chitosan hydrogel encapsulated with tranexamic acid to stop bleeding. Chitosan film was obtained by solubilization of chitosan powder in aqueous acidic media. In vivo experiments were conducted on rat and rabbit models, which provide a convenient way to evaluate the efficacy of the prepared samples. An artery was punctured on the hind limb of the rat and the gauze was applied to the punctured area. Bioadhesive strength as well as irritant effects are discussed. Samples with a higher degree of deacetylation, including Chs-16 and Chs-19, that were dissolved in lactic media showed the best sealing effect.
Keywords: chitosan, biocompatibility, biodegradability, bioadhesive, deacetylation
Procedia PDF Downloads 349
4650 Structural Vulnerability of Banking Network – Systemic Risk Approach
Authors: Farhad Reyazat, Richard Werner
Abstract:
This paper contributes to the existing literature by developing a framework that explains how to monitor potential threats to banking sector stability. The study explores structural vulnerabilities at the country level but also looks at bilateral exposures within a network context. It contributes to analysing European banking systemic risk at an aggregated level, integrating the characteristics of bank size and interconnectedness relative to the size of the economy to which the ultimate risk belongs, taking into account the concentration ratio of the banking industry within the whole economy. The nature of the systemic risk depends on the interplay of the network topology with the nature of financial transactions over the network, assets and buffers stemming from bank size, correlations, and the nature of the shocks to the financial system. The study's results illustrate the contribution of bank size, size of the economy and concentration of counterparty exposures to a given country's banks in explaining its systemic importance, how much the banking network depends on a few traditional hubs' activities, and the changes in these dependencies over the last 9 years. A few traditional hubs such as Swiss banks, British banks and also Irish banks, where the financial sector is fairly new and grew strongly from the 1990s until 2008, take the fourth position in 2014, their relative size having shrunk since 2006, when they held the first position. In-degree concentration index analysis in the study shows that the concentration index of the banking network has not changed since the 2007-8 financial crisis. The in-degree concentration index in the first quarter of 2014 indicates that the US, UK and Germany together receive over 70% of the network exposures. Comparison of the in-degree concentration index with 2007-Q4 shows the same group having over 70% of the network exposure; however, the UK takes a more important role in the hub, while the market shares of the US and Germany are slightly diminished.
Keywords: systemic risk, counterparty risk, financial stability, interconnectedness, banking concentration, European banks risk, network effect on systemic risk, concentration risk
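To make the in-degree concentration measure concrete, the sketch below computes the share of incoming bilateral exposures received by each country and a Herfindahl-style concentration index over those shares. The exposure matrix and the exact index definition are illustrative assumptions, not the paper's data.

```python
# Sketch: concentration of in-degree (incoming bilateral exposures) in a
# banking network. The exposure matrix and the Herfindahl-type index are
# illustrative assumptions, not the paper's exact data or definition.
import numpy as np

countries = ["US", "UK", "DE", "CH", "IE", "FR"]
# exposure[i, j] = claims of banks in country i on borrowers in country j (bn USD), hypothetical
exposure = np.array([
    [0, 400, 250, 120, 60, 200],
    [500, 0, 300, 180, 90, 220],
    [300, 350, 0, 100, 40, 180],
    [150, 200, 120, 0, 20, 80],
    [40, 120, 60, 30, 0, 35],
    [220, 260, 210, 90, 30, 0],
], dtype=float)

in_degree = exposure.sum(axis=0)            # total exposures received by each country
shares = in_degree / in_degree.sum()        # share of total network exposure
herfindahl = float((shares ** 2).sum())     # Herfindahl-style concentration index

top3 = sorted(zip(countries, shares), key=lambda cs: -cs[1])[:3]
print("Top-3 receivers:", [(c, round(s, 3)) for c, s in top3])
print("Share of top-3:", round(sum(s for _, s in top3), 3))
print("In-degree Herfindahl index:", round(herfindahl, 3))
```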
Procedia PDF Downloads 490
4649 Political Connections, Business Strategy and Tax Aggressiveness: Evidence from China
Authors: Liqiang Chen
Abstract:
This study investigates the effects of political connections on the association between firms' business strategy and their tax aggressiveness in an emerging economy such as China. By studying all public Chinese firms in the period from 2011 to 2017, we find that firms adopting innovative business strategy are more tax aggressive overall, but innovative firms with political connections are less tax aggressive compared to those without political connections. Moreover, we document several channels through which political connections affect the association between innovative business strategy and tax aggressiveness. In particular, we show that the mitigation effect of political connections on tax aggressiveness is stronger for innovative firms located in areas with a lower marketization index and for innovative firms with a lower leverage level or with less earnings management. Our results are robust to an instrumental variable approach to account for possible endogenous bias. Our study contributes to the understanding of firms' tax behaviors in an emerging economy setting and suggests that there are costs associated with political connections, such as foregone tax saving opportunities, which are understudied in the prior literature.
Keywords: tax aggressiveness, business strategy, political connections, emerging economy
Procedia PDF Downloads 124
4648 UF as Pretreatment of RO for Tertiary Treatment of Biologically Treated Distillery Spentwash
Authors: Pinki Sharma, Himanshu Joshi
Abstract:
Distillery spentwash contains high chemical oxygen demand (COD), biological oxygen demand (BOD), color, total dissolved solids (TDS) and other contaminants even after biological treatment. The effluent cannot be discharged as such into surface water bodies or onto land without further treatment. Reverse osmosis (RO) treatment plants have been installed in many distilleries at the tertiary level, but at most places these plants are not working properly due to the high concentration of organic matter and other contaminants in biologically treated spentwash. To make membrane treatment a proven and reliable technology, proper pre-treatment is mandatory. In the present study, ultra-filtration (UF) was performed as pre-treatment for RO at the tertiary stage. The operating parameters, namely initial pH (pHo: 2-10), trans-membrane pressure (TMP: 4-20 bar) and temperature (T: 15-43°C), were used for conducting experiments with the UF system. Experiments were optimized over the different operating parameters in terms of COD, color, TDS and TOC removal using response surface methodology (RSM) with a central composite design. The results showed removal of COD, color and TDS of 62%, 93.5% and 75.5%, respectively, with UF at the optimized conditions, with the permeate flux increasing from 17.5 l/m²/h (RO) to 38 l/m²/h (UF-RO). The performance of the RO system was greatly improved both in terms of pollutant removal and water recovery.
Keywords: bio-digested distillery spentwash, reverse osmosis, response surface methodology, ultra-filtration
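The RSM step described above can be illustrated with a small least-squares fit of a full second-order (quadratic) response surface. The design points and responses below are hypothetical placeholders, not the study's central composite design runs.

```python
# Sketch: fitting a second-order response surface for COD removal (%) as a
# function of pH, trans-membrane pressure and temperature. The data points
# are hypothetical placeholders, not the study's central composite design runs.
import numpy as np

# columns: pH, TMP [bar], T [degC]; response: COD removal [%] (all hypothetical)
X = np.array([
    [2, 4, 15], [2, 20, 15], [10, 4, 15], [10, 20, 15],
    [2, 4, 43], [2, 20, 43], [10, 4, 43], [10, 20, 43],
    [6, 12, 29], [6, 12, 29], [6, 12, 29],           # centre points
])
y = np.array([40, 48, 45, 55, 44, 53, 50, 62, 60, 61, 59], dtype=float)

def quadratic_terms(x):
    p, t, T = x
    # intercept, linear, interaction and squared terms of the full quadratic model
    return [1, p, t, T, p*t, p*T, t*T, p*p, t*t, T*T]

A = np.array([quadratic_terms(row) for row in X], dtype=float)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("fitted coefficients:", np.round(coef, 3))
print("R^2 of the quadratic surface:", round(r2, 3))
```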
Procedia PDF Downloads 347
4647 A Proposed Model of E-Marketing Service-Oriented Architecture (E-MSOA)
Authors: Hussein Moselhy, Islam Salam
Abstract:
There are challenges and problems that hinder the implementation of e-marketing systems, such as the high cost of information systems infrastructure and maintenance as well as their unavailability within the institution. Also, there is no system that supports all programming languages and different platforms. Another problem is the lack of integration between these systems on one hand and operating systems and different web browsers on the other hand. No customer relationship management system is available that recognizes customers' desires and takes them into consideration while performing e-marketing functions. Therefore, the service-oriented architecture emerged as one of the most important techniques and methodologies for building systems that integrate with various operating systems, different platforms and other technologies. This technology allows data exchange among different applications. The service-oriented architecture applies distributed computing concepts and has demonstrated success in achieving system requirements through web services. It also provides an appropriate design for services to use different web services in supporting the requirements of business processes and software users. In a service-oriented environment, web services are deployed on the web in the form of independent services to be accessed without knowledge of the nature of the programs and systems within. This paper presents a proposal for a new model that contributes to the application of methods and means of e-marketing with the integration of marketing mix elements to improve marketing efficiency (E-MSOA), and applies it in the educational city of one of the Egyptian sectors.
Keywords: service-oriented architecture, electronic commerce, virtual retailing, unified modeling language
Procedia PDF Downloads 428
4646 Development of Electroencephalograph Collection System in Language-Learning Self-Study System That Can Detect Learning State of the Learner
Authors: Katsuyuki Umezawa, Makoto Nakazawa, Manabu Kobayashi, Yutaka Ishii, Michiko Nakano, Shigeichi Hirasawa
Abstract:
This research aims to develop a self-study system equipped with an artificial teacher who gives advice to students by detecting the learners' state, and to evaluate language learning in a unified framework. 'Detecting the learners' means that the system understands the learners' learning conditions, such as each learner's degree of understanding, the difference in each learner's thinking process, the degree of concentration or boredom in learning, and problem solving for each learner, which can be interpreted from learning behavior. In this paper, we propose a system to efficiently collect brain waves from learners, focusing only on brain waves among the available biological information for 'detecting the learners'. The conventional electroencephalograph (EEG) measurement method during learning using a simple EEG has the following disadvantages: (1) The start and end of EEG measurement must be done manually by the experiment participant or staff. (2) Even when the EEG signal is weak, it may not be noticed, and the data may not be obtained. (3) Since the acquired EEG data is stored on each PC, the time of data acquisition may differ between PCs. In this work, we developed a system to collect brain wave data on the server side. This system overcomes the above disadvantages.
Keywords: artificial teacher, e-learning, self-study system, simple EEG
Procedia PDF Downloads 143
4645 Maximum Deformation Estimation for Reinforced Concrete Buildings Using Equivalent Linearization Method
Authors: Chien-Kuo Chiu
Abstract:
In displacement-based seismic design and evaluation, the equivalent linearization method is one of the approximation methods used to estimate the maximum inelastic displacement response of a system. In this study, the accuracy of two equivalent linearization methods is investigated. The investigation covers three soil conditions in Taiwan (Taipei Basin 1, 2, and 3) and five different building heights (H_r = 10, 20, 30, 40, and 50 m). The first method is the Taiwan equivalent linearization method (TELM), which was proposed based on the Japanese equivalent linear method considering a modification factor of α_T = 0.85. The second method is proposed on the basis of the Lin and Miranda study, with some modification considering Taiwan soil conditions. This study shows that the Taiwanese equivalent linearization method gives a better estimation compared to the modified Lin and Miranda method (MLM). The error indices for the Taiwanese equivalent linearization method are 16%, 13%, and 12% for Taipei Basin 1, 2, and 3, respectively. Furthermore, a ductility demand spectrum for single-degree-of-freedom (SDOF) systems is presented in this study as a guide for engineers to estimate the ductility demand of a structure.
Keywords: displacement-based design, ductility demand spectrum, equivalent linearization method, RC buildings, single-degree-of-freedom
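For readers unfamiliar with equivalent linearization, the sketch below shows a generic secant-stiffness formulation for an elastic-perfectly-plastic SDOF system (equivalent period, hysteretic damping and a damping-based spectral reduction). These are common textbook expressions, not the TELM or modified Lin and Miranda formulas evaluated in the paper.

```python
# Sketch: generic equivalent linearization of an elastic-perfectly-plastic SDOF
# system via secant stiffness and hysteretic damping. The formulas shown are a
# common textbook idealization, not the TELM or modified Lin-Miranda expressions.
import math

def equivalent_linear_sdof(T0, xi0, mu):
    """Return equivalent period and damping ratio for target ductility mu."""
    T_eq = T0 * math.sqrt(mu)                       # secant-stiffness period
    xi_hyst = 2.0 * (mu - 1.0) / (math.pi * mu)     # equivalent hysteretic damping
    return T_eq, xi0 + xi_hyst

def damping_reduction(xi_eq):
    """Eurocode-8-style reduction of elastic spectral demand for damping xi_eq."""
    return math.sqrt(0.10 / (0.05 + xi_eq))

# Example: T0 = 1.0 s, 5% initial damping, ductility demand mu = 4
T_eq, xi_eq = equivalent_linear_sdof(T0=1.0, xi0=0.05, mu=4.0)
print(f"equivalent period: {T_eq:.2f} s, equivalent damping: {xi_eq:.3f}")
print(f"spectral reduction factor: {damping_reduction(xi_eq):.3f}")
```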
Procedia PDF Downloads 162
4644 Fund Seekers’ Deception in Peer-to-Peer Lending in Times of COVID
Authors: Olivier Mesly
Abstract:
This article examines the likelihood of deception on the part of borrowers wishing to obtain credit from institutional or private lenders. In our first study, we identify five explanatory variables that account for nearly forty percent of the propensity to act deceitfully: a poor credit history, debt, risky behavior, and, to a much lesser degree, irrational behavior and disconnection from the bundle of needs, goals, and preferences. For the second study, we remodeled the initial questionnaire to adapt it to the needs of institutional bankers and borrowers, especially those who engage in online peer-to-peer lending, a growing business fueled by the COVID pandemic. We find that the three key psychological variables that help to indirectly predict the likelihood of deceitful behaviors and possible default on loan reimbursement, i.e., risky behaviors, irrationality, and disconnection, interact with each other to form a loop. This study presents two benefits: first, we provide evidence that it is to some degree possible to tighten control over lending practices; second, we offer a pragmatic tool, a questionnaire, that lenders can use or adapt to gauge potential borrowers' deceit, notably by combining their results with standard hard-data measures of risk.
Keywords: bundle of needs, default, debt, deception, risk, peer-to-peer lending
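One plausible way to turn the five explanatory variables into a deception-propensity score is a logistic regression, as sketched below. The encoding, simulated data and coefficients are assumptions for illustration, not the authors' questionnaire or estimates.

```python
# Sketch: scoring the propensity to deceive from the five explanatory variables
# named in the study, using a logistic regression. Feature encoding and data
# are hypothetical placeholders, not the authors' questionnaire or coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
# hypothetical standardized scores: poor credit history, debt load, risky
# behavior, irrational behavior, disconnection from needs/goals/preferences
X = rng.normal(size=(n, 5))
logit = 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.9 * X[:, 2] + 0.3 * X[:, 3] + 0.2 * X[:, 4]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = deceitful intent

clf = LogisticRegression().fit(X, y)
labels = ["credit_history", "debt", "risky_behavior", "irrationality", "disconnection"]
for name, w in zip(labels, clf.coef_[0]):
    print(f"{name:>15}: {w:+.2f}")
borrower = np.array([[1.5, 0.8, 1.0, 0.0, -0.5]])            # one new applicant
print("estimated deception probability:", round(clf.predict_proba(borrower)[0, 1], 2))
```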
Procedia PDF Downloads 132
4643 Model Based Fault Diagnostic Approach for Limit Switches
Authors: Zafar Mahmood, Surayya Naz, Nazir Shah Khattak
Abstract:
The degree of freedom relates to our capability to observe or model the energy paths within a system. The higher the number of energy paths being modeled, the higher the degree of freedom, but this increases time and modeling complexity, rendering the approach unsuitable for today's need for minimum time to market. Since the number of residuals that can be uniquely isolated depends on the number of independent outputs of the system, the number of sensors required increases. Examples of discrete position sensors that may be used to form an array include limit switches, Hall effect sensors, optical sensors, magnetic sensors, etc. Their mechanical design can usually be tailored to fit in the transitional path of an STME in a variety of mechanical configurations. Case studies of multi-sensor systems were carried out, and actual data from sensors is used to test this generic framework. It is investigated how the proper modeling of limit switches as timing sensors could lead to a unified and neutral residual space while keeping the implementation cost reasonably low.
Keywords: low-cost limit sensors, fault diagnostics, Single Throw Mechanical Equipment (STME), parameter estimation, parity-space
Procedia PDF Downloads 617
4642 Comparison of Medical Students Evaluation by Serious Games and Clinical Case-Multiple Choice Questions
Authors: Chamtouri I., Kechida M.
Abstract:
Background: Evaluation has a prominent role in medical education and graduation. This evaluation has usually been done face-to-face, with written or oral questions. Simulation is increasingly used as a method of evaluation. Due to the Covid-19 pandemic, which disrupted face-to-face evaluation, simulation using serious games (SG) is emerging in the field of training and assessment of medical students. The aim of our study is to compare the results of the evaluation of medical students by virtual simulation with online serious games versus clinical case-multiple choice questions (MCQ) and to assess the degree of satisfaction with these two evaluation methods. Methods: Medical students from the same study level voluntarily participated in this study. Group 1 had an evaluation by SG dealing with the diagnosis and management of ST-segment elevation myocardial infarction (STEMI), already prepared on the website www.Mediactiv.com. Group 2 was evaluated by clinical case-MCQ on the same topic as the SG. Results of the two groups were compared. A satisfaction questionnaire was filled in by both groups, and the degree of satisfaction was compared between the two groups. Results: In this study, 64 medical students (G1: 31 and G2: 33) were enrolled. Obtaining complete scores in the 'questioning' and 'clinical examination' parts was significantly more frequent in group 1 compared to group 2. No significant difference was detected between the two groups in terms of the 'ECG interpretation' and 'diagnosis of STEMI' parts. A greater number of students in group 1 obtained the full score compared to group 2 in the 'initial treatment' part (54.8% vs. 39.4%; p = 0.04). Thirty learners (96.8%) in group 1 obtained a total score ≥ 50% versus 69.7% in group 2 (p = 0.004). The full score of 100% was obtained by three learners in group 1, while no student scored 100% in group 2 (p = 0.027). Medical evaluation using SG was reported as more innovative, fun, and realistic compared to evaluation by clinical case-MCQ. No significant difference was detected between the two methods in terms of stress. Conclusion: Simulation by SG can be considered an innovative and effective method for evaluating medical students, with a higher degree of satisfaction.
Keywords: evaluation, serious games, medical students, satisfaction
Procedia PDF Downloads 142
4641 Chemically Enhanced Primary Treatment: Full Scale Trial Results Conducted at a South African Wastewater Works
Authors: Priyanka Govender, S. Mtshali, Theresa Moonsamy, Zanele Mkwanazi, L. Mthembu
Abstract:
Chemically enhanced primary treatment (CEPT) can be used at wastewater works to improve the quality of the final effluent discharge, provided that the plant has spare anaerobic digestion capacity. CEPT can transfer part of the organic load to the digesters, thereby effectively relieving the hydraulic loading on the plant, and in this way can allow the plant to continue operating long after the hydraulic capacity of the plant has been exceeded. This can allow a plant to continue operating well beyond its original design capacity, requiring only fairly simple and inexpensive modifications to the primary settling tanks as well as additional chemical costs, thereby delaying or even avoiding the need for expensive capital upgrades. CEPT can also be effective at plants where high organic loadings prevent the wastewater discharge from meeting discharge standards, especially in the case of COD, phosphates and suspended solids. By increasing removals of these pollutants in the primary settling tanks, CEPT can enable the plant to conform to specifications without the need for costly upgrades. Laboratory trials were carried out recently at the Umbilo WWTW in Durban; these were followed by a baseline assessment of the current plant performance and a subsequent full-scale trial on the conventional plant, i.e., the West Plant. The operating conditions of the plant are described, and the improvements obtained in COD, phosphate and suspended solids are discussed. The PST and overall plant suspended solids removal efficiencies increased by approximately 6% during the trial. Details regarding the effect that CEPT had on sludge production and the digesters are also provided. The cost implications of CEPT are discussed in terms of capital costs as well as operation and maintenance costs; the impact of ferric chloride on the infrastructure was also studied and found to be minimal. It was concluded that CEPT improves the final quality of the discharge effluent, thereby improving the compliance of this effluent with the discharge license. It could also allow for a delay in upgrades to the plant, allowing the plant to operate above its design capacity. This will be elaborated upon further during the presentation.
Keywords: chemically enhanced, ferric, wastewater, primary
Procedia PDF Downloads 301
4640 Player Experience: A Research on Cross-Platform Supported Games
Authors: Salih Akkemik
Abstract:
User Experience has a characteristic perspective based on two fundamentals: the usage process and the product. Digital games can be considered a special kind of interactive system. This system has a very specific purpose: to make the player feel good while playing. At this point, Player Experience (PX) and User Experience (UX) are similar: UX focuses on the user feeling good, PX focuses on the player feeling good. The most important difference between the two is the action taken, namely using versus playing. In this study, the player experience is examined primarily. PX may differ across platforms. Nowadays, companies are releasing the successful, high-revenue games they develop with cross-platform support. Cross-platform is the most common expression for an application that can run on different operating systems, in other words, one developed to support different operating systems. In terms of digital games, cross-platform support means that a game can be played in a computer, console or mobile device environment; more specifically, the game is designed and programmed to be played in the same way on at least two different platforms, such as Windows, MacOS, Linux, iOS, Android, Orbis OS or Xbox OS. Different platforms also accommodate different player groups, profiles and preferences. This study aims to examine these different player profiles in terms of player experience and to determine the effects of cross-platform support on player experience.
Keywords: cross-platform, digital games, player experience, user experience
Procedia PDF Downloads 206
4639 On the Use of Machine Learning for Tamper Detection
Authors: Basel Halak, Christian Hall, Syed Abdul Father, Nelson Chow Wai Kit, Ruwaydah Widaad Raymode
Abstract:
The attack surface of computing devices is becoming very sophisticated, driven by the sheer increase in interconnected devices, projected to reach 50B in 2025, which makes it easier for adversaries to gain direct access and perform well-known physical attacks. The impact of the increased security vulnerability of electronic systems is exacerbated for devices that are part of critical infrastructure or used in military applications, where the likelihood of being targeted is very high. This continuously evolving landscape of security threats calls for a new generation of defense methods that are equally effective and adaptive. This paper proposes an intelligent defense mechanism to protect against physical tampering. It consists of a tamper detection system enhanced with machine learning capabilities, which allows it to recognize normal operating conditions, classify known physical attacks and identify new types of malicious behaviors. A prototype of the proposed system has been implemented, and its functionality has been successfully verified for two types of normal operating conditions and a further four forms of physical attacks. In addition, a systematic threat modeling analysis and security validation were carried out, which indicated that the proposed solution provides better protection against threats including information leakage, loss of data, and disruption of operation.
Keywords: anti-tamper, hardware, machine learning, physical security, embedded devices, IoT
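The paper does not specify its learning algorithm; one plausible realization of recognizing normal operating conditions and flagging deviations is an Isolation Forest trained only on normal sensor readings, as sketched below. The sensor features, thresholds and data are assumptions, not the implemented prototype.

```python
# Sketch: flagging physical tampering from on-board sensor readings with an
# Isolation Forest trained only on normal operating conditions. The sensor
# features, thresholds and data are assumptions, not the paper's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# hypothetical features: supply voltage [V], case temperature [degC], light level [lux]
normal = np.column_stack([
    rng.normal(3.30, 0.02, 2000),
    rng.normal(35.0, 1.5, 2000),
    rng.normal(0.5, 0.2, 2000),
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# incoming samples: one normal, one resembling a decapping / probing attempt
samples = np.array([
    [3.31, 35.4, 0.6],     # normal
    [3.05, 48.0, 250.0],   # enclosure opened: voltage glitch, heat, ambient light
])
for s, verdict in zip(samples, detector.predict(samples)):
    print(s, "-> tamper suspected" if verdict == -1 else "-> normal")
```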
Procedia PDF Downloads 153
4638 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research answers the question of which transit network structure is most suitable for serving specific demand requirements during an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, which represent the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way, considering that only a central area attracts all trips: if this area is small, we have a highly concentrated mobility pattern; if this area is too large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology lets us obtain the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
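The concentration dimension can be computed with the standard discrete Gini coefficient over zonal trip attractions, as in the sketch below. The zone values are hypothetical, and the area-based centralization index, which additionally weights location relative to the centre, is not reproduced here.

```python
# Sketch: measuring the concentration dimension of urban dispersion with the
# Gini coefficient over zonal trip attractions. Zone values are hypothetical;
# the paper's area-based centralization index is omitted here.
import numpy as np

def gini(values):
    """Discrete Gini coefficient of a non-negative 1-D array."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    cum = np.cumsum(v)
    # standard formula based on the Lorenz curve of sorted values
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

concentrated = [5, 5, 10, 10, 20, 950]      # almost all trips attracted by one zone
dispersed = [160, 170, 150, 180, 170, 170]  # trips spread evenly across zones
print("Gini, concentrated city:", round(gini(concentrated), 3))
print("Gini, dispersed city:  ", round(gini(dispersed), 3))
```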
Procedia PDF Downloads 230
4637 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices, and an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. PCA identifies statistically independent factors affecting the random evolution of a given pool of assets, in our paper a pool of international stock indices, and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model following the same PCA methodology and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
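The PCA factor-extraction step can be illustrated as follows: orthogonal factors are extracted from a (here simulated) matrix of world index returns and would then enter the GARCHX variance equation as exogenous regressors. Index names and returns are placeholders, and the HTSN-GARCHX estimation itself is not reproduced.

```python
# Sketch: extracting uncorrelated common factors from a matrix of world index
# returns with PCA, as exogenous inputs for a GARCHX variance equation.
# Index names and returns are simulated placeholders, not the paper's data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_days, indices = 1000, ["SPX", "FTSE", "DAX", "NKY", "HSI", "CAC"]
common = rng.normal(0, 0.01, (n_days, 1))                 # one latent world factor
returns = common @ rng.uniform(0.5, 1.5, (1, len(indices))) \
          + rng.normal(0, 0.005, (n_days, len(indices)))  # idiosyncratic noise

pca = PCA(n_components=3)
factors = pca.fit_transform(returns - returns.mean(axis=0))   # three orthogonal factors
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("factor correlations ~ 0:\n", np.round(np.corrcoef(factors, rowvar=False), 3))
```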
Procedia PDF Downloads 297
4636 Numerical Studies on Bypass Thrust Augmentation Using Convective Heat Transfer in Turbofan Engine
Authors: R. Adwaith, J. Gopinath, Vasantha Kohila B., R. Chandru, Arul Prakash R.
Abstract:
The turbofan engine is a type of air-breathing engine widely used in aircraft propulsion; it produces thrust mainly from the mass flow of air bypassing the engine core. The present research develops an effective numerical method for increasing the thrust generated from the bypass air. This thrust increase is brought about by heating the walls of the bypass valve from the combustion chamber using convective heat transfer; computationally, external heat is used to enhance the velocity of the bypass air of turbofan engines. The bypass valves are either heated externally using a multi-cell tube resistor, which converts electricity generated by dynamos into heat, or heat is transferred from the combustion chamber. This increases the temperature of the flow in the valves and thereby increases the velocity of the flow that enters the nozzle of the engine. As a result, the mass flow of air passing through the core engine to produce thrust can be significantly reduced, thereby saving a considerable amount of jet fuel. Numerical analysis has been carried out on a scaled-down version of a typical turbofan bypass valve, where the valve wall temperature has been increased to 700 Kelvin. It is observed from the analysis that the exit velocity contributing to thrust has increased significantly, by 10%, due to the heating of the bypass valve. The optimum degree of temperature increase, and the corresponding effect on the increase in jet velocity, is calculated to determine the operating temperature range for an efficient increase in velocity. The technique used in this research increases the thrust by using heated bypass air without extracting much work from the fuel and thus improves the efficiency of existing turbofan engines. Dimensional analysis has been carried out to verify the accuracy of the results obtained numerically.
Keywords: turbofan engine, bypass valve, multi-cell tube, convective heat transfer, thrust
Procedia PDF Downloads 358
4635 Natural Forest Ecosystem Services and Local Populations
Authors: Mohammed Sghir Taleb
Abstract:
Located at the northwest corner of the African continent between 21° and 36° north latitude and between the 1st and 17th degrees of west longitude, Morocco, with a total area of 715,000 km², enjoys a privileged position with a coastline 3,446 km long opening onto the Mediterranean and the Atlantic Ocean. Morocco has a privileged location with a double coastline and diverse mountains, with four major ranges: the Rif, Middle Atlas, High Atlas, and Anti-Atlas, whose altitudes exceed 2000 m in the Rif, 3000 m in the Middle Atlas, and 4000 m in the High Atlas. Morocco is characterized by important forest genetic diversity, represented by a rich and varied flora and many ecosystems (forest, preforest, presteppe, steppe, and Sahara) that span a range of bioclimatic zones: arid, semiarid, subhumid, and humid. The vascular flora of Morocco is rich and highly diversified, with a very significant degree of endemism. Natural flora and ecosystems provide important services to populations, such as grazing, timber harvest, and the harvesting of medicinal and aromatic plants. This work focuses on Moroccan biodiversity and natural ecosystem services and on the interaction between local populations and ecosystems.
Keywords: biodiversity, forest, ecosystem, services, Morocco
Procedia PDF Downloads 85
4634 Woman, House, Identity: The Study of the Role of House in Constructing the Contemporary Dong Minority Woman’s Identity
Authors: Sze Wai Veera Fung, Peter W. Ferretto
Abstract:
Similar to most ethnic groups in China, men of the Dong minority hold the primary position in policymaking, moral authority, social values, and the control of property. As the spatial embodiment of patriarchal ideals, the house plays a significant role in producing and reproducing the distinctive gender status within Dong society. Nevertheless, Dong women do not see their home as a cage of confinement, nor do they see themselves as victims of oppression. For these women, with reference to their productive identity, a house is a dwelling place with manifold meanings, including a proof of identity, an economic instrument, and a public resource operating at the community level. This paper examines the role of the house as a central site for identity construction and maintenance for southern-dialect Dong minority women in Hunan, China. Drawing on recent interviews with Dong women, this study argues that women, as productive individuals, have a strong influence on the form of their house and the immediate environment, regardless of the male-dominated social construct of Dong society. The aim of this study is not to produce a definitive relationship between women, house, and identity. Rather, it seeks to offer an alternative lens into the complexity and diversity of gender dynamics operating in and beyond the boundary of the house in the context of contemporary rural China.
Keywords: conception of home, Dong minority, house, rural China, woman’s identity
Procedia PDF Downloads 138
4633 Analysis of Thermal Damage Characteristics of High Pressure Turbine Blade According to Off-Design Operating Conditions
Authors: Seon Ho Kim, Minho Bang, Seok Min Choi, Young Moon Lee, Dong Kwan Kim, Hyung Hee Cho
Abstract:
Gas turbines are heat engines that convert chemical energy into electrical energy through mechanical energy. Owing to their high energy density per unit volume and low pollutant emissions, gas turbines are classified as clean energy. In order to obtain better performance, the turbine inlet temperature of current gas turbines is operated at about 1600℃, and thermal damage is a very serious problem. These thermal damages are more prominent in off-design conditions than in design conditions. In this study, the thermal damage characteristics of high-temperature components of a gas turbine made of a single-crystal material are studied numerically for off-design operating conditions. The target gas turbine is configured as a reheat cycle and is operated in peak-load operation mode, not normal operation; in particular, it features a lot of low-load operation. In this study, a commercial code, ANSYS 18.2, was used for analyzing the coupled thermal-flow problems. As a result, the flow separation phenomenon on the pressure side due to the reduced flow was remarkable at the off-design condition, and a high heat transfer coefficient appeared at the upper end of the suction surface due to the tip leakage flow.
Keywords: gas turbine, single crystal blade, off-design, thermal analysis
Procedia PDF Downloads 213
4632 Assessment of Educational Service Quality at Master's Level in an Iranian University Using Based on HEdPERF Model
Authors: Faranak Omidian
Abstract:
The aim of this research was to examine the quality of educational service at master's level at the Islamic Azad University of Dezful. In terms of objective, this is applied research, and in terms of methodology, it is descriptive-analytical research. The statistical population included all master's students at the Islamic Azad University of Dezful. The sample size was determined using a stratified random sampling method across different fields of study. The research questionnaire is a translated version of Abdullah's standardized 41-item HEdPERF scale, which is based on a 5-point Likert scale. In order to determine validity, the translated questionnaire was given to professors of educational sciences. The correlation among all questions was 0.644. The results showed that the quality of educational service at master's level in this university, based on a chi-square goodness-of-fit test (chi-square = 73.36, 2 degrees of freedom, significance level 0.001), indicates low desirability of the services. According to the Friedman test, academic responsiveness was ranked higher than the other dimensions, with an average rank of 3.94, while accessibility, with an average rank of 2.15, was ranked lowest from the master's students' viewpoint.
Keywords: educational service quality, master's level, Iranian university
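The two tests reported above, a chi-square goodness-of-fit test and a Friedman test over dimension rankings, can be run with scipy as sketched below. The observed counts and ratings are hypothetical stand-ins, not the survey data.

```python
# Sketch: the two tests reported in the study, run on hypothetical data with
# scipy. Observed counts and dimension ratings are placeholders, not the
# actual HEdPERF survey responses.
import numpy as np
from scipy import stats

# Chi-square goodness of fit: observed distribution of service-quality levels
# (low / medium / high) against an equal-expectation null hypothesis.
observed = np.array([150, 90, 40])
chi2, p = stats.chisquare(observed)
print(f"chi-square = {chi2:.2f}, df = {len(observed) - 1}, p = {p:.4f}")

# Friedman test: ranking HEdPERF dimensions rated by the same respondents.
rng = np.random.default_rng(4)
n_students = 60
responsiveness = rng.integers(3, 6, n_students)   # tends to be rated higher
academic = rng.integers(2, 5, n_students)
accessibility = rng.integers(1, 4, n_students)    # tends to be rated lower
stat, p_friedman = stats.friedmanchisquare(responsiveness, academic, accessibility)
print(f"Friedman statistic = {stat:.2f}, p = {p_friedman:.4f}")
```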
Procedia PDF Downloads 280
4631 Corporate Governance and Firm Performance: Empirical Evidence from India
Authors: G. C. Surya Bahadur, Ranjana Kothari
Abstract:
The paper attempts to analyze linkages between corporate governance and firm performance in India. The study employs panel data on 50 Nifty companies from 2008 to 2012. Using an LSDV panel data model and a 2SLS model, the study reveals that good corporate governance practices adopted by companies are positively related to financial performance. Board independence, the number of board committees and executive compensation are found to have a positive relationship, while ownership by promoters and financial leverage have a negative relationship with performance. There is a bi-directional relationship between corporate governance and financial performance: companies with sound financial performance are more likely to conform to corporate governance norms and standards and to implement a sound corporate governance system. The findings indicate that companies can enhance business performance and sustainability by embracing sound corporate governance practices.
Keywords: board structure, corporate governance, executive compensation, ownership structure
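A minimal sketch of the LSDV estimator is shown below: firm dummies absorb fixed effects while performance is regressed on governance variables. The panel is simulated, variable names are illustrative, and the 2SLS stage used for the bi-directional relationship is not reproduced.

```python
# Sketch: an LSDV (least squares dummy variable) panel regression of firm
# performance on governance variables, using firm dummies as fixed effects.
# The panel is simulated; variable names mirror the paper, but the data and
# the 2SLS stage for bi-directional causality are not reproduced.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
firms, years = [f"F{i:02d}" for i in range(50)], range(2008, 2013)
rows = []
for f in firms:
    effect = rng.normal(0, 1)                      # unobserved firm effect
    for y in years:
        board_indep = rng.uniform(0.2, 0.8)
        promoter_own = rng.uniform(0.2, 0.7)
        leverage = rng.uniform(0.1, 0.9)
        roa = 2 + 3 * board_indep - 2 * promoter_own - 1.5 * leverage \
              + effect + rng.normal(0, 0.5)
        rows.append((f, y, roa, board_indep, promoter_own, leverage))
panel = pd.DataFrame(rows, columns=["firm", "year", "roa", "board_indep",
                                    "promoter_own", "leverage"])

# C(firm) adds one dummy per firm: the LSDV estimator of the fixed-effects model
model = smf.ols("roa ~ board_indep + promoter_own + leverage + C(firm)",
                data=panel).fit()
print(model.params[["board_indep", "promoter_own", "leverage"]].round(3))
```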
Procedia PDF Downloads 475