Search results for: process model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28060

16630 Towards the Integration of a Micro Pump in μTAS

Authors: Y. Haik

Abstract:

The objective of this study is to present a micro mechanical pump fabricated using the SwIFT™ surface micromachining microfabrication process and to demonstrate the feasibility of integrating such a micropump into a micro analysis system. The micropump circulates the bio-sample and magnetic nanoparticles through different compartments to separate and purify the targeted bio-sample. This article reports the flow characteristics in the microchannels and in a crescent micropump.

Keywords: crescent micropumps, microanalysis, nanoparticles, MEMS

Procedia PDF Downloads 209
16629 Assessment of Water Reuse Potential in a Metal Finishing Factory

Authors: Efe Gumuslu, Guclu Insel, Gülten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tuğba Olmez Hanci, Didem Okutman Tas, Fatoş Germirli Babuna, Derya Firat Ertem, Ökmen Yildirim, Özge Erturan, Betül Kirci

Abstract:

Although water reclamation and reuse are inseparable parts of the sustainable production concept all around the world, current levels of reuse constitute only a small fraction of the total volume of industrial effluents. Today, in the face of serious climate change, wastewater reclamation and reuse practices should be considered a requirement. The industrial sector is one of the largest users of water. The OECD Environmental Outlook to 2050 predicts that global water demand for manufacturing will increase by 400% from 2000 to 2050, a far larger increase than in any other sector. The metal finishing industry requires large amounts of water during manufacturing; therefore, actions to improve wastewater treatment and reuse should be undertaken on both economic and environmental sustainability grounds. Process wastewater can be reused for more purposes if appropriate treatment systems are installed to treat it to the required quality level. Recent studies have shown that membrane separation techniques may help attain a water quality suitable for recycling back to the process. The metal finishing factory where this study was conducted is one of the biggest white-goods manufacturers in Turkey. The sheet metal parts used in cooker production undergo consecutive surface pre-treatment steps: degreasing, rinsing, nanoceramic coating and deionized rinsing. The wastewater-generating processes in the factory are the enamel coating, painting and styrofoam processes. The factory's main water source is well water: part of it is used directly in the processes after resin treatment, while the rest is directed to reverse osmosis treatment to obtain the water quality required for the enamel coating and painting processes. In addition, rainwater (3660 tons/year) can be considered a potential water source. In this study, process profiles as well as pollution profiles were assessed through a detailed quantitative and qualitative characterization of the wastewater sources generated in the factory. Based on the preliminary results, the painting and styrofoam processes were identified as the main sources whose water can be considered for reuse.

Keywords: enamel coating, painting, reuse, wastewater

Procedia PDF Downloads 370
16628 Potential and Techno-Economic Analysis of Hydrogen Production from Portuguese Solid Recovered Fuels

Authors: A. Ribeiro, N. Pacheco, M. Soares, N. Valério, L. Nascimento, A. Silva, C. Vilarinho, J. Carvalho

Abstract:

Hydrogen will play a key role in changing the current global energy paradigm, which is associated with the heavy use of fossil fuels and the release of greenhouse gases. This work set out to identify and quantify the potential of Solid Recovered Fuels (SRF) existing in Portugal and to project the cost of hydrogen produced through their steam gasification in different scenarios, varying the size or capacity of the plant and the presence of carbon capture and storage (CCS) systems. A techno-economic analysis was therefore performed using an ASPEN-based model, the H2A Hydrogen Production Model Version 3.2018. Regarding SRF production, more than 200 thousand tons of SRF were produced in Portugal in 2019. The techno-economic simulations showed that in the scenarios with high (200,000 tons/year) and medium (40,000 tons/year) amounts of SRF, the cost of hydrogen production was competitive with current hydrogen prices. Scenarios 1 and 2, which use 200,000 tons of SRF per year, have the lowest hydrogen production costs, 1.22 USD/kg H2 and 1.63 USD/kg H2, respectively. The cost of producing hydrogen without CCS at a medium amount of SRF (40,000 tons/year) was 1.70 USD/kg H2. In turn, scenarios 5 (without CCS) and 6 (with CCS), which use only 683 tons of SRF from urban sources, have the highest costs, 6.54 USD/kg H2 and 908.97 USD/kg H2, respectively. It was therefore possible to conclude that there is huge potential for the use of SRF to produce hydrogen through steam gasification in Portugal.
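The scale effect reported above (hydrogen from the large SRF scenarios costing far less per kilogram than from the 683-ton scenario) follows from how a levelized cost spreads fixed costs over annual output. A minimal sketch of that calculation, in the spirit of the H2A model but with placeholder numbers rather than the study's inputs:

```python
# Simplified levelized cost of hydrogen: annualize the capital cost with
# a capital recovery factor, then spread all costs over yearly H2 output.
# All numbers below are illustrative placeholders, not the study's data.
def capital_recovery_factor(rate, years):
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_cost(capex, opex_per_year, kg_h2_per_year, rate=0.08, years=20):
    crf = capital_recovery_factor(rate, years)
    return (capex * crf + opex_per_year) / kg_h2_per_year

# Hypothetical large plant vs. small plant (USD per kg H2):
print(round(levelized_cost(250e6, 30e6, 40e6), 2))
print(round(levelized_cost(15e6, 2.5e6, 0.6e6), 2))
```

The small plant's cost per kilogram is several times higher, mirroring the qualitative gap between the large-scale and 683-ton scenarios.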

Keywords: gasification, hydrogen, solid recovered fuels, techno-economic analysis, waste-to-energy

Procedia PDF Downloads 116
16627 Controlled Release of Glucosamine from Pluronic-Based Hydrogels for the Treatment of Osteoarthritis

Authors: Papon Thamvasupong, Kwanchanok Viravaidya-Pasuwat

Abstract:

Osteoarthritis affects many people worldwide. Local injection of glucosamine is one alternative treatment for replenishing the natural lubrication of cartilage. However, repeated injections carry a risk of bacterial infection; a drug delivery system is therefore desired to reduce the frequency of injections. A hydrogel is one delivery system that can control the release of drugs. Thermo-reversible hydrogels are particularly suited to local injection because the formulation changes from liquid to gel once inside the human body. Once gelled, it slowly releases the drug in a controlled manner. In this study, various formulations of Pluronic-based hydrogels were synthesized for the controlled release of glucosamine. One challenge of the Pluronic controlled release system is its fast dissolution rate. To overcome this problem, alginate and calcium sulfate (CaSO4) were added to the polymer solution. The characteristics of the hydrogels were investigated, including the gelation temperature, gelation time, hydrogel dissolution and glucosamine release mechanism. Finally, a mathematical model of glucosamine release from a Pluronic-alginate-hyaluronic acid hydrogel was developed. Our results show that crosslinking the Pluronic gel with alginate did not significantly slow the dissolution of the gel. Moreover, the gel dissolution profiles and glucosamine release mechanisms were best described by a zero-order kinetic model, indicating that the release of glucosamine was governed primarily by gel dissolution.
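The zero-order finding above means cumulative release grows linearly with time, Q(t) = k0·t. A small sketch of fitting such a profile, using assumed release data rather than the study's measurements:

```python
import numpy as np

# Zero-order release: cumulative drug release is linear in time, Q(t) = k0*t.
# Fit k0 by least squares through the origin and check linearity with R^2.
time_h = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
release_pct = np.array([0.0, 10.2, 19.8, 30.1, 39.9])   # hypothetical data

k0 = np.sum(time_h * release_pct) / np.sum(time_h ** 2)  # slope through origin
pred = k0 * time_h
r2 = 1 - np.sum((release_pct - pred) ** 2) / np.sum(
    (release_pct - release_pct.mean()) ** 2)
print(k0, r2)   # near-unity R^2 supports a zero-order profile
```

An R^2 close to 1 for the linear model, as here, is the kind of evidence behind the zero-order conclusion.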

Keywords: controlled release, drug delivery system, glucosamine, pluronic, thermoreversible hydrogel

Procedia PDF Downloads 264
16626 Evaluation System of Spatial Potential Under Bridges in High Density Urban Areas of Chongqing Municipality and Applied Research on Suitability

Authors: Xvelian Qin

Abstract:

Urban "organic renewal" based on the development of existing resources has become the mainstream of urban development in high-density urban areas in the new era. Space under bridges is an important stock resource of public space in such areas, and remodeling its value is an effective way to alleviate the shortage of public space. However, because evaluation is missing from the renewal process, a large number of under-bridge spaces have been left idle, facing low conversion efficiency, imprecise development decision-making, and functional positioning poorly adapted to citizens' needs. It is therefore of great practical significance to construct an evaluation system for under-bridge space renewal potential and to explore renewal modes. In this paper, a selection of under-bridge spaces in the main urban area of Chongqing is taken as the research object. Through questionnaire interviews with users of well-regarded existing under-bridge spaces, twenty-two potential evaluation indexes were selected across three factor types (objective demand, construction feasibility and construction suitability) and six levels: land resources, infrastructure, accessibility, safety, space quality and ecological environment. The analytic hierarchy process and expert scoring were used to determine the index weights, construct the potential evaluation system for under-bridge space in high-density urban areas of Chongqing, and explore suitable directions for its renewal and utilization.

Keywords: space under bridge, potential evaluation, high density urban area, renewal and utilization

Procedia PDF Downloads 69
16625 Mining User-Generated Contents to Detect Service Failures with Topic Model

Authors: Kyung Bae Park, Sung Ho Ha

Abstract:

Online user-generated content (UGC) significantly changes the way customers behave (e.g., shop, travel), and handling the overwhelming volume of UGC is one of the paramount issues for management. However, current approaches (e.g., sentiment analysis) are often ineffective at leveraging textual information to detect the problems a given business suffers from. In this paper, we apply text mining with Latent Dirichlet Allocation (LDA) to a popular online review site dedicated to user complaints. We find that LDA efficiently detects customer complaints, and that further inspection with visualization techniques is effective for categorizing the underlying problems. Management can thus identify the issues at stake and prioritize them in a timely manner given limited resources. The findings provide managerial insight into how analytics on social media can help maintain and improve reputation management. Our interdisciplinary approach also offers several insights from applying machine learning techniques in the marketing research domain. On a broader technical note, this paper details how to implement LDA in R from beginning (data collection) to end (LDA analysis), since such instruction remains largely undocumented. In this regard, it should lower the barrier for interdisciplinary researchers to conduct related research.

Keywords: latent dirichlet allocation, R program, text mining, topic model, user generated contents, visualization

Procedia PDF Downloads 184
16624 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model

Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond

Abstract:

The surveillance of infectious diseases is necessary to describe their occurrence and to help plan, implement and evaluate risk mitigation activities. However, the exact number of detected cases may remain unknown when surveillance is based on serological tests, because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses that exhibited at least one positive serological result by viral neutralization test between 2006 and 2013 were used for analysis (n=1,645). The data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, and epidemiologists) was reached to define seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model in R, with the number of horses tested in each infected town as the binomial denominator. Using these rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013. The sensitivity of the surveillance system was then estimated as the ratio of the number of detected outbreaks to the total number of outbreaks (including unreported ones) estimated with the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval, CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research and, adjusted to the local context, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting antibody titers could be useful for other human and animal diseases and zoonoses when accurate information about the serological response of naturally infected subjects is lacking in the literature. This study shows how capture-recapture methods may help estimate the sensitivity of an imperfect surveillance system and add value to surveillance data. The sensitivity of the equine viral arteritis surveillance system is relatively high, supporting its relevance for limiting the spread of the disease.
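The consensus seroconversion rule (a change from negative to a titer of at least 32, or a three-fold or greater rise) is easy to encode as a predicate. The function below is an illustrative sketch; the name and the encoding of "negative" as 0 are our assumptions:

```python
def seroconverted(prev_titer, curr_titer, negative=0):
    """True if two successive antibody titers meet the consensus rule:
    a change from negative to a titer of at least 32, or a three-fold
    or greater increase between tests. (Encoding of 'negative' as 0 is
    an assumption for illustration.)"""
    if prev_titer == negative:
        return curr_titer >= 32
    return curr_titer >= 3 * prev_titer

# Examples under this rule:
print(seroconverted(0, 32))   # negative -> 32: True
print(seroconverted(0, 16))   # negative -> 16: False
print(seroconverted(8, 32))   # 8 -> 32 is a four-fold rise: True
print(seroconverted(16, 32))  # 16 -> 32 is only two-fold: False
```

Counting True results per town would yield the per-town seroconversion counts fed to the ZTB model.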

Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance

Procedia PDF Downloads 290
16623 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria

Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter

Abstract:

Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes such as disease progression, mortality and hospitalization. Statistical approaches are based on mathematical models that capture patterns and trends in the data, such as autocorrelation, seasonality and noise, while neural methods are based on artificial neural networks, computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecast methods considered linear regression, Integrated Moving Average, ARIMA and SARIMA modeling for the parametric approach, while a Multilayer Perceptron (MLP) and a Long Short-Term Memory (LSTM) network were used for the non-parametric models. The performance of each method was evaluated using Mean Absolute Error (MAE), R-squared (R2) and Root Mean Square Error (RMSE) as accuracy criteria. The study revealed that the best performance in terms of error was achieved by the MLP, followed by the LSTM and ARIMA models. In addition, bootstrap aggregating was used to make robust forecasts when there are uncertainties in the data.
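The three accuracy criteria used to rank the models can be sketched directly; the toy series below stands in for the Lagos case counts and is not the study's data:

```python
import numpy as np

def forecast_scores(y_true, y_pred):
    """MAE, RMSE and R^2, the three criteria used to compare the
    parametric (ARIMA/SARIMA) and neural (MLP/LSTM) forecasts."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

# Toy monthly case counts (illustrative only):
actual = [120, 135, 150, 160, 155, 170]
model_a = [118, 137, 149, 158, 160, 168]   # e.g. a neural forecast
model_b = [110, 120, 140, 150, 165, 180]   # e.g. a parametric forecast
print(forecast_scores(actual, model_a))
print(forecast_scores(actual, model_b))
```

A lower MAE and RMSE with a higher R2, as model_a shows here, is the pattern by which the MLP outranked the other models.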

Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis

Procedia PDF Downloads 67
16622 Effect of Quenching Medium on the Hardness of Dual Phase Steel Heat Treated at a High Temperature

Authors: Tebogo Mabotsa, Tamba Jamiru, David Ibrahim

Abstract:

Dual phase (DP) steel consists essentially of fine-grained equiaxed ferrite with a dispersion of martensite. Martensite, the primary precipitate in DP steels, provides the main resistance to dislocation motion within the material. The objective of this paper is to present a relation between intercritical annealing holding time and the hardness of a dual phase steel. The initial heat treatment involved heating the specimens to 1000 °C and holding them at that temperature for 30 minutes. The samples were then heated to 770 °C and held at constant temperature for 30, 60, or 90 minutes. After holding in the austenite-ferrite phase field, the samples were quenched in water, brine, or oil for each holding time. The experimental results show that an equation predicting the hardness of a dual phase steel as a function of intercritical holding time is possible. The relation between intercritical annealing holding time and hardness is parabolic in nature. Theoretically, the model is dependent on the cooling rate, because the model differs for each quenching medium; therefore, a universal hardness equation could be derived in which the cooling rate is a variable factor.
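The parabolic hardness-versus-holding-time relation can be illustrated with a quadratic least-squares fit over the three holding times; the hardness values below are hypothetical placeholders, not the measured data:

```python
import numpy as np

# Quadratic fit H(t) = a*t^2 + b*t + c over the three intercritical
# holding times. Hardness values are assumed for illustration only.
hold_min = np.array([30.0, 60.0, 90.0])
hardness_hv = np.array([310.0, 340.0, 325.0])   # hypothetical water-quench data

a, b, c = np.polyfit(hold_min, hardness_hv, deg=2)
predict = np.poly1d([a, b, c])

print(predict(60.0))   # with 3 points the fit interpolates them exactly
print(a)               # a < 0: hardness peaks at an intermediate time
```

A separate (a, b, c) triple per quenching medium is what makes the cooling rate the natural extra variable for a universal equation.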

Keywords: quenching medium, annealing temperature, dual phase steel, martensite

Procedia PDF Downloads 76
16621 Characterization of InGaAsP/InP Quantum Well Lasers

Authors: K. Melouk, M. Dellakrachaï

Abstract:

An analytical formula for the optical gain based on a simple parabolic-band model, with theoretical expressions for the quantized energy levels, is presented. The treatment takes into account the effects of intraband relaxation. The results show that the gain for the TE mode is larger than that for the TM mode, and that the presence of acceptor impurities increases the peak gain.

Keywords: InGaAsP, laser, quantum well, semiconductor

Procedia PDF Downloads 366
16620 Computational Fluid Dynamics Simulation of Reservoir for Dwell Time Prediction

Authors: Nitin Dewangan, Nitin Kattula, Megha Anawat

Abstract:

The hydraulic reservoir is a key component in mobile construction vehicles; most off-road earth-moving machinery requires large side-mounted hydraulic reservoirs. Reservoir geometry is highly non-uniform, since designers shape it to use the space available under the vehicle. Apart from virtual simulation, there is no way to assess how the oil utilizes the reservoir volume or to validate the design. Computational fluid dynamics (CFD) predicts reservoir space utilization through vortex mapping, path-line plots and dwell time prediction, confirming whether the design is valid and efficient for the vehicle. The dwell time acceptance criterion for an effective reservoir design is 15 seconds. This paper describes a hydraulic reservoir simulation carried out with the CFD tool AcuSolve using an automated meshing strategy. Free-surface flow and a moving reference mesh were used to define the oil level inside the reservoir. The first baseline design failed the acceptance criterion (dwell time below 15 seconds) because the oil entry and exit ports were too close together. CFD was used to relocate the ports so that dwell time in the reservoir increased, and it also yielded a baffle design for effective space utilization. The final design proposed through the CFD analysis was used for physical validation on the machine.

Keywords: reservoir, turbulence model, transient model, level set, free-surface flow, moving frame of reference

Procedia PDF Downloads 143
16619 Polarization of Lithuanian Society on Issues Related to Language Politics

Authors: Eglė Žurauskaitė, Eglė Gudavičienė

Abstract:

The goal of this paper is to reveal how polarization is constructed through the use of impoliteness strategies. Media spreads ideas very quickly, so processes of polarization are best revealed in computer-mediated communication (CMC). For this reason, the data were collected from online texts about a current and very divisive topic, Lithuanian language policy and regulation, which is causing considerable tension in Lithuanian society. Computer-mediated communication allows users to edit a message before sending it, meaning that addressers carefully select the verbal expressions that convey their message; each impoliteness strategy and its verbal expression is therefore created intentionally. Impoliteness strategies are understood here as various ways of reaching a communicative goal: belittling the other. Public statements by various Lithuanian public figures (e.g., cultural figures, politicians, officials) were collected from news portals covering 2019-2023 and analyzed using both quantitative and qualitative approaches. First, the problematic aspects of language policy about which public figures complain were identified. Then, instances in which public figures take a defensive position were analyzed: how they express this position and what it reveals about Lithuanian culture. To reveal how polarization is constructed, two tasks were set: a) determine which impoliteness strategies are used throughout the process of creating polarization, and b) analyze how they are expressed verbally (e.g., as advice, an offer, etc.). The findings demonstrate how concepts from impoliteness theory can be applied to analyzing polarization in Lithuanian society on issues related to the state language policy.

Keywords: impoliteness, Lithuanian language policy, polarization, impoliteness strategies

Procedia PDF Downloads 49
16618 Economic Assessment of CO2-Based Methane, Methanol and Polyoxymethylene Production

Authors: Wieland Hoppe, Nadine Wachter, Stefan Bringezu

Abstract:

Carbon dioxide (CO2) utilization might be a promising way to substitute fossil raw materials such as coal, oil or natural gas as the carbon source of chemical production. While first life cycle assessments indicate a positive environmental performance for CO2-based process routes, commercialization of CO2 has so far been limited by several economic obstacles. We therefore analyzed the economic performance of three CO2-based chemicals on a cradle-to-gate basis: methane and methanol as basic chemicals, and polyoxymethylene as a polymer. Our approach is oriented towards life cycle costing. The focus lies on the cost drivers of CO2-based technologies and on options to stimulate a CO2-based economy by changing regulatory factors. In this way, we analyze various modes of operation and give an outlook on potentially cost-effective developments over the next decades. Biogas, waste gases from a cement plant, and flue gases from a waste incineration plant are considered as CO2 sources. The energy needed to convert CO2 into hydrocarbons via electrolysis is assumed to be supplied by wind power, which is increasingly available in Germany. Economic data originate from both industrial processes and process simulations. The results indicate that CO2-based production technologies are not competitive with conventional production methods under present conditions, mainly due to high electricity generation costs and regulatory factors such as the German Renewable Energy Act (EEG). While the decrease in production costs of CO2-based chemicals may be limited over the next decades, modifying the relevant regulatory factors could promote earlier commercialization.

Keywords: carbon capture and utilization (CCU), economic assessment, life cycle costing (LCC), power-to-X

Procedia PDF Downloads 283
16617 Evaluating and Reducing the Impact of Aircraft Technical Delays and Cancellations on Operational Reliability: A Case Study of an Airline Operator

Authors: Adel A. Ghobbar, Ahmad Bakkar

Abstract:

Although special care is given to maintenance, aircraft systems fail, and these failures cause delays and cancellations. The occurrence of delays and cancellations affects operators and manufacturers negatively. To reduce technical delays and cancellations, one must be able to determine the systems most responsible for them. The goal of this research is to find a method for identifying the most expensive delay- and cancellation-causing systems for airline operators. After research identifying the relevant information, a predictive model was introduced to forecast failures and their impact. Data were obtained from the manufacturers' service reliability team database. Subsequently, delay and cancellation evaluation methods were identified; no cost estimation methods were used, due to their complexity. The developed model accounts for the frequency of delays and cancellations and uses weighting factors, based on customer experience, to indicate the severity of their duration. The data analysis showed that delay and cancellation events are not seasonal and do not follow any specific trend. Using the weighting factors influences the shortlist over short periods (monthly) but not over the analyzed period of three years. The landing gear and the navigation system are among the top three factors causing delays and cancellations for all three aircraft types. The results confirm that cooperation between certain operators and the manufacturer reduces the impact of delays and cancellations.
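The frequency-plus-severity-weighting idea can be sketched as a simple scoring function. The duration bands and weights below are our own assumptions; the paper derives its weighting factors from customer experience:

```python
# Each system's events are counted, and each event's duration is mapped
# to a severity weight; a system's score is the weighted event total.
# Bands, weights, and event logs below are illustrative assumptions.
DURATION_WEIGHTS = [      # (max duration in minutes, weight)
    (15, 1.0),            # short delay
    (60, 2.0),            # significant delay
    (float("inf"), 4.0),  # long delay or cancellation
]

def severity_weight(duration_min):
    """Map an event's duration (minutes) to a severity weight."""
    for limit, weight in DURATION_WEIGHTS:
        if duration_min <= limit:
            return weight

def system_score(event_durations):
    """Frequency-and-severity score for one aircraft system."""
    return sum(severity_weight(d) for d in event_durations)

landing_gear = [10, 45, 120, 30]    # hypothetical event log (minutes)
navigation = [5, 12]
print(system_score(landing_gear))   # 1 + 2 + 4 + 2 = 9.0
print(system_score(navigation))     # 1 + 1 = 2.0
```

Ranking systems by such a score produces the kind of shortlist the paper uses to flag systems like the landing gear.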

Keywords: reliability, availability, delays & cancellations, aircraft maintenance

Procedia PDF Downloads 125
16615 Final Account Closing in Construction Projects: The Use of Supply Chain Management to Reduce Delays

Authors: Zarabizan Zakaria, Syuhaida Ismail, Aminah Md. Yusof

Abstract:

The project management process runs from the planning stage to completion (handover of buildings, preparation of the final accounts and closing of the balance). This process is not easy to implement efficiently and effectively. Delay is a major problem in construction projects and has been blamed mainly on the inefficient traditional construction practices that continue to dominate the industry. Several factors contribute: the construction technology environment, sophisticated designs, and constantly changing customer demands that influence management practice directly or indirectly. Among the identified influences are the physical environment, the social environment, the information environment, and the political and moral atmosphere. This paper therefore sets out to determine the problems and issues in final account closing in construction projects; it establishes the need to embrace Supply Chain Management (SCM) and elucidates the need for, and strategies towards, a delay reduction framework. At the same time, the paper provides effective measures to avoid, or at least reduce to an optimum level, such delays. Allowing problems in the closing declaration to occur without proper monitoring and control can negatively affect the cost and time of delivery to the end user. It can also damage the reputation or image of the agency/department that manages the contract's implementation and, consequently, reduce customers' trust in that agency/department. It is anticipated that the findings reported here can address the root contributors to delay and apply SCM tools to mitigate them for the better delivery of construction projects.

Keywords: final account closing, construction project, construction delay, supply chain management

Procedia PDF Downloads 357
16615 Prevalence of Cerebral Microbleeds in Apparently Healthy, Elderly Population: A Meta-Analysis

Authors: Vidishaa Jali, Amit Sinha, Kameshwar Prasad

Abstract:

Background and Objective: Cerebral microbleeds are frequently found in healthy elderly individuals. We performed a meta-analysis to determine the prevalence of cerebral microbleeds in the apparently healthy elderly population and to determine the effect of age, smoking and hypertension on their occurrence. Methods: Relevant literature up to March 2016 was searched using electronic databases such as MEDLINE, EMBASE, PubMed, the Cochrane database and Google Scholar to identify studies on the prevalence of cerebral microbleeds in the general elderly population. STATA version 13 was used for analysis. A fixed effect model was used when heterogeneity was less than 50%; otherwise, a random effects model was used. Meta-regression was performed to check for any effect of important variables such as age, smoking and hypertension. Selection Criteria: We included cross-sectional studies performed in apparently healthy elderly people older than 50 years. Results: The pooled proportion of cerebral microbleeds in the healthy population was 12% (95% CI, 0.11 to 0.13). No significant effect of age on the prevalence of cerebral microbleeds was found (p = 0.99). A linear relationship between hypertension and the prevalence of cerebral microbleeds was observed, but it was not statistically significant (p = 0.16). Similarly, a linear relationship between smoking and the prevalence of cerebral microbleeds was observed, but it too was not statistically significant (p = 0.21). Conclusion: Cerebral microbleeds are present in more than 10% of apparently healthy elderly individuals.
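A fixed effect (inverse-variance) pooled proportion, the model used above when heterogeneity is below 50%, can be sketched as follows. The study counts are invented and chosen merely to reproduce a 12% pooled value:

```python
import math

def pooled_proportion(events, totals):
    """Fixed effect (inverse-variance) pooled proportion across studies,
    with a normal-approximation 95% CI. Inputs are per-study event
    counts and sample sizes (the counts here are illustrative)."""
    weights, estimates = [], []
    for e, n in zip(events, totals):
        p = e / n
        var = p * (1 - p) / n      # binomial variance of the proportion
        weights.append(1 / var)    # inverse-variance weight
        estimates.append(p)
    w_sum = sum(weights)
    pooled = sum(w * p for w, p in zip(weights, estimates)) / w_sum
    se = math.sqrt(1 / w_sum)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

p, ci = pooled_proportion([24, 36, 60], [200, 300, 500])
print(round(p, 3), [round(x, 3) for x in ci])
```

Larger studies get proportionally more weight, which is what distinguishes this pooling from a simple average of study proportions.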

Keywords: apparently healthy, elderly, prevalence, cerebral microbleeds

Procedia PDF Downloads 283
16614 The Impact of Mergers and Acquisitions on Financial Deepening in the Nigerian Banking Sector

Authors: Onyinyechi Joy Kingdom

Abstract:

Mergers and Acquisitions (M&A) have been proposed as a mechanism through which problems associated with inefficiency or poor performance in financial institutions can be addressed. The aim of this study is to examine the proposition that the recapitalization of banks, which encouraged Mergers and Acquisitions in the Nigerian banking system, would strengthen the domestic banks and improve financial deepening and the confidence of depositors. Hence, this study examines the impact of the 2005 M&A in the Nigerian banking sector on financial deepening using a mixed-methods (quantitative and qualitative) approach. The quantitative part of the study utilised annual time series for a financial deepening indicator for the period 1997 to 2012, while the qualitative part adopted semi-structured interviews to collect data from three merged banks and three stand-alone banks in order to explore, understand and complement the quantitative results. Furthermore, a framework thematic analysis is employed to analyse the themes developed, using NVivo 11 software. On the quantitative side, findings from the equality of means test (EMT) suggest that M&A have a significant impact on financial deepening. However, this method is not robust, given its weak validity: it does not control for other potential factors that may determine financial deepening. Thus, to control for such factors, a Multiple Regression Model (MRM) and Interrupted Time Series Analysis (ITSA) were applied. The coefficient for the M&A dummy turned negative and insignificant in the MRM. In addition, the estimated post-intervention linear trend from ITSA suggests that after M&A the level of financial deepening decreased annually; however, this decrease was statistically insignificant. Similarly, on the qualitative side, the interview results supported the quantitative findings from ITSA and MRM.
The results suggest that interest rates should fall when the capital base is increased, in order to improve financial deepening. Hence, this study contributes to the existing literature by highlighting the importance of other factors that may affect financial deepening and the economy when policies intended to enhance bank performance are made. In addition, this study should provide monetary authorities with valuable policy instruments when formulating policies to strengthen the Nigerian banking sector and the economy.
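The segmented-regression form of ITSA used in the study can be sketched as follows. The design matrix (intercept, pre-intervention trend, level shift, post-intervention trend change), the 1997–2012 window and the 2005 intervention year follow the abstract; the data in the test are synthetic and purely illustrative.

```python
import numpy as np

def itsa_design(years, intervention_year):
    """Build the segmented-regression design matrix for an interrupted
    time series: intercept, pre-trend, level shift at the intervention,
    and post-intervention change in trend."""
    t = np.arange(len(years), dtype=float)
    t0 = years.index(intervention_year)
    D = (t >= t0).astype(float)  # post-intervention dummy
    return np.column_stack([np.ones_like(t), t, D, (t - t0) * D])

def fit_itsa(years, y, intervention_year):
    """Ordinary least squares fit; returns
    [intercept, pre-trend, level change, trend change]."""
    X = itsa_design(years, intervention_year)
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta
```

The coefficient on the final column is the post-intervention change in the annual trend of the deepening indicator; in the study this estimate was negative but statistically insignificant.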

Keywords: mergers and acquisitions, recapitalization, financial deepening, efficiency, financial crisis

Procedia PDF Downloads 388
16613 Methodology of the Turkey’s National Geographic Information System Integration Project

Authors: Buse A. Ataç, Doğan K. Cenan, Arda Çetinkaya, Naz D. Şahin, Köksal Sanlı, Zeynep Koç, Akın Kısa

Abstract:

With their spatial data reliability, interpretation and querying capabilities, Geographic Information Systems make significant contributions to scientists, planners and practitioners. Geographic information systems have received great attention in today's digital world, and their use is growing rapidly in both scope and efficiency. Access to and use of current and accurate geographic data, the most important components of a Geographic Information System, has become a necessity rather than a mere need for sustainable and economic development. This project aims to enable the sharing of data collected by public institutions and organizations on a web-based platform. Within the scope of the project, the INSPIRE (Infrastructure for Spatial Information in the European Community) data specifications are taken as a road map. In this context, Turkey's National Geographic Information System (TUCBS) Integration Project supports the sharing of spatial data among 61 pilot public institutions in compliance with defined national standards. In this paper, prepared by members of the TUCBS Integration Project team, the technical process is explained with a detailed methodology. The main technical stages of the project consist of Geographic Data Analysis, Geographic Data Harmonization (Standardization), Web Service Creation (WMS, WFS) and Metadata Creation and Publication. The paper conveys, step by step, the integration process carried out to make the data produced by the 61 institutions available through the National Geographic Data Portal (GEOPORTAL).
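As a concrete illustration of the Web Service Creation stage, the snippet below composes a standard OGC WMS 1.3.0 GetMap request URL. The endpoint, layer name and bounding box are hypothetical placeholders, not the actual TUCBS GEOPORTAL service.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, size, crs="EPSG:4326"):
    """Compose an OGC WMS 1.3.0 GetMap request URL.
    `base_url` is a hypothetical geoportal endpoint, not the real
    TUCBS service address; bbox is (min_x, min_y, max_x, max_y)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)
```

A WFS GetFeature request is built the same way, with `SERVICE=WFS` and a `TYPENAMES` parameter instead of `LAYERS`.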

Keywords: data specification, geoportal, GIS, INSPIRE, Turkish National Geographic Information System, TUCBS, Turkey's national geographic information system

Procedia PDF Downloads 137
16612 An Exploration of Inclusive Education Settings in the Context of Saudi Arabia: Stakeholder Perspectives

Authors: Nourah Alshalhoub

Abstract:

As Saudi Arabia is one of the countries moving toward more inclusive schools, few researchers have yet examined the new model of inclusive practice introduced by the Tatweer project. Tatweer is an initiative supported by the Saudi government to develop education, with a particular focus on inclusion. This ongoing doctoral work aims to characterise the inclusive practice that Tatweer introduced in order to include students with different abilities effectively. Because stakeholders are key to the implementation of inclusive education, the study's goal is to explore their understandings and perspectives. This study considers, from different angles, the perspectives of stakeholders who are involved in and influential on the implementation of the practice. Tatweer project managers, head teachers, teachers and teaching assistants will be interviewed to find out how they understand the concept of inclusive education and what perspectives they hold. Drawing on this material, the work inquires into what inclusion and inclusive practice mean within Tatweer and to what extent this educational model lets students with different abilities be more included. Four primary schools in Riyadh were purposively selected, and data will be collected through semi-structured interviews. The semi-structured interview was selected as the study tool because it is a relevant and helpful method for understanding the thoughts, views and beliefs of the stakeholders individually, and for investigating issues more thoroughly in the context of Saudi Arabia.

Keywords: inclusive education, perspective, understanding, definition, inclusion

Procedia PDF Downloads 288
16611 Bridging Educational Research and Policymaking: The Development of Educational Think Tank in China

Authors: Yumei Han, Ling Li, Naiqing Song, Xiaoping Yang, Yuping Han

Abstract:

Educational think tanks are widely regarded as a significant part of a nation's soft power to promote the scientific and democratic quality of educational policy making, and they play the critical role of bridging educational research in higher institutions and educational policy making. This study explores the concept, functions and significance of educational think tanks in China and conceptualizes a three-dimensional framework to analyze approaches for transforming research-based higher institutions into effective educational think tanks that serve educational policy making nationwide. Since 2014, the Ministry of Education of the P.R. China has been promoting a strategy of developing new types of educational think tanks in higher institutions, and this strategy has been put on the agenda of the 13th Five-Year Plan for National Education Development released in 2017. In this context, a growing number of scholars are conducting studies that put forth strategies for promoting the development and transformation of new educational think tanks to serve the educational policy-making process. Based on literature synthesis, policy text analysis, and analysis of theories about the policy-making process and the relationship between educational research and policy making, this study constructed a three-dimensional conceptual framework to address the following questions: (a) what are the new features of educational think tanks in the new era compared with traditional think tanks; (b) what are the functional objectives of the new educational think tanks; (c) what are the organizational patterns and mechanisms of the new educational think tanks; and (d) through what approaches can traditional research-based higher institutions be developed or transformed into think tanks that effectively serve the educational policy-making process?
The authors adopted a case study approach on five influential education policy study centers affiliated with top higher institutions in China and applied the three-dimensional conceptual framework to analyze their functional objectives, their organizational patterns, and the academic pathways researchers use to contribute to the development of think tanks serving the education policy-making process. Data were mainly collected through interviews with center administrators, leading researchers and academic leaders in the institutions. Findings show that: (a) higher-institution-based think tanks serve multi-level objectives, providing evidence, theoretical foundations, strategies, or evaluation feedback for critical problem solving or policy making at the national, provincial, and city/county levels; (b) higher-institution-based think tanks organize various types of research programs over different time spans to serve different phases of policy planning, decision making, and policy implementation; (c) in order to transform research-based higher institutions into educational think tanks, the institutions must promote a paradigm shift toward issue-oriented field studies, large-scale data mining and analysis, empirical studies, and trans-disciplinary research collaborations; and (d) the five cases showed distinctive features in their ways of constructing think tanks, yet they also exposed obstacles and challenges such as the independence of the think tanks, the discourse shift from academic papers to consultancy reports for policy makers, weaknesses in empirical research methods, and a lack of experience in trans-disciplinary collaboration. The authors finally put forth implications for think tank construction in China and abroad.

Keywords: education policy-making, educational research, educational think tank, higher institution

Procedia PDF Downloads 154
16610 A Phenomenological Approach to Computational Modeling of Analogy

Authors: José Eduardo García-Mendiola

Abstract:

In this work, a phenomenological approach to the computational modeling of analogy processing is carried out. The paper considers the structure of analogy, based on the possibility of grounding the genesis of its elements in Husserl's genetic theory of association. Among the particular processes that take place in reaching analogical inferences, one is crucial for enabling efficient retrieval of base cases from long-term memory, namely analogical transference grounded on familiarity. In general, it has been argued that analogical reasoning is a way by which a conscious agent tries to determine or define a certain scope of objects and the relationships between them using previous knowledge of another, familiar domain of objects and relations. However, if a complete description of the analogy process is sought, a deeper consideration of a phenomenological nature is required insofar as its simulation by computational programs is the aim. One would also get an idea of how complex it would be to give a fully computational account of the elements of analogy. In fact, familiarity is not the result of a mere chain of repetitions of objects or events; it is generated insofar as the object, attribute, or event in question can be integrated into a context that takes shape as functionalities and functional perspectives on the object are defined. Its familiarity is not generated by identifying its parts or objective determinations as if they were isolated from those functionalities and perspectives. Rather, at the core of such familiarity between entities of different kinds lies the way they are functionally encoded.
Hoping to make deeper inroads into these topics, this essay suggests that cognitive-computational perspectives can build on the phenomenological projection of the analogy process, both by reviewing achievements already obtained and by exploring new theoretical-experimental configurations, toward the implementation of analogy models in special-purpose as well as general-purpose machines.

Keywords: analogy, association, encoding, retrieval

Procedia PDF Downloads 113
16609 The Influence of Aerobic Physical Exercise with Different Frequency to Concentration of Vascular Endothelial Growth Factor in Brain Tissue of Wistar Rat

Authors: Rostika Flora, Muhammad Zulkarnain, Syokumawena

Abstract:

Background: Aerobic physical exercise is recommended to keep the body fit and healthy, although exercise itself increases body metabolism and oxygen consumption and can lead to tissue hypoxia. Oxygen pressure can serve as a regulator of Vascular Endothelial Growth Factor (VEGF). Hypoxia increases gene expression of VEGF through upregulation of HIF-1. VEGF is involved in regulating the angiogenesis process. Aerobic physical exercise can increase the concentration of VEGF in the brain and thereby enable angiogenesis. We have investigated the influence of aerobic physical exercise on the VEGF concentration in the brains of Wistar rats. Methods: This was an experimental study using a post-test-only control group design. The independent t-test was used as the statistical test. The samples were twenty-four Wistar rats (Rattus norvegicus) divided into four groups: group P1 (control group), group P2 (once-a-week exercise), group P3 (three-times-a-week exercise), and group P4 (seven-times-a-week exercise). Groups P2, P3, and P4 ran on a treadmill at a speed of 20 m/minute for 30 minutes. The concentration of VEGF was determined by ELISA. Results: There was a significant increase of VEGF in the treatment groups compared with the control group (p < 0.05). The maximum increase was found in group P2 (129.02 ± 64.49) and the minimum increase in group P4 (96.98 ± 11.20). Conclusion: The frequency of aerobic physical exercise influenced the concentration of Vascular Endothelial Growth Factor (VEGF) in the brain tissue of Rattus norvegicus.
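The group comparison above relies on the independent t-test; a minimal sketch of the Welch variant, computed from group summary statistics, is given below. The group size of six per group is our assumption for illustration (24 rats across four groups); the abstract does not state it, and the numbers in the test are placeholders, not the study's raw data.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and approximate degrees of freedom from
    group summary statistics (means, standard deviations, sizes).
    Suitable when group variances are unequal, as the reported
    SDs (64.49 vs 11.20) suggest here."""
    se1, se2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df
```

For example, `welch_t(129.02, 64.49, 6, 96.98, 11.20, 6)` would compare group P2 against group P4 under the assumed group sizes.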

Keywords: brain tissue, hypoxia, physical exercises, vascular endothelial growth factor

Procedia PDF Downloads 484
16608 Reliability of Dissimilar Metal Soldered Joint in Fabrication of Electromagnetic Interference Shielded Door Frame

Authors: Rehan Waheed, Hasan Aftab Saeed, Wasim Tarar, Khalid Mahmood, Sajid Ullah Butt

Abstract:

Electromagnetic Interference (EMI) shielded doors made from brass extruded channels need to be welded to shielded enclosures to attain optimum shielding performance. Controlling welding-induced distortion is a problem when welding dissimilar metals like steel and brass. In this research, soldering of the steel-brass joint is proposed to avoid weld distortion. The material used for the brass channel is UNS C36000. The thickness of the brass is defined by the manufacturing process, i.e. extrusion, whereas the thickness of the shielded enclosure material (ASTM A36) can be varied to produce the joint between the dissimilar metals. Steel sections of different gauges were soldered to the brass using a 91% tin, 9% zinc solder, and the strength of the joint was measured by standard test procedures. It is observed that thin steel sheets produce a stronger bond with brass. The steel sections further need to be welded to the shielded enclosure steel sheets through the TIG welding process. Stresses and deformation in the vicinity of the soldered portion are calculated through FE simulation. Crack formation in the soldered area is also studied experimentally. It has been found that in thin sheets the deformation produced by the applied force is localized and has no effect on the soldered joint area, whereas in thick sheets profound cracks have been observed in the soldered joint. The shielding effectiveness of the EMI shielded door is compromised by these cracks. The shielding effectiveness of the specimens is tested and the results are compared.

Keywords: dissimilar metal, EMI shielding, joint strength, soldering

Procedia PDF Downloads 158
16607 Simulating Elevated Rapid Transit System for Performance Analysis

Authors: Ran Etgar, Yuval Cohen, Erel Avineri

Abstract:

One of the major challenges of transportation in medium-sized inner cities (such as Tel Aviv) is the last-mile solution. Personal rapid transit (PRT) seems an applicable candidate for this, as it combines the benefits of personal (car) travel with the operational benefits of transit. However, the investment required for a large-area PRT grid is significant, and there is a need to economically justify such an investment by correctly evaluating the grid's capacity. The main elements of PRT are small automated vehicles (sometimes referred to as podcars) operating on a network of specially built guideways. This research looks at a specific concept for an elevated PRT system. A literature review has revealed the drawbacks of existing PRT modelling and simulation approaches, mainly the lack of consideration of technical and operational features of the system (such as headways, acceleration, and safety issues); the detailed design of infrastructure (guideways, stations, and docks); the stochastic and seasonal characteristics of demand; and safety regulations, all of which have a strong effect on system performance. A highly detailed model of the system, developed in this research, applies discrete event simulation combined with an agent-based approach to represent the system elements and the podcars' movement logic. Applying a case study approach, the simulation model is used to study the capacity of the system, its expected throughput, its utilization, and the level of service (journey time, waiting time, etc.).
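The discrete-event logic described above can be illustrated with a deliberately minimal single-station sketch. The boarding rule, the fixed minimum headway, and the one-passenger-per-podcar assumption are simplifications of ours, not the authors' model, which is far more detailed.

```python
def simulate_station(arrivals, headway):
    """Toy discrete-event sketch of a single PRT station: passengers
    board one per podcar, and podcars depart no closer than `headway`
    seconds apart. Returns each passenger's waiting time, a basic
    level-of-service measure."""
    waits = []
    next_free = 0.0  # earliest time the next podcar may depart
    for t in sorted(arrivals):
        depart = max(t, next_free)
        waits.append(depart - t)
        next_free = depart + headway
    return waits
```

Even this toy version shows how the minimum headway caps station throughput at `1/headway` departures per second, which is one of the operational features the abstract argues earlier models neglected.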

Keywords: capacity, productivity measurement, PRT, simulation, transportation

Procedia PDF Downloads 159
16606 The Intersection of Art and Technology: Innovations in Visual Communication Design

Authors: Sareh Enjavi

Abstract:

In recent years, the field of visual communication design has seen a significant shift in the way that art is created and consumed, with the advent of new technologies such as virtual reality, augmented reality, and artificial intelligence. This paper explores the ways in which technology is changing the landscape of visual communication design and how designers are incorporating new technological tools into their artistic practices. The primary objective of this research paper is to investigate the ways in which technology is influencing the creative process of designers and artists in the field of visual communication design. The paper also aims to examine the challenges and limitations that arise from the intersection of art and technology in visual communication design, and to identify strategies for overcoming them. Drawing on examples from a range of fields, including advertising, fine art, and digital media, this paper highlights the innovations that are emerging as artists and designers use technology to push the boundaries of traditional artistic expression. The paper argues that embracing technological innovation is essential for the continued evolution of visual communication design: by exploring the intersection of art and technology, designers can create new visual experiences that engage and inspire audiences in new ways. The research also contributes to the theoretical and methodological understanding of the intersection of art and technology, a topic that has gained significant attention in recent years. Ultimately, this paper emphasizes the importance of embracing innovation and experimentation in the field of visual communication design.

Keywords: visual communication design, art and technology, virtual reality, interactive art, creative process

Procedia PDF Downloads 103
16605 Capillary Wave Motion and Atomization Induced by Surface Acoustic Waves under the Navier-Slip Condition at the Wall

Authors: Jaime E. Munoz, Jose C. Arcos, Oscar E. Bautista, Ivan E. Campos

Abstract:

The influence of the slippage phenomenon on the destabilization and atomization mechanisms induced via surface acoustic waves in a Newtonian, millimeter-sized drop deposited on a hydrophilic substrate is studied theoretically. By implementing the Navier-slip model and a lubrication-type approach in the equations that govern the dynamic response of a drop exposed to acoustic stress, a highly nonlinear evolution equation for the air-liquid interface is derived in terms of the acoustic capillary number and the slip coefficient. By numerically solving this evolution equation, the spatio-temporal deformation of the drop's free surface is obtained; in this context, atomization of the initial drop into micron-sized droplets is predicted by our numerical model once the acoustically driven capillary waves reach a critical value: the instability length. Our results show that slippage in systems with partial and complete wetting favors the formation of capillary waves at the free surface, which translates into a larger volume of liquid being atomized, in comparison with the no-slip case, over a given time interval. In consequence, slippage at the wall can improve the atomization rate of a drop exposed to a high-frequency acoustic field.
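For orientation, a generic lubrication-type thin-film equation with Navier slip has the form below. This is a textbook sketch under standard assumptions (one-dimensional film of thickness h, slip coefficient β, capillary driving only), not the authors' exact evolution equation, which additionally carries the acoustic forcing:

```latex
\frac{\partial h}{\partial t}
  + \frac{\partial}{\partial x}\!\left[
      \left(\frac{h^{3}}{3} + \beta\, h^{2}\right)
      \frac{1}{\mathrm{Ca}}\,
      \frac{\partial^{3} h}{\partial x^{3}}
    \right] = 0
```

Here Ca is the (acoustic) capillary number. The slip contribution βh² enlarges the mobility prefactor relative to the no-slip term h³/3, which is consistent with the finding that slippage promotes the growth of capillary waves.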

Keywords: capillary instability, lubrication theory, Navier-slip condition, SAW atomization

Procedia PDF Downloads 149
16604 The Study of Formal and Semantic Errors of Lexis by Persian EFL Learners

Authors: Mohammad J. Rezai, Fereshteh Davarpanah

Abstract:

Producing a text in a language which is not one's mother tongue can be a demanding task for language learners. Examining the lexical errors committed by EFL learners is a challenging area of investigation which can shed light on the process of second language acquisition. Despite the considerable number of investigations into grammatical errors, few studies have tackled the formal and semantic errors of lexis committed by EFL learners. The current study aimed at examining Persian learners' formal and semantic errors of lexis in English. To this end, 60 students at three different proficiency levels were asked to write on 10 different topics in 10 separate sessions. In total, 600 essays written by Persian EFL learners were collected, acting as the corpus of the study. An error taxonomy comprising formal and semantic errors was selected to analyze the corpus. The formal category covered misselection and misformation errors, while the semantic errors were classified into lexical, collocational and lexicogrammatical categories. Each category was further classified into subcategories depending on the identified errors. The results showed that there were 2583 errors in the corpus of 9600 words, of which 2030 were formal errors and 553 semantic errors. Formal errors were the most frequent in the corpus (78.6%) and were most prevalent at the advanced level (42.4%); semantic errors (21.4%) were more frequent at the low-intermediate level (40.5%). Among formal errors of lexis, misformation errors accounted for the highest share (98%), while misselection errors constituted 2%. Additionally, no significant differences were observed among the three semantic error subcategories, namely collocational, lexical choice and lexicogrammatical. The results of the study can shed light on the challenges faced by EFL learners in the second language acquisition process.
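The headline proportions reported above can be re-derived directly from the raw counts; the short sketch below (our illustration, not part of the study) confirms that 2030 formal and 553 semantic errors indeed yield the 78.6% / 21.4% split.

```python
def error_shares(counts):
    """Recompute percentage shares (one decimal place) from raw
    error counts, as reported in corpus-based error analyses."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}
```

Running it on the study's figures, `error_shares({"formal": 2030, "semantic": 553})` reproduces the reported percentages from a total of 2583 errors.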

Keywords: collocational errors, lexical errors, Persian EFL learners, semantic errors

Procedia PDF Downloads 135
16603 In vitro and in vivo Infectivity of Coxiella burnetii Strains from French Livestock

Authors: Joulié Aurélien, Jourdain Elsa, Bailly Xavier, Gasqui Patrick, Yang Elise, Leblond Agnès, Rousset Elodie, Sidi-Boumedine Karim

Abstract:

Q fever is a worldwide zoonosis caused by the gram-negative obligate intracellular bacterium Coxiella burnetii. Following the recent outbreaks in the Netherlands, a hypervirulent clone was found to be the cause of severe human cases of Q fever. In livestock, the main clinical manifestations of Q fever are abortions. Although abortion rates differ between ruminant species, the virulence of C. burnetii remains understudied, especially in enzootic areas. In this study, the infectious potential of three C. burnetii isolates collected from French farms of small ruminants was compared with the reference strain Nine Mile (in phase II and in an intermediate phase) using an in vivo (CD1 mouse) model. Mice were challenged with 10⁵ live bacteria discriminated by propidium monoazide qPCR targeting the icd gene. After footpad inoculation, the spleen and popliteal lymph node were harvested at 10 days post-inoculation (p.i.). Strain invasiveness in the spleen and popliteal nodes was assessed by qPCR assays targeting the icd gene. Preliminary results showed that the avirulent strains (in phase II) failed to pass the popliteal barrier and thus to colonize the spleen. This model allowed a significant differentiation of strain invasiveness in a biological host and therefore the identification of distinct virulence profiles. In view of these results, we plan to go further by testing fifteen additional C. burnetii isolates from French farms of sheep, goats and cattle using the above-mentioned in vivo model. All 15 strains display distant MLVA (multiple-locus variable-number tandem repeat analysis) genotypic profiles. Five of the fifteen isolates will also be tested in vitro on ovine and bovine macrophage cells. Cells and supernatants will be harvested at days 1, 2, 3 and 6 p.i. to assess the in vitro multiplication kinetics of the strains.
In conclusion, our findings might support the surveillance of virulent strains and ultimately allow prophylaxis measures in livestock farms to be adapted.

Keywords: Q fever, invasiveness, ruminant, virulence

Procedia PDF Downloads 356
16602 Teacher Agency in Localizing Textbooks for International Chinese Language Teaching: A Case of Minsk State Linguistic University

Authors: Min Bao

Abstract:

The teacher is at the core of the three fundamental factors in international Chinese language teaching, the other two being the textbook and the method. The professional development of teachers comprises a self-renewing process characterized by knowledge impartment and self-reflection, in which individual agency plays a significant role. Agency makes a positive contribution to teachers' teaching practice and their lifelong learning. This study, taking Chinese teaching and learning at Minsk State Linguistic University in Belarus as an example, attempts to understand agency by investigating teachers' strategic adaptation of textbooks to meet local needs. Firstly, through in-depth interviews, teachers' comments on textbooks are collected and analyzed to disclose their strategies for adapting and localizing textbooks. Then, drawing on the theory of 'the chordal triad of agency', the paper reveals the process in which teacher agency is exercised, as well as its rationale. The results support the theory: given its temporal relationality, teacher agency is constructed through a combination of experiences, purposes and aims, and context, i.e., the projectivity, iteration and practical evaluation described in the theory. Evidence also suggests that the three dimensions operate with different weights; usually one or two dimensions have a greater effect on the construction of teacher agency. Finally, the paper offers four specific insights for teacher development in international Chinese language teaching: 1) when recruiting teachers, priority should be given to candidates majoring in Chinese language or international Chinese language teaching; 2) measures should be taken to assure the educational quality of those two majors at various levels; 3) pre-service teacher training programs should be tailored for improved quality; and 4) the management of overseas Confucius Institutes should be enhanced.

Keywords: international Chinese language teaching, teacher agency, textbooks, localization

Procedia PDF Downloads 153
16601 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks

Authors: Mazarine Roquet, Pierre Dewallef

Abstract:

The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to poor insulation of the building stock, but above all because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures, between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar or geothermal sources for the remaining heat supply. Fifth-generation networks, operating between 35°C and 15°C, offer the possibility to decrease transport losses even further, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps that generate low-temperature heat. The main objective of this contribution is to demonstrate, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one, thereby decarbonising the heat supply of the building stock.
The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature district heating network. To this end, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This exposes the delicate balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared in terms of global energy savings. The developed model can be used to assess the potential for energy and CO2 emission savings when retrofitting an existing network or when designing a new one.
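The trade-off between mass flow rate and pumping electricity mentioned above follows directly from the heat balance Q = ṁ·cp·ΔT and the pump affinity laws. The sketch below is our back-of-the-envelope illustration; the temperature pairs in the example are generic, not the campus network's actual set points.

```python
def mass_flow(q_kw, t_supply, t_return, cp=4.186):
    """Water mass flow (kg/s) required to deliver heat demand q_kw
    (kW) at a given supply/return temperature difference:
    Q = m_dot * cp * (t_supply - t_return), cp in kJ/(kg*K)."""
    return q_kw / (cp * (t_supply - t_return))

def relative_pump_power(m_dot, m_dot_ref):
    """Pump power scales roughly with the cube of the flow rate
    (pump affinity laws), so flow increases are penalized heavily."""
    return (m_dot / m_dot_ref) ** 3
```

Halving the supply-return ΔT (e.g. moving from an 80/60°C to a 35/25°C regime at the same demand) doubles the required flow and, by the cubic affinity law, multiplies the pumping power by roughly eight, which is why flow control becomes critical in low-temperature networks.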

Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating

Procedia PDF Downloads 68