1785 Representation of History in Cinema: Comparative Analysis of Turkish Films Based on the Conquest of Istanbul
Authors: Dilara Balcı Gulpinar
Abstract:
History, which can be defined as the narrative of the past, is a process of reproduction that takes place in the present. The scientific status of historiography is controversial, because the historian makes choices and comments; even the choice of subject distances him/her from objectivity. Historians may draw on current values, may not be able to afford to contradict society, and/or may face pressure from dominant groups. In addition, owing to gaps in documentation, interpretation and fiction are used to integrate historical events that seem disconnected. In this respect, there are views that relate history to the narrative arts rather than to the positive sciences. Popular historical films, which are visual historical representations, appeal to wider audiences by taking advantage of visuality, dramatic fictional narrative, various effects, music, stars, and other populist elements. The historical film, which does not claim to be scientific and even has the freedom to distort historical reality, can be perceived as reality itself and becomes an indispensable resource for individual and social memory. The ideological discourse of popular films is not only impressive and manipulative but also changeable. Socio-cultural and political changes can transform the representation of history in films sharply and rapidly. In line with this hypothesis, this study examines Turkish historical films about the conquest of Istanbul, using methods of historical and social analysis. İstanbul’un Fethi (Conquest of Istanbul, Aydin Arakon, 1953), Kuşatma Altında Aşk (Love Under Siege, Ersin Pertan, 1997), and Fetih 1453 (Conquest 1453, Faruk Aksoy, 2012) are the only three films in Turkish cinema that revolve around the conquest, and therefore constitute the sample of this study. 
It has been determined that the real and fictional events, as well as the characters that are focused on or ignored, differ from one film to another. Such significant differences in the dramatic and cinematographic structure of these three films, shot in the 50s, 90s, and 2010s respectively, show that the representation of history in popular cinema has altered over the years, losing its claim to objectivity.
Keywords: cinema, conquest of Istanbul, historical film, representation
Procedia PDF Downloads 140
1784 Evaluating the Needs of PhD Students in Preparation of a Genre-Based English for Academic Purposes Course
Authors: Heba I. Bakry
Abstract:
Academic writing in tertiary education has always been a challenge for EFL learners. This proposed study aims at investigating the academic English language needs of PhD students and candidates studying humanities and social sciences at Cairo University. The research problem arises from the fact that most of them studied English as a foreign language (EFL) or for specific purposes (ESP) in their undergraduate years. They are hardly familiarized with the different academic genres, despite the fact that they use academic resources written in English and are required to publish a paper internationally. Upon understanding the conventions and constraints of academic writing, postgraduates will be able to interact with international academic spheres with ease. There is, thus, a need for them to be acquainted with the generally accepted features of the academic genres, such as academic papers and their part-genres (e.g., abstracts), in addition to occluded genres such as personal statements and recommendation letters. The lack of practice in many of these genres stems from the clear differences between the rhetoric and conventions of the students' native language, i.e., Arabic, and the target language of the academic context, i.e., English. Moreover, apart from the general culture represented ethno-linguistically, the learners' 'small' culture, represented in a national setting like Cairo University, is more defining than the general cultural affiliations associated with their nationality, race, or religion. The main research question of this proposed study is: What is the effect of teaching a genre-based EAP course on the research writing competence of PhD candidates? To reach an answer, the study will address the following sub-questions: 1. What are the Egyptian PhD candidates' perceived EAP needs? 2. 
What are the requisite academic research skills for Egyptian scholars? The study intends to assess the students’ needs as a step towards designing and evaluating an EAP course based on explaining and scrutinizing a variety of academic genres. Adopting a diagnostic approach, the needs assessment uses quantitative data collected through questionnaires and qualitative data assembled from semi-structured interviews with the students and their teachers, in addition to non-participant observations of a convenience sample.
Keywords: course design, English for academic purposes, genre-based, needs assessment
1783 Changes in the Fecal Microbiome of Periparturient Dairy Cattle and Associations with the Onset of Salmonella Shedding
Authors: Lohendy Munoz-Vargas, Stephen O. Opiyo, Rose Digianantonio, Michele L. Williams, Asela Wijeratne, Gregory Habing
Abstract:
Non-typhoidal Salmonella enterica is a zoonotic pathogen of critical importance in animal and public health. The persistence of Salmonella on farms affects animal productivity and health and represents a risk for food safety. The intestinal microbiota plays a fundamental role in the colonization and invasion of this ubiquitous microorganism. To overcome the colonization resistance imparted by the gut microbiome, Salmonella uses invasion strategies and the host inflammatory response to survive, proliferate, and establish infections with diverse clinical manifestations. Cattle serve as reservoirs of Salmonella, and periparturient cows have a high prevalence of Salmonella shedding; however, to the authors' best knowledge, little is known about the association between the gut microbiome and the onset of Salmonella shedding during the periparturient period. Thus, the objective of this study was to assess the association between changes in bacterial communities and the onset of Salmonella shedding in cattle approaching parturition. In a prospective cohort study, fecal samples from 98 dairy cows originating from four different farms were collected at four time points relative to calving (-3 wks, -1 wk, +1 wk, +3 wks). All 392 samples were cultured for Salmonella. Sequencing of the V4 region of the 16S rRNA gene on the Illumina platform was completed to evaluate the fecal microbiome in a selected sample subset. Analyses of microbial composition, diversity, and structure were performed according to time point, farm, and Salmonella onset status. Individual cow fecal microbiomes, predominated by the Bacteroidetes, Firmicutes, Spirochaetes, and Proteobacteria phyla, changed significantly before and after parturition. Microbial communities from different farms were distinguishable based on multivariate analysis. 
Although there were significant differences in some bacterial taxa between Salmonella-positive and -negative samples, our results did not identify differences in fecal microbial diversity or structure between cows with and without the onset of Salmonella shedding. These data suggest that determinants other than the significant changes in the fecal microbiome influence the periparturient onset of Salmonella shedding in dairy cattle.
Keywords: dairy cattle, microbiome, periparturient, Salmonella
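As an illustration of the kind of diversity comparison the abstract describes, the sketch below computes the Shannon alpha-diversity index for hypothetical pre- and post-calving counts of the four dominant phyla; the counts are illustrative assumptions, not data from the study:

```python
import math

def shannon_index(counts):
    """Shannon alpha-diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical per-cow counts for the four dominant phyla
# (Bacteroidetes, Firmicutes, Spirochaetes, Proteobacteria) -- illustrative only.
pre_calving = [420, 380, 90, 110]
post_calving = [600, 250, 50, 100]
print(round(shannon_index(pre_calving), 3), round(shannon_index(post_calving), 3))
```

A more even community (pre-calving here) yields a higher index, which is the sense in which parturition-associated shifts can register as diversity changes.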
1782 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System
Authors: Dong Seop Lee, Byung Sik Kim
Abstract:
In the Fourth Industrial Revolution, various intelligent technologies have been developed in many fields, and these artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, and it draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence to progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data are often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, and scanned or printed reports into electronic documents. The converted disaster data are then organized under a disaster code system as disaster information and stored in a disaster database system. Gathering and creating disaster information from unstructured data based on OCR is an important element of smart disaster management. In this work, a character recognition rate of over 90% was achieved for Korean characters by using an upgraded OCR; since the recognition rate depends on the font, size, and special symbols of the characters, it was improved through a machine learning algorithm. 
The converted structured data are managed in a standardized disaster information form connected with the disaster code system, which allows the structured information to be stored and retrieved across the entire disaster cycle, including historical disaster progress, damages, response, and recovery. The expected outcome of this research is that it can be applied to smart disaster management and decision-making by combining artificial intelligence technologies with historical big data.
Keywords: disaster information management, unstructured data, optical character recognition, machine learning
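The paper does not specify how its recognition rate is scored; a minimal sketch of one common way to compute a character-level recognition rate against a ground-truth transcript, using Python's standard difflib, is shown below (the sample strings are hypothetical):

```python
from difflib import SequenceMatcher

def char_recognition_rate(ground_truth: str, ocr_output: str) -> float:
    """Share of ground-truth characters recovered in matching blocks of the OCR output."""
    if not ground_truth:
        return 1.0
    matcher = SequenceMatcher(None, ground_truth, ocr_output)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(ground_truth)

# Hypothetical report line with one misread character ('0' for 'o').
truth = "disaster report 2020-07"
scanned = "disaster rep0rt 2020-07"
rate = char_recognition_rate(truth, scanned)
print(rate >= 0.90)  # compare against a 90%-style target threshold
```

Rates computed this way over a labeled sample are what a machine-learning pass on fonts, sizes, and symbols would be tuned to improve.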
1781 Choosing the Green Energy Option: A Willingness to Pay Study of Metro Manila Residents for Solar Renewable Energy
Authors: Paolo Magnata
Abstract:
The energy market in the Philippines continues to have one of the highest electricity rates in the region, averaging US$0.16/kWh (PHP6.89/kWh) excluding VAT, as opposed to the regional average of US$0.13/kWh. The movement towards renewable energy, specifically solar energy, will be an expensive one, with the country’s energy sector providing feed-in-tariff rates as high as US$0.17/kWh (PHP8.69/kWh) for solar energy power plants. Increasing the share of renewables under the current regulatory regime would yield a three-fold increase in residential electricity bills. The issue lies in the uniform charge that consumers bear regardless of where the electricity is sourced, resulting in rates that reflect only costs and not consumers' preferences. If consumers were given the option to choose where their electricity comes from, a number of them might choose economically costlier sources of electricity because of the higher utility they derive from, and their willingness to pay for, environmentally friendly electricity. A contingent valuation survey was conducted on a sample representative of Metro Manila to elicit residents' willingness to pay for solar energy, and single-bounded and double-bounded dichotomous choice analyses were used to estimate the amount they were willing to pay. The results showed that Metro Manila residents are willing to pay a premium on top of their current electricity bill of US$5.71 (PHP268.42) – US$9.26 (PHP435.37) per month, which is approximately 0.97% - 1.29% of their monthly household income. It was also found that, besides higher household income, a higher level of self-perceived knowledge of environmental issues significantly increased the likelihood of a consumer paying the premium. 
Shifting towards renewable energy is an expensive move not only for the government, because of the high capital investment, but also for consumers. However, the Green Energy Option (a policy mechanism that gives consumers the option to decide where their electricity comes from) can balance this economic burden by moving from a uniform electricity rate to charging consumers equitably, based on their willingness to pay for renewably sourced energy.
Keywords: contingent valuation, dichotomous choice, Philippines, solar energy
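The abstract does not detail its estimator; one standard nonparametric treatment of single-bounded dichotomous-choice data is the Turnbull lower-bound mean WTP, sketched here with hypothetical bid amounts and acceptance shares (not the study's data):

```python
def turnbull_lower_bound(bids, yes_shares):
    """Nonparametric (Turnbull) lower-bound mean WTP from single-bounded
    dichotomous-choice data. `bids` must be ascending and `yes_shares`
    (the share of respondents accepting each bid) non-increasing."""
    mean_wtp = 0.0
    for j, bid in enumerate(bids):
        next_share = yes_shares[j + 1] if j + 1 < len(bids) else 0.0
        # Respondents with WTP between this bid and the next are valued at this bid.
        mean_wtp += bid * (yes_shares[j] - next_share)
    return mean_wtp

# Hypothetical monthly premium bids (PHP) and acceptance shares -- illustrative only.
bids = [100, 200, 300, 450]
yes_shares = [0.80, 0.55, 0.35, 0.15]
print(turnbull_lower_bound(bids, yes_shares))
```

Parametric single- and double-bounded models (as used in the study) would instead fit a logit/probit to the yes/no responses, but the interval logic is the same.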
1780 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping
Authors: Masato Saeki
Abstract:
Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in a cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impacts of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many experimental studies have shown that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step; new algorithms are therefore needed to improve the computational efficiency of the DEM. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the granular particles in each divided area of the damper container behave in the same way, the contact force of the primary system with all particles can be taken to be equal to the product of the number of divided damper areas and the contact force of the primary system with the granular materials in one divided area. This makes it possible to reduce the calculation time considerably. 
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level
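The aggregation step can be sketched as follows: a generic linear spring-dashpot contact law stands in for the paper's (unspecified) contact model, and the wall force is scaled by the number of identical sub-areas, which is the acceleration assumption described above. All parameter values are illustrative:

```python
def contact_force(overlap, overlap_rate, k=1.0e5, c=50.0):
    """Linear spring-dashpot normal contact force used in DEM:
    F = k*overlap + c*overlap_rate while in contact, else zero.
    k (N/m) and c (N*s/m) are illustrative stand-in parameters."""
    return k * overlap + c * overlap_rate if overlap > 0.0 else 0.0

def total_wall_force(per_area_force, n_divisions):
    """The acceleration assumption: particles in each of the n_divisions
    identical sub-areas behave alike, so the force on the primary system
    is n_divisions times the force computed for one sub-area."""
    return n_divisions * per_area_force

f_one = contact_force(overlap=1.0e-4, overlap_rate=0.02)
print(total_wall_force(f_one, n_divisions=8))
```

Only one sub-area's particle equations need to be integrated per time step, which is where the reported speed-up comes from.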
1779 Assessment Literacy Levels of Mathematics Teachers to Implement Classroom Assessment in Ghanaian High Schools
Authors: Peter Akayuure
Abstract:
One key determinant of the quality of mathematics learning is the teacher’s ability to assess students adequately and effectively and to make assessment an integral part of instructional practice. If the mathematics teacher lacks the literacy required to perform classroom assessment roles, the true trajectory of learning success and attainment of curriculum expectations might be indeterminate. It is therefore important that educators and policymakers understand and seek ways to improve the literacy level of mathematics teachers to implement classroom assessments that meet curriculum demands. This study employed a descriptive survey design to explore the perceived levels of assessment literacy of mathematics teachers implementing classroom assessment within the school-based assessment framework in Ghana. A 25-item classroom assessment inventory on teachers’ assessment scenarios was adopted, modified, and administered to a purposive sample of 48 mathematics teachers from eleven senior high schools. Seven other items were included to collect further data on their self-efficacy towards assessment literacy. Data were analyzed using descriptive and bivariate correlation statistics. The results show that, on average, 48.6% of the mathematics teachers attained standard levels of assessment literacy. Specifically, 50.0% met standard one in choosing appropriate assessment methods, 68.3% reached standard two in developing appropriate assessment tasks, 36.6% reached standard three in administering, scoring, and interpreting assessment results, 58.3% reached standard four in making appropriate assessment decisions, 41.7% reached standard five in developing valid grading procedures, 45.8% reached standard six in communicating assessment results, and 36.2% reached standard seven in identifying unethical, illegal, and inappropriate uses of assessment results. 
Participants rated their self-efficacy beliefs in performing assessments as high, yet the relationship between participants’ assessment literacy scores and self-efficacy scores was weak and statistically non-significant. The study recommends that institutions training mathematics teachers or providing professional development should accentuate assessment literacy development to ensure standard assessment practices and quality instruction in mathematics education at senior high schools.
Keywords: assessment literacy, mathematics teacher, senior high schools, Ghana
1778 Links between Moral Distress of Registered Nurses and Factors Related to Patient Care at the End of Their Life: A Cross Sectional Survey
Authors: L. Laurs, A. Blazeviciene, D. Milonas
Abstract:
Introduction: Nursing as a profession is grounded in moral obligation. Nursing practice is grounded in ethical standards: to do no harm, to promote justice, to be accountable, and to provide safe and competent care. The nature of the nurse-patient therapeutic relationship requires acting on the patient's behalf. Moral distress consists of negative stress symptoms that occur in ethically charged situations that the nurse perceives as discordant with his or her professional values. Aim of the Study: The purpose of this study was to assess links between the moral distress of registered nurses and factors related to patient care at the end of life. Methods and Sample: A descriptive, cross-sectional, correlational design was applied in this study. Registered nurses were recruited from seven municipal multi-profile hospitals providing both general and specialized healthcare services in Lithuania (N=1055). Research instruments included two questionnaires: Obstacles and Facilitators at the End of Life Care and the Moral Distress Scale (revised). Results: Spearman’s correlation analysis was performed to assess the relationship between nurses' attitudes towards patient care at the end of life and the moral distress they experienced. Statistically significant correlations were identified between moral distress and the following factors related to patient end-of-life care: conversations with physicians about patients' end-of-life problems had a positive impact on job satisfaction, while moral distress increased in situations where patients were excluded from decisions about their treatment and nursing because their ability to assess the situation was questioned. Moral distress diminished where patient consciousness was not permanently suppressed by sedating medications and where patients were provided with all nursing care services. 
Conclusions: The moral distress of nurses is significantly related to end-of-life patient care and its determinants: moral distress increased with a lack of discussion with doctors about problem-solving and with the exclusion of patients from decision-making, and it diminished when sedating medications were not used to permanently suppress a patient's consciousness and when good care was provided for patients.
Keywords: moral distress, registered nurses, end of life, care
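Both instruments yield ordinal scores, which is why Spearman's rank correlation is the appropriate statistic; a minimal pure-Python sketch of how it is computed (average ranks with tie handling, then Pearson correlation of the ranks):

```python
def rank(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In practice a library routine (e.g. scipy.stats.spearmanr) would also return the p-value used to judge significance.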
1777 Ultrasound-Mediated Separation of Ethanol, Methanol, and Butanol from Their Aqueous Solutions
Authors: Ozan Kahraman, Hao Feng
Abstract:
Ultrasonic atomization (UA) is a useful technique for producing a liquid spray for various processes, such as spray drying. Ultrasound generates small droplets (a few microns in diameter) by disintegration of the liquid via cavitation and/or capillary waves, with low velocities and a narrow droplet size distribution. In recent years, UA has been investigated as an alternative for enabling or enhancing ultrasound-mediated unit operations, such as evaporation, separation, and purification. Previous studies on the UA separation of a solvent from a bulk solution were limited to ethanol-water systems; more investigations into ultrasound-mediated separation of other liquid systems are needed to elucidate the separation mechanism. This study was undertaken to investigate the effects of the operational parameters on the ultrasound-mediated separation of three miscible liquid pairs: ethanol-, methanol-, and butanol-water. A 2.4 MHz ultrasonic mister with a diameter of 18 mm and a rated power of 24 W was installed at the bottom of a custom-designed cylindrical separation unit. Air was supplied to the unit (3 to 4 L/min) as a carrier gas to collect the mist. The effects of the initial alcohol concentration, viscosity, and temperature (10, 30, and 50°C) on the atomization rates were evaluated. The alcohol concentration in the collected mist was measured with high-performance liquid chromatography and a refractometer, and the viscosity of the solutions was determined using a Brookfield digital viscometer. The alcohol concentration of the atomized mist depended on the feed concentration, feed rate, viscosity, and temperature. Increasing the temperature of the alcohol-water mixtures from 10 to 50°C increased the vapor pressure of both the alcohols and water, resulting in an increase in the atomization rates but a decrease in the separation efficiency. The alcohol concentration in the mist was higher than that of the alcohol-water equilibrium at all three temperatures. 
More importantly, for ethanol, the ethanol concentration in the mist went beyond the azeotropic point, which cannot be achieved by conventional distillation. Ultrasound-mediated separation is thus a promising non-equilibrium method for separating and purifying alcohols, which may yield significant energy reductions and process intensification.
Keywords: azeotropic mixtures, distillation, evaporation, purification, separation, ultrasonic atomization
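The two quantities behind the azeotrope claim are easy to make explicit: the enrichment of alcohol in the mist relative to the feed, and whether the mist composition exceeds the ethanol-water azeotrope (about 95.6 wt% ethanol, the ceiling for ordinary distillation). The feed and mist values below are hypothetical:

```python
ETHANOL_AZEOTROPE_WT = 0.956  # ethanol-water azeotrope, ~95.6 wt% ethanol

def enrichment_factor(feed_wt_frac, mist_wt_frac):
    """Alcohol mass fraction in the collected mist relative to the feed."""
    return mist_wt_frac / feed_wt_frac

def beyond_azeotrope(mist_wt_frac, azeotrope_wt=ETHANOL_AZEOTROPE_WT):
    """True when the mist composition exceeds what equilibrium distillation can reach."""
    return mist_wt_frac > azeotrope_wt

# Hypothetical run: a 10 wt% ethanol feed yielding a 25 wt% mist.
print(enrichment_factor(0.10, 0.25), beyond_azeotrope(0.97))
```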
1776 Prevalence of Seropositivity for Cytomegalovirus in Patients with Hereditary Bleeding Diseases in West Azerbaijan of Iran
Authors: Zakieh Rostamzadeh, Zahra Shirmohammadi
Abstract:
Human cytomegalovirus (HCMV) is a species of the genus Cytomegalovirus, which in turn is a member of the viral family known as Herpesviridae, the herpesviruses. Although they may be found throughout the body, HCMV infections are frequently associated with the salivary glands. HCMV infection typically goes unnoticed in healthy people but can be life-threatening for the immunocompromised, such as HIV-infected persons, organ transplant recipients, or newborn infants. After infection, HCMV is able to remain latent within the body over long periods. Cytomegalovirus (CMV) causes infection in the immunocompromised, in hemophilia patients, and in those who frequently receive blood transfusions. This study aimed at determining the prevalence of cytomegalovirus (CMV) antibodies in hemophilia patients. Materials and Methods: A retrospective observational study was carried out in Urmia, in the north-west of Iran. The study population comprised a sample of 50 hemophilic patients born after 1985 who had received blood factors in West Azerbaijan. The exclusion criteria were: drug abuse, high-risk sexual contacts, vertical transmission from mother to fetus, and suspicious needling. All samples were evaluated by ELISA, with the same kit and in the same laboratory. Results: Fifty hemophiliacs from the 250 patients registered with the Urmia Hemophilia Society were enrolled in the study, including 43 (86%) males and 7 (14%) females. The mean age of patients was 10.3 years, range 3 to 25 years. None of the patients had the risk factors mentioned above. Among the studied population, 34 (68%) had hemophilia A, 1 (2%) hemophilia B, 8 (16%) VWF deficiency, 3 (6%) factor VII deficiency, 1 (2%) factor V deficiency, 1 (2%) factor X deficiency, 1 (2%). The sera of the 50 patients were investigated for CMV-specific immunoglobulin G (IgG) and IgM: 91.89% of patients were anti-CMV IgG positive, and 40.54% were seropositive for anti-CMV IgM. 
Of the patients, 37.8% had serological evidence of reactivation, and 2.7% had primary infection. Discussion: There was no relationship between the antibody titer and drug abuse, high-risk sexual contacts, vertical transmission from mother to fetus, or suspicious needling.
Keywords: bioinformatics, biomedicine, cytomegalovirus, immunocompromise
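The abstract reports prevalences from a fixed sample of 50; for context, the textbook minimum-sample-size calculation for estimating a prevalence to a desired precision, n = z²p(1−p)/d², can be sketched as follows (the planning values are illustrative, not from the study):

```python
import math

def min_sample_size_for_prevalence(expected_p, margin, z=1.96):
    """Minimum n to estimate a prevalence of about expected_p to within
    +/- margin, at the confidence level implied by z (1.96 ~ 95%)."""
    return math.ceil((z ** 2) * expected_p * (1 - expected_p) / margin ** 2)

def seroprevalence(positives, total):
    """Simple point estimate of seroprevalence."""
    return positives / total

# Worst-case planning value p = 0.5 with a 5-percentage-point margin.
print(min_sample_size_for_prevalence(0.5, 0.05))
```

With a high expected seroprevalence (e.g. ~0.9 for IgG) the required n shrinks, since p(1−p) is largest at p = 0.5.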
1775 Optimization of Waste Plastic to Fuel Oil Plants' Deployment Using Mixed Integer Programming
Authors: David Muyise
Abstract:
Mixed Integer Programming (MIP) is an approach that involves the optimization of a range of decision variables in order to minimize or maximize a particular objective function. The main objective of this study was to apply the MIP approach to optimize the deployment of waste-plastic-to-fuel-oil processing plants in Uganda. The processing plants are meant to reduce plastic pollution by pyrolyzing waste plastic into a cleaner fuel that can power diesel/paraffin engines, so as (1) to reduce the negative environmental impacts associated with plastic pollution and (2) to narrow the energy gap by utilizing the fuel oil. A programming model was established and tested in two case study applications: small-scale applications in rural towns, and large-scale deployment across major cities in the country. In order to design the supply chain, optimal decisions on the types of waste plastic to be processed, the size, location, and number of plants, and the downstream fuel applications were made concurrently, based on the payback period, investor requirements for capital cost, and the production cost of fuel and electricity. The model draws on qualitative data gathered from waste plastic pickers at landfills and from potential investors, and on quantitative data obtained from primary research. The study found that a distributed system is suitable for small rural towns, whereas a decentralized system is only suitable for big cities. The small towns of Kalagi, Mukono, Ishaka, and Jinja were found to be ideal locations for the deployment of distributed processing systems, whereas the cities of Kampala, Mbarara, and Gulu were found to be the ideal locations in which to initially deploy the decentralized pyrolysis technology system. 
We conclude that the model findings will be most useful to investors, engineers, plant developers, and municipalities interested in waste-plastic-to-fuel processing in Uganda and elsewhere in developing economies.
Keywords: mixed integer programming, fuel oil plants, optimisation of waste plastics, plastic pollution, pyrolyzing
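The core MIP structure (binary choose-a-plant variables, a budget constraint, a benefit objective) can be illustrated by exhaustive search, which is exact for a handful of candidate sites; the site names are taken from the abstract, but the costs, benefits, and budget are hypothetical:

```python
from itertools import combinations

# Hypothetical candidate sites: (capital cost, annual net benefit), arbitrary units.
candidates = {
    "Kalagi": (50, 30),
    "Mukono": (60, 45),
    "Ishaka": (55, 25),
    "Jinja": (70, 60),
}

def best_deployment(sites, budget):
    """Exhaustive search over the binary choose-a-plant variables x_i in {0,1}
    (exact for small instances, mirroring the MIP): maximize total benefit
    subject to total capital cost <= budget."""
    names = list(sites)
    best_set, best_benefit = set(), 0
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(sites[s][0] for s in subset)
            benefit = sum(sites[s][1] for s in subset)
            if cost <= budget and benefit > best_benefit:
                best_set, best_benefit = set(subset), benefit
    return best_set, best_benefit

print(best_deployment(candidates, budget=130))
```

A real instance with many sites, plant sizes, and plastic types would be handed to a MIP solver rather than enumerated, but the decision structure is the same.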
1774 Thermoelectric Cooler as a Heat Transfer Device for Thermal Conductivity Test
Authors: Abdul Murad Zainal Abidin, Azahar Mohd, Nor Idayu Arifin, Siti Nor Azila Khalid, Mohd Julzaha Zahari Mohamad Yusof
Abstract:
A thermoelectric cooler (TEC) is an electronic component that uses the Peltier effect to create a temperature difference by transferring heat between two electrical junctions of two different types of materials. A TEC can also be used for heating, by reversing the direction of current flow, and even for power generation. A heat flow meter (HFM) is an instrument for measuring the thermal conductivity of building materials. During a test, water is used as the heat transfer medium to cool the HFM. Existing re-circulating coolers on the market are very costly, and the alternative is to use piped tap water to extract heat from the HFM; however, the tap water temperature is not low enough to enable heat transfer to take place. The operating temperature for the isothermal plates in the HFM is 40°C within a range of ±0.02°C; when the temperature exceeds this range, the HFM stops working and the test cannot be conducted. The aim of the research is to develop a low-cost but energy-efficient TEC prototype that enables heat transfer without compromising the function of the HFM. The objectives of the research are (a) to assess the potential of the TEC as a cooling device by evaluating its cooling rate, and (b) to determine the amount of water saved using the TEC compared to normal tap water. Four (4) Peltier sets were used, with two (2) sets serving as a pre-cooler. The cooling water is re-circulated from the reservoir into the HFM using a water pump. The thermal conductivity readings, the water flow rate, and the power consumption were measured while the HFM was operating. The measured data showed a decrease in the average cooling temperature difference (ΔTave) of 2.42°C and an average cooling rate of 0.031°C/min. The water savings accrued from using the TEC are projected to be 8,332.8 litres/year with the application of water re-circulation. The results suggest the prototype has achieved the stated objectives. 
Further research will include comparing the cooling rate of the TEC prototype against conventional tap water cooling and optimizing its design and performance in terms of size and portability. Possible applications of the prototype could also be expanded to portable storage for medicines and beverages.
Keywords: energy efficiency, thermoelectric cooling, pre-cooling device, heat flow meter, sustainable technology, thermal conductivity
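The two reported figures (cooling rate, water savings) are straightforward to derive from logged measurements; a sketch, where the temperature log, sampling interval, flow rate, and test schedule are all illustrative assumptions rather than the study's raw data:

```python
def average_cooling_rate(temps_c, interval_min):
    """Mean cooling rate (degC/min) from an evenly spaced temperature log."""
    span_min = interval_min * (len(temps_c) - 1)
    return (temps_c[0] - temps_c[-1]) / span_min

def annual_water_savings_l(flow_l_per_min, min_per_test, tests_per_year):
    """Tap water no longer sent to the drain once the TEC loop re-circulates it."""
    return flow_l_per_min * min_per_test * tests_per_year

# Hypothetical 10-minute log consistent with the reported ~0.031 degC/min rate.
log_c = [40.00, 39.84, 39.69]
print(average_cooling_rate(log_c, interval_min=5), annual_water_savings_l(2.0, 60, 100))
```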
1773 Academic Staff’s Perception and Willingness to Participate in Collaborative Research: Implication for Development in Sub-Saharan Africa
Authors: Ademola Ibukunolu Atanda
Abstract:
Research undertakings are meant to proffer solutions to issues and challenges in society; this justifies the need for research in the ivory towers. Multinational and non-governmental organisations, as well as foundations, commit financial resources to support research endeavours. In recent times, the direction and dimension of research undertakings encourage collaboration, whereby experts from different disciplines or specializations bring their expertise to bear on any identified problem, whether in the humanities or the sciences. However, the extent to which collaborative research undertakings are perceived and embraced by academic staff will determine the impact collaborative research has on society. To this end, this study investigated academic staff’s perception of, and willingness to be involved in, collaborative research for the purpose of proffering solutions to societal problems. The study adopted a descriptive research design. The population comprised academic staff in southern Nigeria, and the sample was drawn through a convenience sampling technique. The data were collected using a questionnaire titled “Perception and Willingness to Participate in Collaborative Research Questionnaire (PWPCRQ)”, administered using Google Forms. Data collected were analyzed using descriptive statistics of simple percentages, means, and charts. The findings showed that academic staff are ready to participate in collaborative research to a great extent (89%) and participate in collaborative research very often (51%). Academic staff were involved more in collaborative research among colleagues within their universities (1.98) than in inter-disciplinary collaboration (1.47) with colleagues outside Nigeria. Collaborative research was perceived to have an impact on development (2.5). 
Collaborative research offers members the following benefits: aggregation of views, the building of an extensive network of contacts, enhanced sharing of skills, facilitation of tackling complex problems, increased visibility of the research network and its citations, and promotion of funding opportunities. The study concluded that academic staff in universities in the South-West of Nigeria participate in collaborative research, but with colleagues within Nigeria rather than outside the country. Based on the findings, it was recommended that the management of universities in South-West Nigeria should encourage collaborative research with some incentives.
Keywords: collaboration, research, development, participation
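The descriptive analysis reported above (simple percentages and mean scores from a Likert-style questionnaire) can be sketched as follows. The response values below are invented for illustration only; the real PWPCRQ data are not reproduced here.

```python
# Hypothetical sketch of the descriptive statistics used above:
# percentages for categorical items, means for Likert-scale items.

def percentage(responses, value):
    """Share of respondents giving `value`, in percent."""
    return 100.0 * sum(1 for r in responses if r == value) / len(responses)

def mean(scores):
    """Mean Likert score (e.g. 1 = never ... 3 = very often)."""
    return sum(scores) / len(scores)

# invented willingness responses: 1 = not ready, 2 = somewhat, 3 = ready
willingness = [3, 3, 2, 3, 3, 1, 3, 3, 2, 3]
print(f"ready to collaborate: {percentage(willingness, 3):.0f}%")  # 70%

# invented frequency scores for intra-university collaboration (1-3 scale)
intra = [2, 2, 1, 3, 2]
print(f"mean intra-university score: {mean(intra):.2f}")  # 2.00
```

The same two helpers cover both kinds of figures quoted in the abstract (the 89%/51% percentages and the 1.98/1.47/2.5 mean scores).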
Procedia PDF Downloads 67
1772 Study of the Transport of ²²⁶Ra Colloidal in Mining Context Using a Multi-Disciplinary Approach
Authors: Marine Reymond, Michael Descostes, Marie Muguet, Clemence Besancon, Martine Leermakers, Catherine Beaucaire, Sophie Billon, Patricia Patrier
Abstract:
²²⁶Ra is one of the radionuclides resulting from the disintegration of ²³⁸U. Due to its half-life (1600 y) and its high specific activity (3.7 x 10¹⁰ Bq/g), ²²⁶Ra is found at the ultra-trace level in the natural environment (usually below 1 Bq/L, i.e. 10⁻¹³ mol/L). Because of its decay into ²²²Rn, a radioactive gas with a shorter half-life (3.8 days) which is difficult to control and dangerous for humans when inhaled, ²²⁶Ra is subject to dedicated monitoring in surface waters, especially in the context of uranium mining. In natural waters, radionuclides occur in dissolved, colloidal or particulate forms. Due to the size of colloids, generally ranging between 1 nm and 1 µm, and their high specific surface areas, the colloidal fraction can be involved in the transport of trace elements, including radionuclides, in the environment. The colloidal fraction is not always easy to determine, and few existing studies focus on ²²⁶Ra. In the present study, a complete multidisciplinary approach is proposed to assess the colloidal transport of ²²⁶Ra. It includes water sampling by conventional filtration (0.2 µm) and the innovative Diffusive Gradient in Thin Films technique to measure the dissolved fraction (<10 nm), from which the colloidal fraction can be estimated. Suspended matter in these waters was also sampled and characterized mineralogically by X-ray diffraction, infrared spectroscopy and scanning electron microscopy. All of these data, which were acquired on a rehabilitated former uranium mine, were used to build a geochemical model with the geochemical calculation code PhreeqC to describe, as accurately as possible, the colloidal transport of ²²⁶Ra. Colloidal transport of ²²⁶Ra was found, for some of the sampling points, to account for up to 95% of the total ²²⁶Ra measured in water. Mineralogical characterization and associated geochemical modelling highlight the role of barite, a barium sulfate mineral well known to trap ²²⁶Ra in its structure.
Barite was shown to be responsible for the colloidal ²²⁶Ra fraction despite the presence of kaolinite and ferrihydrite, which are also known to retain ²²⁶Ra by sorption.
Keywords: colloids, mining context, radium, transport
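The size-fractionation logic described above (0.2 µm filtrate gives dissolved plus colloidal activity, DGT gives the truly dissolved <10 nm fraction, the colloidal fraction is estimated by difference) can be sketched in a few lines. The activities below are invented for illustration, not measured values from the study.

```python
# Colloidal-fraction estimate by difference between two size cutoffs:
#   0.2 um filtration -> "total" (dissolved + colloids < 0.2 um)
#   DGT               -> dissolved (< 10 nm)

def colloidal_fraction(a_filtered_02um, a_dgt):
    """Fraction of 226Ra activity carried by colloids (1 nm - 0.2 um)."""
    if a_filtered_02um <= 0:
        raise ValueError("total activity must be positive")
    colloidal = a_filtered_02um - a_dgt
    return max(colloidal, 0.0) / a_filtered_02um  # clamp measurement noise

# hypothetical sampling point: 0.80 Bq/L through 0.2 um, 0.04 Bq/L by DGT
print(f"colloidal share: {100 * colloidal_fraction(0.80, 0.04):.0f}%")  # 95%
```

Clamping at zero handles points where the DGT value slightly exceeds the filtrate value because of measurement uncertainty.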
Procedia PDF Downloads 165
1771 Mesoporous Na2Ti3O7 Nanotube-Constructed Materials with Hierarchical Architecture: Synthesis and Properties
Authors: Neumoin Anton Ivanovich, Opra Denis Pavlovich
Abstract:
Materials based on titanium oxide compounds are widely used in such areas as solar energy, photocatalysis, the food industry and hygiene products, biomedical technologies, etc. Demand for them has also formed in the battery industry (an example of this is the commercialization of Li4Ti5O12), where much attention has recently been paid to the development of next-generation systems and technologies, such as sodium-ion batteries. This dictates the need to search for new materials with improved characteristics, as well as for ways to obtain them that meet the requirements of scalability. One way to solve these problems can be the creation of nanomaterials, which often have a set of physicochemical properties that differ radically from the characteristics of their counterparts in the micro- or macroscopic state. At the same time, it is important to control the texture (specific surface area, porosity) of such materials. In view of the above, among other methods, the hydrothermal technique seems suitable, allowing a wide range of control over the conditions of synthesis. In the present study, a method was developed for the preparation of mesoporous nanostructured sodium trititanate (Na2Ti3O7) with a hierarchical architecture. The materials were synthesized by hydrothermal processing and exhibit a complex, hierarchically organized two-level architecture. At the first level of the hierarchy, the materials are represented by particles with a rough surface, and at the second level, by one-dimensional nanotubes. The products were found to have a high specific surface area and porosity with a narrow pore size distribution (about 6 nm). As is known, specific surface area and porosity are important characteristics of functional materials, which largely determine the possibilities and directions of their practical application. Electrochemical impedance spectroscopy data show that the resulting sodium trititanate has a sufficiently high electrical conductivity.
As expected, the synthesized complexly organized nanoarchitecture based on porous sodium trititanate may find practical application, for example, in the field of new-generation electrochemical storage and energy conversion devices.
Keywords: sodium trititanate, hierarchical materials, mesoporosity, nanotubes, hydrothermal synthesis
Procedia PDF Downloads 111
1770 Didactic Suitability and Mathematics Through Robotics and 3D Printing
Authors: Blanco T. F., Fernández-López A.
Abstract:
Nowadays, education, motivated by the new demands of the 21st century, acquires a dimension that converts the skills that new generations may need into a huge and uncertain set of knowledge too broad to be covered in its entirety. Within this set, and as tools to reach it, we find Learning and Knowledge Technologies (LKT). Thus, in order to prepare students for an ever-changing society in which the technological boom pervades everything, it is essential to develop digital competence. Nevertheless, LKT seem not to have found their place in the educational system. This work aims to go a step further in the research on the most appropriate procedures and resources for technological integration in the classroom. The main objective of this exploratory study is to analyze the didactic suitability (epistemic, cognitive, affective, interactional, mediational and ecological) of teaching and learning processes of mathematics with robotics and 3D printing. The analysis carried out is drawn from a STEAM (Science, Technology, Engineering, Art and Mathematics) project that has the Pilgrimage Way to Santiago de Compostela as a common thread. The sample is made up of 25 Primary Education students (10 and 11 years old). A qualitative design research methodology has been followed, and the sessions have been distributed according to the type of technology applied. Robotics has been focused on learning two-dimensional mathematical notions, while 3D design and printing have been oriented towards three-dimensional concepts. The data collection instruments used are evaluation rubrics, recordings, field notebooks and participant observation. The indicators of didactic suitability proposed by Godino (2013) have been used for the analysis of the data. In general, the results show a medium-high level of didactic suitability.
Among the indicators, high mediational and cognitive suitability stand out, which led to a better understanding of the positions and relationships of three-dimensional bodies in space and of the concept of angle. With regard to the other indicators of didactic suitability, it should be noted that interactional suitability would require more attention and affective suitability a deeper study. In conclusion, the research has revealed great expectations around the combination of teaching-learning processes of mathematics and LKT, although there is still a long way to go in terms of the provision of means and teacher training.
Keywords: 3D printing, didactic suitability, educational design, robotics
Procedia PDF Downloads 109
1769 Experimental Study of Sand-Silt Mixtures with Torsional and Flexural Resonant Column Tests
Authors: Meghdad Payan, Kostas Senetakis, Arman Khoshghalb, Nasser Khalili
Abstract:
Dynamic properties of soils, especially in the range of very small strains, are of particular interest in geotechnical engineering practice for characterizing the behavior of geo-structures subjected to a variety of stress states. This study reports on the small-strain dynamic properties of sand-silt mixtures, with particular emphasis on the effect of non-plastic fines content on the small-strain shear modulus (Gmax), Young’s modulus (Emax), material damping (Ds,min) and Poisson’s ratio (v). Several clean sands with a wide range of grain size characteristics and particle shapes are mixed with variable percentages of a silica non-plastic silt as fines content. Prepared specimens of sand-silt mixtures at different initial void ratios are subjected to sequential torsional and flexural resonant column tests, with the elastic dynamic properties measured along an isotropic stress path up to 800 kPa. It is shown that while at low percentages of fines content there is a significant difference between the dynamic properties of the various samples, due to the different characteristics of the sand portion of the mixtures, this variance diminishes as the fines content increases and the soil behavior becomes silt-dominant, so that the sand properties no longer have a significant influence on the elastic dynamic parameters. Indeed, beyond a specific portion of fines content, around 20% to 30%, typically denoted the threshold fines content, silt controls the behavior of the mixture. Using the experimental results, new expressions for the prediction of the small-strain dynamic properties of sand-silt mixtures are developed, accounting for the percentage of silt and the characteristics of the sand portion. These expressions are general in nature and are capable of evaluating the elastic dynamic properties of sand-silt mixtures with any type of parent sand over the whole range of silt percentage.
The inefficiency of the skeleton void ratio concept in the estimation of the small-strain stiffness of sand-silt mixtures is also illustrated.
Keywords: damping ratio, Poisson’s ratio, resonant column, sand-silt mixture, shear modulus, Young’s modulus
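The sequential torsional and flexural tests above yield Gmax and Emax independently; under the standard isotropic-elasticity relation E = 2G(1 + v), the small-strain Poisson's ratio then follows by rearrangement. A minimal sketch, with invented moduli standing in for measured values:

```python
# Poisson's ratio from the two moduli measured in sequential resonant
# column tests, assuming isotropic elasticity: E = 2G(1 + v).

def poissons_ratio(e_max, g_max):
    """Small-strain Poisson's ratio from Young's and shear moduli."""
    return e_max / (2.0 * g_max) - 1.0

g_max = 120.0   # shear modulus from torsional excitation, MPa (invented)
e_max = 300.0   # Young's modulus from flexural excitation, MPa (invented)
print(poissons_ratio(e_max, g_max))  # 0.25
```

The same pair of measurements also fixes the other isotropic constants (bulk modulus, Lame parameters) if they are needed downstream.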
Procedia PDF Downloads 254
1768 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data
Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang
Abstract:
Background and Significance of the Study: Congenital heart defects (CHDs) are the most common malformation at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no existing DNA methylation biomarkers are used for the early detection of CHDs. The existing CHD diagnostic techniques are time-consuming and costly and can only be used to diagnose CHDs after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome-wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functions of selected genes with high sensitivity and specificity were then assessed in terms of molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both the training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%. GO analyses for the top 600 genes showed that these putative differentially methylated genes were primarily associated with regulation of lipid metabolic process, protein-containing complex localization, and the Notch signalling pathway.
The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.
Keywords: biomarker, congenital heart defects, DNA methylation, random forest
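The classification step described above (a random forest over selected methylation levels, evaluated with ROC analysis, with feature importances suggesting candidate biomarkers) can be sketched with scikit-learn. The matrix below is synthetic, matching only the study's shape (48 infants, 600 sites); the real EPIC BeadChip data are not reproduced here.

```python
# Hedged sketch: random forest on (synthetic) methylation data + ROC AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((48, 600))      # methylation beta values in [0, 1]
y = np.repeat([0, 1], 24)      # 24 controls, 24 CHD cases
X[y == 1, :3] += 0.3           # plant 3 informative "marker" sites

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test ROC AUC: {auc:.2f}")

# importance ranking, analogous to shortlisting candidate biomarker genes
top = np.argsort(clf.feature_importances_)[::-1][:3]
print("top-ranked sites:", sorted(top.tolist()))
```

With real data, the per-biomarker sensitivity/specificity quoted in the abstract would come from thresholding each shortlisted site's ROC curve separately.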
Procedia PDF Downloads 163
1767 Spatio-Temporal Analysis of Land Use Change and Green Cover Index
Authors: Poonam Sharma, Ankur Srivastav
Abstract:
Cities are complex and dynamic systems that constitute a significant challenge to urban planning. The increasing size of the built-up area, owing to growing population pressure and economic growth, has led to massive land use/land cover change, resulting in the loss of natural habitat and thus reducing green cover in urban areas. Urban environmental quality is influenced by several aspects, including the city's geographical configuration, the scale and nature of the human activities occurring, and the environmental impacts generated. Cities and their sustainability are often discussed together, as cities stand confronted with numerous environmental concerns in an increasingly urbanized world and are situated in the mesh of global networks in multiple senses. A rapidly transforming urban setting plays a crucial role in changing the green area of natural habitats. This research paper attempts to examine the pattern of urban growth and to measure land use/land cover change in Gurgaon in Haryana, India, through the integration of geospatial techniques. Satellite images are used to measure the spatio-temporal changes that have occurred in land use and land cover, resulting in a new cityscape. It has been observed from the analysis that drastic changes in land use have occurred, with a massive rise in built-up areas and a decrease in green cover, making the sustainability of the city an important area of concern. The massive increase in built-up area has influenced localised temperatures and heat concentration. To enhance the decision-making process in urban planning, a detailed and real-world depiction of these urban spaces is the need of the hour.
Monitoring indicators of key processes in land use and economic development are essential for evaluating policy measures.
Keywords: cityscape, geospatial techniques, green cover index, urban environmental quality, urban planning
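A green cover index of the kind named above can be computed from a classified land-cover raster as the share of pixels in vegetation classes, compared across two dates. The class codes and the toy 4x4 grids below are invented; real inputs would be classified satellite scenes of Gurgaon.

```python
# Illustrative green cover index from a classified land-cover raster.
import numpy as np

GREEN_CLASSES = {2, 3}   # assumed codes: 2 = parks/forest, 3 = cropland

def green_cover_index(landcover):
    """Fraction of raster pixels belonging to green classes."""
    mask = np.isin(landcover, list(GREEN_CLASSES))
    return mask.mean()

lc_early = np.array([[2, 2, 3, 1],      # invented "earlier date" scene
                     [2, 3, 1, 1],
                     [3, 1, 1, 1],
                     [1, 1, 1, 4]])
lc_late = np.array([[2, 1, 1, 1],       # invented "later date" scene
                    [1, 1, 1, 1],
                    [3, 1, 1, 1],
                    [1, 1, 1, 4]])
print(green_cover_index(lc_early))  # 0.375
print(green_cover_index(lc_late))   # 0.125
```

The drop between the two dates is the kind of signal the abstract reports: green cover giving way to built-up classes over time.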
Procedia PDF Downloads 280
1766 Interrogating Student-Teachers’ Transformative Learning Role, Resources and Journey Considering Pedagogical Reform in Teacher Education Continuums
Authors: Nji Clement Bang, Rosemary Shafack M., Kum Henry Asei, Yaro Loveline Y
Abstract:
Scholars perceive learner-centered teaching-learning reform as roles and resources in teacher education (TE) and professional outcomes with transformative learning (TL) continuum dimensions. But teaching-learning reform is proliferating fast amidst debilitating stakeholder systemic dichotomies, resources, commitment, resistance and poor-quality outcomes that necessitate stronger TE and professional continuums. Scholars keep seeking a greater understanding of themes in teaching-learning reform, TE and professional outcomes as continuums, and of how policymakers, student-teachers, teacher trainers and local communities concerned with initial TE can promote continuous holistic quality performance. To sustain the debate continuum and answer the overarching question, we use a mixed-methods research design with diverse literature and 409 sample data. Initial text, interview and questionnaire analyses reveal debilitating teaching-learning reform in TE continuums that needs TL revival. Follow-up focus group discussion and teaching considering TL insights reinforce holistic teaching-learning in TE. Therefore, significant increases in (1) diverse prior-experience articulation, (2) critical reflection-discourse engagement, (3) teaching-practice interaction, (4) complex-activity constraint control and (5) formative outcome reintegration reinforce teaching-learning in learning-to-teach role-resource pathways and outcomes. The themes reiterate complex teaching-learning in TE programs that suits TL journeys, whereby student-teachers, and students cum teachers, workers and citizens alike, become transformative learners who evolve personal and collective roles and resources towards holistic lifelong learning outcomes. The article could assist the debate about quality teaching-learning reform through TL dimensions as TE and professional role-resource continuums.
Keywords: transformative learning perspectives, teacher education, initial teacher education, learner-centered pedagogical reform, life-long learning
Procedia PDF Downloads 79
1765 The Incidence of Cardiac Arrhythmias Using Trans-Telephonic, Portable Electrocardiography Recorder, in Out-Patients Faculty of Medicine Ramathibodi Hospital
Authors: Urasri Imsomboon, Sopita Areerob, Kanchaporn Kongchauy, Tuchapong Ngarmukos
Abstract:
Objective: Trans-telephonic electrocardiography (ECG) monitoring is used to diagnose infrequent cardiac arrhythmias and to improve outcomes through early detection and treatment in suspected cardiac patients. The objectives of this study were to explore the incidence of cardiac arrhythmias detected by trans-telephonic ECG monitoring and to explore the time to the first symptomatic episode and to the first documented cardiac arrhythmia in outpatients. Methods: A descriptive research study was conducted between February 1, 2016, and December 31, 2016. A total of 117 patients who visited the outpatient clinic were purposively selected. The research instruments in this study were a personal data questionnaire and a record form for the incidence of cardiac arrhythmias using the trans-telephonic ECG recorder. Results: The 117 patients were aged between 15-92 years old (mean age 52.7 ± 17.1 years), and the majority of the studied sample were women (64.1%). The results revealed that 387 ECGs (average 2.88 ECGs/person, SD = 3.55, range 0-21) were sent to the Cardiac Monitoring Center at the Coronary Care Unit. Of these, normal sinus rhythm was the most common finding (46%). The top 5 cardiac arrhythmias documented at the time of symptoms were: sinus tachycardia 43.5%, premature atrial contraction 17.7%, premature ventricular contraction 14.3%, sinus bradycardia 11.5% and atrial fibrillation 8.6%. Presenting symptoms were tachycardia 94%, palpitation 83.8%, dyspnea 51.3%, chest pain 19.6%, and syncope 14.5%. The most frequent activities during symptoms were no activity 64.8%, sleep 55.6% and work 25.6%. The first symptomatic episode occurred on average after 6.88 ± 7.72 days (median 3 days). The first documented cardiac arrhythmia occurred on average after 9 ± 7.92 days (median 7 days). After patients knew their actual cardiac arrhythmias, the treatments were: self-observation 68%, continuation of the same medications 15%, further investigations (7 patients), and correction of the causes of the cardiac arrhythmias via invasive cardiac procedures (5 patients).
Conclusion: The trans-telephonic portable ECG recorder is effective in the diagnosis of suspected symptomatic cardiac arrhythmias in the outpatient clinic.
Keywords: cardiac arrhythmias, diagnosis, outpatient clinic, trans-telephonic portable ECG recorder
Procedia PDF Downloads 194
1764 Lead Chalcogenide Quantum Dots for Use in Radiation Detectors
Authors: Tom Nakotte, Hongmei Luo
Abstract:
Lead chalcogenide-based (PbS, PbSe, and PbTe) quantum dots (QDs) were synthesized for the purpose of implementing them in radiation detectors. Pb-based materials have long been of interest for gamma and x-ray detection due to their high absorption cross section and atomic number. The emphasis of the studies was on exploring how to control charge carrier transport within thin films containing the QDs. The properties of the QDs themselves can be altered by changing the size, shape, composition, and surface chemistry of the dots, while the properties of carrier transport within QD films are affected by post-deposition treatment of the films. The QDs were synthesized using colloidal synthesis methods, and films were grown using multiple film coating techniques, such as spin coating and doctor blading. Current QD radiation detectors are based on the QDs acting as fluorophores in a scintillation detector. Here the viability of using QDs in solid-state radiation detectors, for which the incident detectable radiation causes a direct electronic response within the QD film, is explored. Achieving high sensitivity and accurate energy quantification in QD radiation detectors requires large carrier mobilities and diffusion lengths in the QD films. Pb chalcogenide-based QDs were synthesized with both traditional oleic acid ligands and more weakly binding oleylamine ligands, allowing for in-solution ligand exchange and making the deposition of thick films in a single step possible. The PbS and PbSe QDs showed better air stability than PbTe. After precipitation, the QDs passivated with the shorter ligand are dispersed in 2,6-difluoropyridine, resulting in colloidal solutions with concentrations anywhere from 10-100 mg/mL for film processing applications. More concentrated colloidal solutions produce thicker films during spin coating, while an extremely concentrated solution (100 mg/mL) can be used to produce several-micrometer-thick films using doctor blading.
Film thicknesses of micrometers or even millimeters are needed in radiation detectors for high-energy gamma rays, which are of interest for astrophysics and nuclear security, in order to provide sufficient stopping power.
Keywords: colloidal synthesis, lead chalcogenide, radiation detectors, quantum dots
Procedia PDF Downloads 134
1763 Chemical Synthesis, Characterization and Dose Optimization of Chitosan-Based Nanoparticles of MCPA for Management of Broad-Leaved Weeds (Chenopodium album, Lathyrus aphaca, Angalis arvensis and Melilotus indica) of Wheat
Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Tasawer Abbas
Abstract:
Nanoherbicides utilize nanotechnology to enhance the delivery of biological or chemical herbicides using combinations of nanomaterials. The aim of this research was to examine the efficacy of chitosan nanoparticles containing the MCPA herbicide as a potential eco-friendly alternative for weed control in wheat crops. Scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), and ultraviolet absorbance were used to analyze the developed nanoparticles. The SEM analysis indicated that the average size of the particles was 35 nm, forming clusters with a porous structure. Both nanoparticle formulations of fluroxypyr + MCPA exhibited maximal absorption peaks at a wavelength of 320 nm. The compound fluroxypyr + MCPA has a strong peak at a 2θ value of 30.55°, which corresponds to the 78 plane of the anatase phase. The weeds, including Chenopodium album, Lathyrus aphaca, Angalis arvensis, and Melilotus indica, were sprayed with the nanoparticles while they were in the third- or fourth-leaf stage. Seven distinct dosages were used: D0 (weedy check), D1 (recommended dose of the conventional herbicide), D2 (recommended dose of the nano-herbicide (NPs-H)), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose). The chitosan-based nanoparticles of MCPA at the prescribed dosage of the conventional herbicide resulted in complete death and visual damage, with a 100% mortality rate. The 5-fold lower dosage exhibited the lowest levels of plant height (3.95 cm), chlorophyll content (5.63%), dry biomass (0.10 g), and fresh biomass (0.33 g) in the broad-leaved weeds of wheat. The herbicide nanoparticles, when used at a dosage 10-fold lower than that of the conventional herbicides, had an impact comparable to the prescribed dosage.
Nano-herbicides have the potential to improve the efficiency of standard herbicides by increasing stability and lowering toxicity.
Keywords: mortality, visual injury, chlorophyll contents, chitosan-based nanoparticles
Procedia PDF Downloads 68
1762 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT
Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar
Abstract:
The X-ray attenuation coefficient [µ(E)] of any substance, for energy E, is a sum of the contributions from Compton scattering [µCom(E)] and the photoelectric effect [µPh(E)]. In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to [ρe fKN(E)] while µPh(E) is proportional to [(ρe Zeff^x)/E^y], with fKN(E) being the Klein-Nishina formula and x and y being the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion are also dependent on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)>, <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100, 140 kVp, as are used for cardiological investigations. The S(E,V) are generated by using the Boone-Seibert source spectrum, superposed on aluminium filters of different thickness lAl with 7 mm ≤ lAl ≤ 12 mm, and the D(E) is taken to be that of a typical Si[Li] solid-state or GdOS scintillator detector.
In the values of X and Y, found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X being within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22% while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference in the values of the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum
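The two-voltage inversion described above reduces to a 2x2 linear system: each spectrum-averaged attenuation is a linear combination a(V)·X + b(V)·Y with X = ρe and Y = ρe·Zeff^x, and two scans at different kVp determine (X, Y). A minimal sketch follows; the coefficients a, b and the water attenuation values are invented placeholders, not the tabulated values from the paper.

```python
# DECT inversion sketch: recover X = rho_e and Y = rho_e * Zeff^x from
# HU measurements at two tube voltages, given (placeholder) inversion
# coefficients that encode the source spectrum and detector efficiency.
import numpy as np

def mu_from_hu(hu, mu_water):
    """Spectrum-averaged attenuation from a Hounsfield number:
    <mu(V)> = <mu_w(V)> * (1 + HU(V)/1000)."""
    return mu_water * (1.0 + hu / 1000.0)

def invert_dect(mu_v1, mu_v2, coeffs):
    """Solve [[a1, b1], [a2, b2]] @ [X, Y] = [mu_v1, mu_v2]."""
    A = np.array(coeffs, dtype=float)
    return np.linalg.solve(A, [mu_v1, mu_v2])

coeffs = [[0.18, 0.050],   # a, b at 100 kVp (placeholder values)
          [0.16, 0.030]]   # a, b at 140 kVp (placeholder values)
mu1 = mu_from_hu(40.0, 0.190)   # placeholder water mu at 100 kVp, 1/cm
mu2 = mu_from_hu(30.0, 0.160)   # placeholder water mu at 140 kVp, 1/cm
X, Y = invert_dect(mu1, mu2, coeffs)
print(X, Y)   # electron density and rho_e * Zeff^x, arbitrary units
```

With real tabulated coefficients, the Zeff^x errors quoted above would show up as biases in Y while X stays comparatively accurate.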
Procedia PDF Downloads 405
1761 Evaluation of Pozzolanic Properties of Micro and Nanofillers Origin from Waste Products
Authors: Laura Vitola, Diana Bajare, Genadijs Sahmenko, Girts Bumanis
Abstract:
About 8% of the world's CO2 emissions are produced by the concrete industry; therefore, replacement of cement in concrete compositions by additives with pozzolanic activity would have a significant impact on the environment. In the concrete industry, a material which contains silica (SiO2), or amorphous silica together with aluminium oxide (Al2O3), is called a pozzolana-type additive. Pozzolana additives can be obtained from the recycling industry and from different production by-products, such as processed bulb borosilicate (DRL type) and lead (LB type) glass, coal combustion bottom ash, utilized brick pieces and biomass ash, thus solving a utilization problem which is so important in the world, as well as making practical use of materials which were previously considered unusable. In the literature, there is no summarized method which could be used for quick evaluation of the pozzolanic activity of waste products without wide research related to the production of innumerable concrete compositions and samples. Besides, it is important to understand which parameters should be predicted to characterize the efficiency of waste products. Simple methods for increasing the pozzolanic activity of different types of waste products are also determined. The aim of this study is to evaluate the effectiveness of the different types of waste materials and industrial by-products (coal combustion bottom ash, biomass ash, waste glass, waste kaolin and calcined illite clays), and to determine which parameters have the greatest impact on pozzolanic activity. By using materials which were previously considered unusable and landfilled in the concrete industry, basic utilization problems will be partially solved. The optimal methods for treatment of waste materials and industrial by-products were detected with the purpose of increasing their pozzolanic activity and producing substitutes for cement in the concrete industry.
Usage of the mentioned pozzolans allows replacing up to 20% of the necessary cement amount without reducing the compressive strength of the concrete.
Keywords: cement substitutes, micro and nano fillers, pozzolanic properties, specific surface area, particle size, waste products
Procedia PDF Downloads 430
1760 The Investigate Relationship between Moral Hazard and Corporate Governance with Earning Forecast Quality in the Tehran Stock Exchange
Authors: Fatemeh Rouhi, Hadi Nassiri
Abstract:
Earnings forecasts are a key element in economic decisions, but situations such as conflicts of interest in financial reporting, complexity, and lack of direct access to information have led to the phenomenon of information asymmetry between individuals within the organization and external investors and creditors. This asymmetry gives rise to adverse selection and moral hazard in investors' decisions and makes users' direct assessment of the difficulties associated with the data harder. In this regard, the role of trustees in corporate governance disclosure is crystallized: it includes controls and procedures to ensure that management does not move in its own interests but in the direction of maximizing shareholder and company value. Given the importance of earnings forecasts of companies in the capital market and the need to identify the factors influencing them, this study was an attempt to establish the relationship between moral hazard and corporate governance and the earnings forecast quality of companies operating in the capital market. Drawing inspiration from the theoretical basis of the research, two main hypotheses and several sub-hypotheses are presented in this study, which have been examined on the basis of available models with the use of the panel-data method; at the end, conclusions have been drawn at the 95% assurance level according to the meaningfulness of the model and of each independent variable. In examining the models, the Chow test was first used to specify whether the panel-data method or the pooled method should be used; following that, the Hausman test was applied to choose between random effects and fixed effects. The findings show that, because most of the moral hazard variables are positively associated with earnings forecast quality, the earnings forecast quality of companies listed on the Tehran Stock Exchange increases with increasing moral hazard.
Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant.
Keywords: corporate governance, earning forecast quality, moral hazard, financial sciences
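The model-selection workflow the abstract describes (a Chow test for pooled vs. panel estimation, then a Hausman test for random vs. fixed effects) can be illustrated with the single-coefficient form of the Hausman statistic. This is a minimal sketch; the coefficient and variance values below are hypothetical, not estimates from the study.

```python
# Single-coefficient Hausman test: compares the fixed-effects (FE) and
# random-effects (RE) estimates of one regressor. Under H0 (RE is
# consistent and efficient) the statistic is chi-square with 1 df.

def hausman_statistic(b_fe, var_fe, b_re, var_re):
    """H = (b_FE - b_RE)^2 / (Var(b_FE) - Var(b_RE))."""
    diff_var = var_fe - var_re
    if diff_var <= 0:
        raise ValueError("Var(b_FE) must exceed Var(b_RE) under H0")
    return (b_fe - b_re) ** 2 / diff_var

# Hypothetical estimates for one regressor (e.g. a moral-hazard proxy)
H = hausman_statistic(b_fe=0.42, var_fe=0.010, b_re=0.25, var_re=0.004)

CHI2_1DF_5PCT = 3.841  # 5% critical value, chi-square with 1 df
use_fixed_effects = H > CHI2_1DF_5PCT
print(round(H, 3), use_fixed_effects)  # here H exceeds the critical value
```

When H exceeds the critical value, the RE estimator is rejected in favour of fixed effects; in practice the multivariate form of the test (with full coefficient covariance matrices) is used.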
Procedia PDF Downloads 326
1759 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech
Authors: Monica Gonzalez Machorro
Abstract:
Dementia is hard to diagnose because of the lack of early physical symptoms, and early recognition is key to improving patients' living conditions. Speech is considered a valuable biomarker for this challenge. Recent works have used conventional acoustic features and machine learning methods to detect dementia in speech, with BERT-like classifiers reporting the most promising performance. One constraint, nonetheless, is that these studies rely either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI detection tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized; the subset is balanced for class, age, and gender. Data processing also involves cropping the samples into 10-second segments. For comparison, a baseline model is defined by training and testing a random forest on 20 acoustic features extracted with the librosa library in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations from audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, 5 epochs and a batch size of 1 are selected. Experiments show that the proposed method reaches a 69% balanced accuracy.
This suggests that the acoustic and linguistic information encoded in the self-supervised ASR model captures cues of AD and MCI.
Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment
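The average-pooling step the abstract describes (collapsing the model's 3D output of shape (batch, time, hidden) into a 2D (batch, hidden) input for the added linear layer) can be sketched in plain Python. In practice this would be a mean over the time dimension of a tensor; the shapes below are toy values, not HuBERT's actual hidden size.

```python
# Mean-pool a (batch, time, hidden) representation over the time axis,
# yielding one fixed-size vector per utterance -- the 2D input that a
# linear classification layer consumes. Plain lists stand in for tensors.

def average_pool(batch):
    """batch: list of utterances, each a list of per-frame hidden vectors."""
    pooled = []
    for frames in batch:                      # one utterance
        n_frames = len(frames)
        hidden = len(frames[0])
        pooled.append([
            sum(frame[d] for frame in frames) / n_frames
            for d in range(hidden)
        ])
    return pooled

# Toy batch: 1 utterance, 3 time frames, hidden size 2
batch = [[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]]
print(average_pool(batch))  # → [[3.0, 4.0]]
```

Mean pooling discards frame ordering but keeps a fixed-size summary regardless of utterance length, which is what makes variable-length 10-second segments classifiable by a single linear layer.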
Procedia PDF Downloads 129
1758 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. Biosensors can be applied extensively to analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, and environmental monitoring. A biosensor is a device that detects the biological and chemical reactions generated by a biological sample: it carries out biological detection via a linked transducer that converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that dictate the quality and performance of biosensors. This research presents an experimental study of the laser scribing technique for processing graphene oxide (GO) inside a vacuum chamber. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmospheric and vacuum. A GO solvent was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The micro-details of the morphological structures of GO and rGO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode made under normal atmospheric conditions, whereas the second was fabricated under vacuum using a vacuum chamber, the purpose being to control conditions such as air pressure and temperature during the fabrication process.
The parameters assessed include the layer thickness and the processing environment. The results show high accuracy and repeatability, achieved at low production cost.
Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 127
1757 The Relationship between Central Bank Independence and Inflation: Evidence from Africa
Authors: R. Bhattu Babajee, Marie Sandrine Estelle Benoit
Abstract:
The past decades have witnessed a considerable institutional shift towards central bank independence across economies of the world. The motivation behind this change is the acceptance that increased central bank autonomy can alleviate inflation bias. It is therefore pertinent to study whether central bank independence acts as a significant factor behind price stability in African economies, or whether this macroeconomic outcome results from other economic, political or social factors. The main research objective of this paper is to assess the relationship between central bank autonomy and inflation in African economies, where inflation has proved to be a serious problem. To this end, we measure the degree of CBI in Africa by computing the turnover rates of central bank governors, thereby studying whether decisions made by African central banks are affected by external forces. The study empirically investigates the association between central bank independence (CBI) and inflation for 10 African economies over a period of 17 years, from 1995 to 2012. The sample includes Botswana, Egypt, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Nigeria, South Africa, and Uganda. In contrast to much of the empirical literature, we do not use the usual static panel model, as it is associated with potential misspecification arising from the absence of dynamics. Instead, a dynamic panel data model integrating several control variables is used. Firstly, the analysis includes dynamic terms to capture the persistence of inflation: given inflation inertia, which is very likely in African countries, lagged inflation must be included in the empirical model. Secondly, due to the known reverse causality between central bank independence and inflation, the system generalized method of moments (GMM) is employed.
GMM estimators accommodate unknown forms of heteroskedasticity as well as autocorrelation in the error term. Thirdly, control variables are used to enhance the efficiency of the model. The main finding of this paper is that central bank independence is negatively associated with inflation, even after including the control variables.
Keywords: central bank independence, inflation, macroeconomic variables, price stability
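The turnover-rate proxy for central bank independence mentioned above (governor changes per year within the sample window, with higher turnover proxying lower de facto independence) can be computed directly from change dates. A minimal sketch follows; the change years are illustrative, not actual governor tenures from any of the sampled countries.

```python
# De facto CBI proxy: turnover rate of central bank governors, i.e.
# number of governor changes divided by the length of the sample window.
# A frequently replaced governor suggests political interference.

def turnover_rate(change_years, start, end):
    """Governor changes per year over the window from `start` to `end`."""
    window = end - start  # e.g. 1995 to 2012 -> the study's 17-year span
    changes = [y for y in change_years if start <= y <= end]
    return len(changes) / window

# Illustrative governor-change years for one hypothetical country
changes = [1997, 2001, 2006, 2010]
rate = turnover_rate(changes, 1995, 2012)
print(round(rate, 3))  # 4 changes over a 17-year window
```

A cross-country panel of such rates, alongside lagged inflation and the control variables, is the kind of regressor a dynamic panel GMM specification would use.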
Procedia PDF Downloads 370
1756 Ownership and Shareholder Schemes Effects on Airport Corporate Strategy in Europe
Authors: Dimitrios Dimitriou, Maria Sartzetaki
Abstract:
In the early days of civil aviation, airports were wholly state-owned companies under the control of national authorities or regional governmental bodies. Since then, the picture has changed completely: airport privatisation and the commercialisation of the airport business have become key success factors in stimulating air transport demand, generating revenues and attracting investors, linked to the reliability and resilience of the air transport system. Nowadays, an airport's corporate strategy deals with policies and actions that essentially affect its business plans, its financial targets and its economic footprint in the regional economy it serves. Exploring airport corporate strategy is therefore essential to support decisions in business planning, management efficiency, sustainable development and investment attractiveness on the one hand, and to define policies towards traffic development, revenue generation, capacity expansion, cost efficiency and corporate social responsibility on the other. This paper explores key outputs of airport corporate strategy under different ownership schemes. Airport corporations are grouped into three major schemes: (a) public, in which the airport operator acts as part of the government administration or as a corporatised public operator; (b) mixed, in which the majority of the shares, and hence the corporate strategy, is driven by either the private or the public sector; and (c) private, in which the airport strategy is driven by the key aspects of globalisation and liberalisation of the aviation sector. Through a systemic approach, the key drivers of corporate strategy for modern airport business structures are defined. The key objectives are to identify the main strategic opportunities and challenges and to assess the corporate goals and risks towards sustainable business development for each scheme. The analysis is based on an extensive cross-sectional dataset for a sample of busy European airports and provides results on corporate strategy priorities, risks and business models.
The findings highlight key messages to authorities, institutes and professionals on airport corporate strategy trends and directions.
Keywords: airport corporate strategy, airport ownership, airports business models, corporate risks
Procedia PDF Downloads 311