Search results for: first-mover advantage
436 Numerical Investigation of Indoor Environmental Quality in a Room Heated with Impinging Jet Ventilation
Authors: Mathias Cehlin, Arman Ameen, Ulf Larsson, Taghi Karimipanah
Abstract:
The indoor environmental quality (IEQ) is increasingly recognized as a significant factor influencing building occupants’ health, comfort and productivity. An air-conditioning and ventilation system is normally used to create and maintain good thermal comfort and indoor air quality; providing occupant thermal comfort and well-being with minimal use of energy is the main purpose of a heating, ventilating and air-conditioning system. Among the different types of ventilation systems, the most widely known and used are mixing ventilation (MV) and displacement ventilation (DV). Impinging jet ventilation (IJV) is a promising ventilation strategy developed in the early 2000s. IJV has the advantage of supplying air downwards close to the floor with high momentum, thereby delivering fresh air further out into the room compared to DV. Operating in cooling mode, IJV systems can have higher ventilation effectiveness and heat removal effectiveness than MV, and therefore higher energy efficiency. However, the performance of IJV in heating mode is less well established. This paper examines the function of IJV in a typical office room under winter conditions (heating mode). A validated CFD model based on the v2-f turbulence model is used to predict the air flow pattern, thermal comfort and air change effectiveness. The office room under consideration has the dimensions 4.2×3.6×2.5 m and can be designed as a single-person or two-person office. A number of important factors influencing the indoor environment in a room with IJV are studied. The parameters considered are heating demand, number of occupants and supply air conditions. A total of 6 simulation cases are carried out to investigate the effects of these parameters. The heat load in the room comes from occupants, a computer and lighting. The model includes one external wall with a window.
The interaction of heat sources, supply air flow and downdraught from the window results in a complex flow phenomenon. Preliminary results indicate that IJV can be used for heating a typical office room, and the IEQ appears suitable in the occupied region for the studied cases.
Keywords: computational fluid dynamics, impinging jet ventilation, indoor environmental quality, ventilation strategy
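The effectiveness indices compared above can be illustrated with a short sketch. The functions below use the standard textbook definitions of heat removal effectiveness and contaminant removal (ventilation) effectiveness; the temperatures and concentrations in the example are invented for illustration, not results from this paper.

```python
def heat_removal_effectiveness(t_exhaust, t_supply, t_occupied_mean):
    """Ratio of the temperature rise at the exhaust to the rise in the
    occupied zone; values above 1 mean heat is removed more efficiently
    than with perfect mixing."""
    return (t_exhaust - t_supply) / (t_occupied_mean - t_supply)


def ventilation_effectiveness(c_exhaust, c_supply, c_occupied_mean):
    """The same ratio applied to a contaminant concentration instead of
    temperature (contaminant removal effectiveness)."""
    return (c_exhaust - c_supply) / (c_occupied_mean - c_supply)


# Illustrative numbers only: supply at 18 degC, exhaust at 24 degC,
# occupied zone averaging 22 degC.
print(heat_removal_effectiveness(24.0, 18.0, 22.0))  # 1.5, better than mixing
```

With perfect mixing both indices equal 1; the stratified flow that IJV and DV produce is what pushes them above 1 in cooling mode.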
Procedia PDF Downloads 179
435 Creating a Rehabilitation Product as an Example of Design Management
Authors: K. Caban-Piaskowska
Abstract:
The aim of the article is to show how the role of the designer has changed from the point of view of human resources management, thanks to the increased importance of design management, and to present how a rehabilitation product, through a technological approach to design, becomes a universal product. Designing for the disabled is a largely unexplored area of the pattern-designing market, most often because it is associated with devices which support rehabilitation. In consequence, such realizations have a limited group of receivers and are not that attractive for designers. There is a relation between using modern design in building rehabilitation devices and increasing the efficiency of treatment and physiotherapy, and using modern technology can have marketing significance: rehabilitation products designed and produced in a modern way give the impression that experts and professionals are involved in the lives of the user, the patient. In order to illustrate the problem presented above, i.e. creating a rehabilitation product as an example of design management, the case study method was used. The analysis was based on an interview conducted by the author with a designer who took part in meetings with people undergoing rehabilitation and their physiotherapists, and who created universal products in Poland between 2012 and 2017. Usually, engineers and constructors create products which remind us of old torture devices, however indestructible in construction. Such an image of products for the disabled clearly indicates a wonderful niche for designers and emphasizes the need to make those products more attractive and innovative. Products for the disabled cannot be limited to rehabilitation equipment only, e.g. wheelchairs or standing frames.
Introducing the idea of universal design can significantly broaden the circle of receivers of pattern-designing, everyday-use items, to include disabled people. Fulfilling these criteria will decide the advantage on the competitive market. This is possible due to the use of the design management concept in the functioning of an organization: using modern technology and materials in the production of equipment, and changing the role of the designer, broadens the circle of receivers by enabling a design process that makes the product usable by people with various needs. What is more, introducing rehabilitation functions into everyday-use items can also become an innovative accent in designing. In the reality of the market, each group of users can and should be treated as a problem and a realization task.
Keywords: design management, innovation, rehabilitation product, universal product
Procedia PDF Downloads 195
434 Slosh Investigations on a Spacecraft Propellant Tank for Control Stability Studies
Authors: Sarath Chandran Nair S, Srinivas Kodati, Vasudevan R, Asraff A. K
Abstract:
Spacecraft generally employ liquid propulsion for attitude and orbital maneuvers, or for raising the spacecraft from a geo-transfer orbit to a geosynchronous orbit. Liquid propulsion systems use either mono-propellants or bi-propellants for generating thrust. These propellants are generally stored in either spherical tanks or cylindrical tanks with spherical end domes. The propellant tanks are provided with a propellant acquisition system/propellant management device, along with vanes and their conical mounting structure, to ensure propellant availability at the outlet for thrust generation even under a low/zero-gravity environment. Slosh refers to the free-surface oscillations in partially filled containers under external disturbances. In a spacecraft, these can be due to control forces and to varying acceleration. Knowledge of slosh and the effect of tank internals on it is essential for assessing its stability through control stability studies. Slosh is mathematically represented by a pendulum-mass model, which requires parameters such as slosh frequency, damping, slosh mass and its location. This paper enumerates the various numerical and experimental methods used for evaluating the slosh parameters required to represent slosh. Numerical methods, namely finite element methods based on linear velocity potential theory and computational fluid dynamics based on the Reynolds-Averaged Navier-Stokes equations, are used for the detailed evaluation of slosh behavior in one of the spacecraft propellant tanks used in an Indian space mission. Experimental studies carried out on a scaled-down model are also discussed. The slosh parameters evaluated by the different methods matched very well, and their dispersion bands were finalized based on the experimental studies. It is observed that the presence of internals such as the propellant management device, including its conical support structure, alters the slosh parameters. These internals also offer damping one order of magnitude higher than viscous/smooth-wall damping.
This is an advantageous factor for slosh stability. These slosh parameters are provided for establishing slosh margins through control stability studies and for finalizing the spacecraft control system design.
Keywords: control stability, propellant tanks, slosh, spacecraft
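The pendulum-mass representation mentioned above starts from the slosh natural frequency. As a hedged illustration, the sketch below uses the classical linear potential-flow result for the first lateral slosh mode of an upright cylindrical tank, not the specific tank geometry or data of this study; `1.8412` is the first root of the derivative of the Bessel function J1.

```python
import math


def first_slosh_mode_hz(radius_m, liquid_depth_m, g=9.81):
    """First lateral slosh frequency of an upright cylindrical tank:
    omega^2 = (g * lam1 / R) * tanh(lam1 * h / R), with lam1 ~ 1.8412
    (classical linear potential-flow result)."""
    lam1 = 1.8412
    omega_sq = (g * lam1 / radius_m) * math.tanh(lam1 * liquid_depth_m / radius_m)
    return math.sqrt(omega_sq) / (2.0 * math.pi)


# Illustrative tank: 0.5 m radius, filled to 0.5 m depth.
print(round(first_slosh_mode_hz(0.5, 0.5), 2))  # about 0.93 Hz
```

The tanh term shows why fill level matters: for shallow fills the frequency drops, which is one reason slosh parameters must be evaluated across the mission's fill fractions.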
Procedia PDF Downloads 245
433 Lipid from Activated Sludge as a Feedstock for the Production of Biodiesel
Authors: Ifeanyichukwu Edeh, Tim Overton, Steve Bowra
Abstract:
There is increasing interest in utilising low-grade or waste biomass for the production of renewable bioenergy vectors, i.e. waste to energy. In this study we have chosen to assess activated sludge, a microbial biomass generated during the second stage of waste water treatment, as a source of lipid for biodiesel production. To date, a significant proportion of biodiesel is produced from used cooking oil and animal fats. It was reasoned that if activated sludge proved a viable feedstock, it would have the potential to support increased biodiesel production capacity. Activated sludge was obtained at different times of the year and from two different sewage treatment works in the UK. The biomass within the activated sludge slurry was recovered by filtration, and the total weight of material was calculated by combining the dry weights of the total suspended solid (TSS) and total dissolved solid (TDS) fractions. Total lipids were extracted from the TSS and TDS using solvent extraction (the Folch method). The classes of lipids within the total lipid extract were characterised using high-performance thin-layer chromatography (HPTLC) against known standards. The fatty acid profile and content of the lipid extract were determined using acid-mediated methanolysis to obtain fatty acid methyl esters (FAMEs), which were analysed by gas chromatography and HPTLC. The results showed differences in the total biomass content of the activated sludge collected from the different sewage works. Lipid yields from TSS obtained from both sewage treatment works differed according to the time of year (between 3.0 and 7.4 wt. %). The lipid yield varied slightly within the same source of biomass but more widely between the two sewage treatment works. The neutral lipid classes identified were acylglycerols, free fatty acids, sterols and wax esters, while the phospholipid classes included phosphatidylcholine, lysophosphatidylcholine, phosphatidylethanolamine and phosphatidylinositol.
The fatty acid profile revealed the presence of palmitic acid, palmitoleic acid, linoleic acid, oleic acid and stearic acid, with unsaturated fatty acids the most abundant. Following optimisation, the FAME yield was greater than the 10 wt. % required for an economic advantage in biodiesel production.
Keywords: activated sludge, biodiesel, lipid, methanolysis
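The yield figures quoted above are straightforward mass fractions. A minimal sketch, with invented masses rather than the study's measurements:

```python
def lipid_yield_wt_percent(lipid_mass_g, dry_biomass_mass_g):
    """Lipid yield as a weight percentage of the dry biomass."""
    return 100.0 * lipid_mass_g / dry_biomass_mass_g


def total_dry_biomass(tss_mass_g, tds_mass_g):
    """Total material: sum of the dry weights of the suspended (TSS)
    and dissolved (TDS) fractions, as described above."""
    return tss_mass_g + tds_mass_g


# Illustrative: 0.37 g lipid from 5.0 g dry TSS gives 7.4 wt. %,
# the upper end of the range reported above.
print(lipid_yield_wt_percent(0.37, 5.0))
```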
Procedia PDF Downloads 472
432 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. It has therefore been proposed that translation could be studied within a broader framework of constrained language, with simplification one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices is captured by word-form entropy and POS-form entropy, and a comparison is made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which are unavailable with traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for reliable comparison among studies on different language pairs. As for the data, one established corpus (CLOB) and two self-compiled corpora are used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora.
More specifically, word-form entropy and POS-form entropy are calculated as indicators of lexical and syntactic complexity, and ANOVA tests are conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by, and peculiar to, each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
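As an illustration of the word-form entropy measure described above, the sketch below computes Shannon entropy over token frequencies; the toy token lists are invented, not drawn from the corpora.

```python
import math
from collections import Counter


def shannon_entropy(tokens):
    """Shannon entropy (in bits) of the token distribution: it grows with
    both vocabulary size and evenness of use, so repetitive (simplified)
    text scores lower."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


varied = "the cat sat on a mat near the old door".split()
repetitive = "the cat sat on the mat the cat sat".split()
print(shannon_entropy(varied) > shannon_entropy(repetitive))  # True
```

POS-form entropy is the same computation applied to a part-of-speech tag sequence instead of word forms, capturing the diversity of syntactic choices.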
Procedia PDF Downloads 94
431 The Strategic Engine Model: Redefined Strategy Structure, as per Market- and Resource-Based Theory Application, Tested in the Automotive Industry
Authors: Krassimir Todorov
Abstract:
The purpose of the paper is to redefine the levels of strategy structure, corporate, business and functional, established over the past several decades, into a conceptual model consisting of corporate, business and operations strategies that are reinforced by functional strategies. We propose a conceptual framework of different perspectives on the role of strategic operations as a separate strategic level, and reposition the remaining functional strategies as supporting tools existing at all three levels. The proposed model is called ‘the strategic engine’, since the mutual relationships of its ingredients are identical to the main elements and working principle of the internal combustion engine. Based on the theoretical essence related to each strategic level, we show that the strategic engine model is useful for managers seeking to safeguard the competitive advantage of their companies. Each strategy level is researched through its basic elements. At the corporate level we examine the scope of the firm’s product and its vertical and geographical coverage. At the business level, the point of interest is limited to the basic elements of the SWOT analysis, while at the operations level the key research issue relates to the scope of the following performance indicators: cost, quality, speed, flexibility and dependability. In this respect, the paper provides a different view of the role of operations strategy within the overall strategy concept. We argue that the theoretical essence of operations goes far beyond the scope of the traditionally accepted business functions. Exploring the applications of resource-based theory and market-based theory within the strategic levels framework, we show that there is a logical consequence of the theoretical impact on corporate, business and operations strategy: at every strategic level, the validity of one theory gives way to that of the other.
Practical application of the conceptual model is tested in the automotive industry. The proposed theoretical concept is inspired by a leading global automotive group, Inchcape PLC, listed on the London Stock Exchange and a constituent of the FTSE 250 Index.
Keywords: business strategy, corporate strategy, functional strategies, operations strategy
Procedia PDF Downloads 173
430 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods
Authors: Jularat Chumnaul
Abstract:
In forensic science, corpses from homicides vary: they may be complete or incomplete, depending on the cause of death or form of homicide. For example, some corpses are cut into pieces, some are camouflaged by dumping into a river, some are buried, and some are burned to destroy the evidence. If a corpse is incomplete, personal identification becomes difficult because some tissues and bones are destroyed. To specify the gender of a corpse from skeletal remains, the most precise method is DNA identification. However, this method is costly and takes longer, so other identification techniques are used instead. The first widely used technique is considering the features of the bones. In general, evidence from the corpse, such as pieces of bone, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists require observation skills in order to classify the differences between male and female bones. Although this technique is uncomplicated, saves time and cost, and allows forensic scientists to determine gender fairly accurately (apparently an accuracy rate of 90% or more), the crucial disadvantage is that only some positions of the skeleton can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin; the skeletal remains used therefore have to be complete. The other technique widely used for gender specification in forensic science and archaeology is skeletal measurement. The advantage of this method is that it can be applied at several positions on one piece of bone, and it can be used even if the bones are not complete. In this study, classification and cluster analysis are applied to this technique, including the Kth Nearest Neighbor classification, Classification Tree, Ward Linkage clustering, K-means clustering, and Two-Step clustering.
The data contain 507 individuals and 9 skeletal (diameter) measurements, and the performance of the five methods is investigated by considering the apparent error rate (APER). The results indicate that the Two-Step clustering and Kth Nearest Neighbor methods seem suitable for specifying gender from human skeletal remains, as they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the Classification Tree, Ward Linkage clustering, and K-means clustering methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate using the holdout procedure or misclassification costs, and different methods can lead to different conclusions.
Keywords: skeletal measurements, classification, cluster, apparent error rate
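The apparent error rate used above has a simple definition: build the classifier, then reapply it to the same data it was built from and count the misclassifications. A minimal sketch with a toy k-nearest-neighbour classifier on invented one-dimensional "diameter" data (the real study uses 9 measurements on 507 individuals):

```python
from collections import Counter


def knn_predict(train, query, k=3):
    """train is a list of (features, label) pairs; majority vote among
    the k samples closest to the query (squared Euclidean distance)."""
    nearest = sorted(train,
                     key=lambda fl: sum((a - b) ** 2 for a, b in zip(fl[0], query)))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]


def apparent_error_rate(train, k=3):
    """APER: fraction of the training samples themselves that the
    classifier gets wrong (an optimistic estimate of the true error,
    which is why the abstract also mentions the holdout procedure)."""
    wrong = sum(knn_predict(train, feats, k) != label for feats, label in train)
    return wrong / len(train)


# Invented diameters (cm): males slightly larger on average.
data = [((4.5,), "M"), ((4.6,), "M"), ((4.4,), "M"),
        ((3.9,), "F"), ((3.8,), "F"), ((4.0,), "F")]
print(apparent_error_rate(data))  # 0.0 on this cleanly separated toy set
```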
Procedia PDF Downloads 252
429 Health Monitoring of Composite Pile Construction Using Fiber Bragg Gratings Sensor Arrays
Authors: B. Atli-Veltin, A. Vosteen, D. Megan, A. Jedynska, L. K. Cheng
Abstract:
Composite materials combine the advantages of being lightweight and possessing high strength. This is of particular interest for the development of large constructions, e.g., aircraft, space applications, wind turbines, etc. One of the shortcomings of using composite materials is the complex nature of their failure mechanisms, which makes it difficult to predict the remaining lifetime. Therefore, condition and health monitoring are essential when using composite material for critical parts of a construction. Different types of sensors are used or developed to monitor composite structures, including ultrasonic, thermography, shearography and fiber optic sensors. The first three technologies are complex and mostly used for measurement in the laboratory or during maintenance of the construction. Optical fiber sensors can be surface-mounted or embedded in the composite construction to provide the unique advantage of in-operation measurement of mechanical strain and other parameters of interest. This is identified as a promising technology for Structural Health Monitoring (SHM) or Prognostic Health Monitoring (PHM) of composite constructions. Among the different fiber optic sensing technologies, the Fiber Bragg Grating (FBG) sensor is the most mature and widely used. FBG sensors can be realized in an array configuration with many FBGs in a single optical fiber. In the current project, different aspects of using embedded FBGs for composite wind turbine monitoring are investigated. The activities are divided into two parts. Firstly, an FBG-embedded carbon composite laminate is subjected to tensile and bending loading to investigate the response of FBGs placed in different orientations with respect to the fiber. Secondly, the use of an FBG sensor array for temperature and strain sensing and monitoring of a 5 m long scale model of a glass fiber mono-pile is demonstrated. Two different FBG types are used: special in-house fibers and off-the-shelf ones.
The results from the first part of the study show that the FBG sensors survive the conditions during production of the laminate. The test results from the tensile and bending experiments indicate that the sensors successfully respond to the change of strain. The measurements from the sensors will be correlated with strain gauges placed on the surface of the laminates.
Keywords: Fiber Bragg Gratings, embedded sensors, health monitoring, wind turbine towers
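The strain and temperature response of an FBG described above follows a well-known linearised relation. A sketch, using typical silica-fibre coefficients that are assumptions for illustration, not values measured in this project:

```python
def bragg_wavelength_nm(n_eff, grating_period_nm):
    """Bragg condition: lambda_B = 2 * n_eff * grating period."""
    return 2.0 * n_eff * grating_period_nm


def bragg_shift_pm(lambda_b_nm, strain_ue, delta_t_k,
                   p_e=0.22, alpha=0.55e-6, xi=6.7e-6):
    """Linearised FBG response to strain (microstrain) and temperature (K):
    d(lambda)/lambda = (1 - p_e)*eps + (alpha + xi)*dT.
    p_e (photo-elastic), alpha (thermal expansion) and xi (thermo-optic)
    are typical silica-fibre values, assumed here rather than taken
    from the paper."""
    rel = (1.0 - p_e) * strain_ue * 1e-6 + (alpha + xi) * delta_t_k
    return lambda_b_nm * rel * 1e3  # nm -> pm


# A 1550 nm grating shifts by roughly 1.2 pm per microstrain.
print(round(bragg_shift_pm(1550.0, 1.0, 0.0), 2))
```

Because strain and temperature both shift the wavelength, arrays typically include an unstrained reference grating for temperature compensation, which is one motivation for combined temperature and strain sensing on the mono-pile model.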
Procedia PDF Downloads 243
428 Factors That Determine International Competitiveness of Agricultural Products in Latin America 1990-2020
Authors: Oluwasefunmi Eunice Irewole, Enrique Armas Arévalos
Abstract:
Agriculture has played a crucial role in the economy and development of many countries. Moreover, the basic needs for human survival, food, shelter, and clothing, are linked to agricultural production. Most developed countries see that agriculture provides them with food and raw materials for different goods (shelter, medicine, fuel and clothing), which has led to increases in incomes, livelihoods and standards of living. This study aimed at analysing the relationship between the international competitiveness of agricultural products and area, fertilizer, labour force, economic growth, foreign direct investment, the exchange rate and the inflation rate in Latin America during the period 1991 to 2019. Panel data econometric methods were used: tests of cross-section dependence (Pesaran test), unit roots (cross-sectionally augmented Dickey-Fuller and cross-sectional Im, Pesaran, and Shin tests), cointegration (Pedroni and Fisher-Johansen tests), and heterogeneous causality (Hurlin and Dumitrescu test). The results reveal that the model has cross-sectional dependency and that the variables are integrated of order one, I(1). The fully modified OLS and dynamic OLS estimators were used to examine the existence of a long-term relationship, and a long-term relationship was found to exist between the selected variables. The study revealed a significant positive relationship between the international competitiveness of agricultural raw materials and area, fertilizer, labour force, economic growth, and foreign direct investment, while international competitiveness has a negative relationship with the exchange rate and inflation.
The economic policy recommendation deduced from this investigation is that foreign direct investment and the labour force contribute positively to increasing the international competitiveness of agricultural products.
Keywords: revealed comparative advantage, agricultural products, area, fertilizer, economic growth, granger causality, panel unit root
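Since the keywords point to revealed comparative advantage, a short sketch of the standard Balassa index may help; the export figures are invented for illustration, not data from the study.

```python
def balassa_rca(country_product_x, country_total_x,
                world_product_x, world_total_x):
    """Balassa revealed comparative advantage: the product's share in the
    country's exports divided by its share in world exports. RCA > 1
    indicates relative specialisation (competitiveness) in that product."""
    return (country_product_x / country_total_x) / (world_product_x / world_total_x)


# Invented figures: agriculture is 20% of the country's exports
# but only 5% of world exports, giving an RCA of 4.
print(balassa_rca(20.0, 100.0, 50.0, 1000.0))
```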
Procedia PDF Downloads 100
427 A Method To Assess Collaboration Using Perception of Risk from the Architectural Engineering Construction Industry
Authors: Sujesh F. Sujan, Steve W. Jones, Arto Kiviniemi
Abstract:
The use of Building Information Modelling (BIM) in the Architectural-Engineering-Construction (AEC) industry is a form of systemic innovation. Unlike incremental innovation (such as the technological development of CAD from hand-based drawings to 2D electronically printed drawings), any form of systemic innovation in project-based inter-organisational networks requires complete collaboration, and it results in numerous benefits if adopted and utilised properly. Proper use of BIM involves people collaborating with the use of interoperable BIM-compliant tools. The AEC industry globally has long been known for its adversarial and fragmented nature, where firms take advantage of one another to increase their own profitability. Given the industry's nature, getting people to collaborate by unifying their goals is critical to successful BIM adoption. However, this form of innovation is often forced artificially into old ways of working which do not suit collaboration. This may be one of the reasons for its low global use even though the technology was developed more than 20 years ago. Therefore, there is a need to develop a metric/method to support and allow industry players to gain confidence in their investment in BIM software and workflow methods. This paper starts by defining systemic risk as a risk that affects all the project participants at a given stage of a project, and defines categories of systemic risks. The purpose of generalising is to allow the method to be applied in any industry: the categories remain the same, but the example of the risk depends on the industry in which the study is done. The proposed method uses individual perception of an example of systemic risk as its key parameter. The significance of this study lies in relating the variance of individual perceptions of systemic risk to how well the team is collaborating.
The method is based on the claim that a more unified range of individual perceptions implies a higher probability that the team is collaborating well. Since contracts and procurement determine how a project team operates, the method could also break the methodological barrier of the highly subjective findings that case studies produce, which has limited the possibility of generalising between global industries. Since human nature applies in all industries, the authors' intuition is that perception can be a valuable parameter for studying collaboration, which is essential especially in projects that utilise systemic innovation such as BIM.
Keywords: building information modelling, perception of risk, systemic innovation, team collaboration
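The core notion, that a narrower spread of individual risk perceptions signals better collaboration, can be sketched with a simple dispersion statistic. The 1-5 rating scale and the use of the coefficient of variation are illustrative assumptions, not the paper's actual instrument.

```python
from statistics import mean, pstdev


def perception_dispersion(scores):
    """Coefficient of variation of individual risk-perception scores
    (population standard deviation / mean): lower values mean a more
    unified, and by the paper's claim better collaborating, team."""
    return pstdev(scores) / mean(scores)


unified_team = [4, 4, 5, 4]      # invented 1-5 ratings of one systemic risk
fragmented_team = [1, 5, 2, 5]
print(perception_dispersion(unified_team) < perception_dispersion(fragmented_team))  # True
```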
Procedia PDF Downloads 184
426 Mg Doped CuCrO₂ Thin Oxide Films for Thermoelectric Properties
Authors: I. Sinnarasa, Y. Thimont, L. Presmanes, A. Barnabé
Abstract:
Thermoelectricity is a promising technique for recovering waste heat as electricity without using moving parts. The thermoelectric (TE) effect is defined as the conversion of a temperature gradient directly into electricity and vice versa. To optimize TE materials, the power factor (PF = σS², where σ is the electrical conductivity and S is the Seebeck coefficient) must be increased by adjusting the carrier concentration, and/or the lattice thermal conductivity Kₜₕ must be reduced by introducing scattering centers through point defects, interfaces, and nanostructuration. The PF does not show the advantages of thin films because it does not take the thermal conductivity into account. In general, the thermal conductivity of a thin film is lower than that of the bulk material due to its microstructure and the scattering effects that increase with decreasing thickness. Delafossite-type oxides CuᴵMᴵᴵᴵO₂ have received attention mainly for their optoelectronic properties as p-type semiconductors; they also exhibit interesting thermoelectric (TE) properties due to their high electrical conductivity and their stability in a room atmosphere. As there are few proper studies on the TE properties of Mg-doped CuCrO₂ thin films, we have investigated the influence of the annealing temperature on the electrical conductivity and the Seebeck coefficient of Mg-doped CuCrO₂ thin films and calculated the PF in the temperature range from 40 °C to 220 °C. To this end, we deposited Mg-doped CuCrO₂ thin films on fused silica substrates by RF magnetron sputtering. This study was carried out on 300 nm thin films. The as-deposited Mg-doped CuCrO₂ thin films were annealed at different temperatures (from 450 to 650 °C) under primary vacuum. The electrical conductivity and Seebeck coefficient of the thin films were measured from 40 to 220 °C.
The highest electrical conductivity of 0.60 S.cm⁻¹, with a Seebeck coefficient of +329 µV.K⁻¹ at 40 °C, was obtained for the sample annealed at 550 °C. The calculated power factor of the optimized CuCrO₂:Mg thin film was 6 µW.m⁻¹K⁻² at 40 °C. Due to the constant Seebeck coefficient and the electrical conductivity increasing with temperature, it reached 38 µW.m⁻¹K⁻² at 220 °C, which is quite a good result for an oxide thin film. Moreover, the degenerate behavior and the hopping mechanism of the CuCrO₂:Mg thin film were elucidated. The high and temperature-independent Seebeck coefficient and the stability in a room atmosphere could be a great advantage for applying this material in high-accuracy temperature measurement devices.
Keywords: thermoelectric, oxides, delafossite, thin film, power factor, degenerated semiconductor, hopping mode
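The reported power factor can be checked directly from PF = σS². A small sketch, using the 40 °C values quoted above with the unit conversions spelled out:

```python
def power_factor_uW_per_m_K2(sigma_S_per_cm, seebeck_uV_per_K):
    """PF = sigma * S^2, returned in microwatts per metre per kelvin squared."""
    sigma = sigma_S_per_cm * 100.0        # S/cm -> S/m
    seebeck = seebeck_uV_per_K * 1e-6     # microvolt/K -> V/K
    return sigma * seebeck ** 2 * 1e6     # W/(m.K^2) -> uW/(m.K^2)


# 0.60 S/cm and +329 uV/K (the 40 degC values above) give about 6.5,
# consistent with the ~6 uW.m-1K-2 quoted in the abstract.
print(round(power_factor_uW_per_m_K2(0.60, 329.0), 1))
```

The same function also shows why the PF climbs with temperature here: with S roughly constant, PF scales linearly with the rising conductivity.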
Procedia PDF Downloads 199
425 The Ambivalent Dealing with Diversity: An Ethnographic Study of Diversity and Its Different Faces of Managing in a Mixed Neighborhood in Germany
Authors: Nina Berding
Abstract:
Migration and the ensuing diversity are integral parts of urban societies. However, engaging with the urban society and its diversification is rarely perceived as something trivial, but rather as a difficult task and a major challenge. A central aspect of the discourse is the current migration of refugees from countries of the southern hemisphere to Europe and the resulting challenges for cities, their municipalities and civil society as a whole. Based on exploratory field research in a German inner-city neighborhood, this study aims to show that the discourses about migration and diversity are completely contrary to the everyday actions of the urban society. Processes of migration, which include leaving one's hometown and moving to other places, searching for 'safe' environments or better opportunities, are, historically speaking, not a new phenomenon. Urban dwellers have a large repertoire of strategies for managing processes of difference in everyday life situations, which have guided them well for centuries and also guide them in these contemporary processes of increased mobility and diversity. So there is obviously a considerable discrepancy between what is practically lived in everyday life and how it is talked about. The results of the study demonstrate that the current discourse about the challenges of migration seems to legitimize interventions beyond humanist approaches, whereby migrants serve as collective scapegoats for social problems and are affected by different discrimination and criminalization processes. On the one hand, everyone takes advantage of the super-mobility and super-diversity in their daily lives; on the other hand, powerful stakeholders and designated authorities operate a sort of retro-nationalism and identity collectivism.
Political players, the municipalities and other stakeholders then follow an urban public policy that takes actions (increasing police presence, concepts and activities for special groups, exclusion from active social life, preventing participation, etc.) towards different 'groups' of residents, produced along 'ethnic' lines. The results also show that, despite the obstacles and adversities placed in their way, the excluded residents perpetually relocate and re-position themselves and attempt to empower themselves by redefining their identities in their neighborhood.
Keywords: coexistence, everyday life, migration and diversity regimes, urban policy
Procedia PDF Downloads 247
424 Pediatric Hearing Aid Use: A Study Based on Data Logging Information
Authors: Mina Salamatmanesh, Elizabeth Fitzpatrick, Tim Ramsay, Josee Lagacé, Lindsey Sikora, JoAnne Whittingham
Abstract:
Introduction: Hearing loss (HL) is one of the most common disorders presenting at birth and in early childhood. Universal newborn hearing screening (UNHS) has been adopted on the assumption that, with early identification of HL, children will have access to optimal amplification and intervention at younger ages, thereby taking advantage of the brain’s maximal plasticity. One particular challenge for parents in the early years is achieving consistent hearing aid (HA) use, which is critical to the child’s development and constitutes the first step in the rehabilitation process. This study examined the consistency of hearing aid use in young children based on data logging information documented during audiology sessions in the first three years after hearing aid fitting. Methodology: The first 100 children who were diagnosed with bilateral HL before 72 months of age between 2003 and 2015 in a pediatric audiology clinic and who had at least two hearing aid follow-up sessions with available data logging information were included in the study. Data from each audiology session (age of child at the session, average hours of use per day for each ear in the first three years after HA fitting) were collected. Clinical characteristics (degree of hearing loss, age of HA fitting) were also documented to further the understanding of factors that impact HA use. Results: Preliminary analysis of the first 20 children shows that all of them (100%) have at least one data logging session recorded in the clinical audiology system (Noah). Of the 20 children, 17 (85%) have three data logging events recorded in the first three years after HA fitting. Based on the statistical analysis of these first 20 cases, the median hours of use in the first follow-up session after the hearing aid fitting is 3.9 hours for the right ear, with an interquartile range (IQR) of 10.2 h. For the left ear, the median is 4.4 hours and the IQR is 9.7 h.
In the first session, 47% of the children used their hearing aids ≤5 hours a day, 12% between 5 and 10 hours, and 22% ≥10 hours. However, these children showed increased use by the third follow-up session, with a median (IQR) of 9.1 (2.5) hours for the right ear and 8.2 (5.6) hours for the left ear. By the third follow-up session, 14% of children used hearing aids ≤5 hours, while 38% used them ≥10 hours. Based on the preliminary results, factors like age and level of HL significantly impact the hours of use. Conclusion: The use of data logging information to assess the actual hours of HA use provides an opportunity to examine a) the challenges of families of young children with HAs, and b) the factors that impact use in very young children. Data logging, when used collaboratively with parents, can be a powerful tool to identify problems and to encourage and assist families in maximizing their child’s hearing potential.
Keywords: hearing loss, hearing aid, data logging, hours of use
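The median and interquartile-range (IQR) summaries reported in this abstract can be reproduced with the Python standard library. A minimal sketch, using illustrative made-up daily-use hours rather than the study's actual data-logging records:

```python
import statistics

def median_iqr(hours):
    """Return (median, interquartile range) of daily hearing-aid use hours."""
    med = statistics.median(hours)
    # quantiles(n=4) returns the three quartile cut points Q1, Q2, Q3
    q1, _, q3 = statistics.quantiles(hours, n=4)
    return med, q3 - q1

# Illustrative (invented) hours of use for one follow-up session, one ear
right_ear = [0.5, 1.2, 3.9, 4.0, 6.5, 9.8, 12.1, 13.0]
med, iqr = median_iqr(right_ear)
print(f"median = {med} h, IQR = {iqr} h")
```

In a clinical workflow, each child's data-logging readout would supply one value per session, so the same function applies per ear and per visit.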
Procedia PDF Downloads 230
423 Creative Mathematical Modelling Videos Developed by Engineering Students
Authors: Esther Cabezas-Rivas
Abstract:
Ordinary differential equations (ODE) are a fundamental part of the curriculum for most engineering degrees, and students typically have difficulties with the ensuing abstract mathematical calculations. To enhance their motivation and capitalize on the fact that they are digital natives, we propose a teamwork project that includes the creation of a video. The video should explain how to mathematically model a real-world problem, transforming it into an ODE, which should then be solved using the tools learned in the lectures. This idea was implemented with first-year students of a BSc in Engineering and Management during the period of online learning caused by the outbreak of COVID-19 in Spain. Each group of 4 students was assigned a different topic: modeling a hot water heater, searching for the shortest path, designing the quickest delivery route, cooling a computer chip, the shape of the hanging cables of the Golden Gate, detecting land mines, rocket trajectories, etc. Each topic was worked out through two complementary channels: a written report describing the problem and a 10-15 min video on the subject. The report includes the following items: a description of the problem to be modeled, a detailed derivation of the ODE that models the problem, its complete solution, and an interpretation in the context of the original problem. We report the outcomes of this teaching-in-context and active learning experience, including the feedback received from the students. They highlighted the encouragement of creativity and originality, skills that they do not typically associate with mathematics. Additionally, the video format (unlike a common presentation) has the advantage of allowing them to critically review and self-assess the recording, repeating some parts until the result is satisfactory. As a side effect, they felt more confident about their oral abilities. In short, students agreed that they had fun preparing the video.
They recognized that it was tricky to combine deep mathematical content with entertainment since, without the latter, it is impossible to keep people watching the video to the end. Despite this difficulty, after the activity they claimed to understand the material better, and they enjoyed showing the videos to family and friends during and after the project.
Keywords: active learning, contextual teaching, models in differential equations, student-produced videos
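As an illustration of the kind of model behind one of the assigned topics, a hot water heater left to cool follows Newton's law of cooling, dT/dt = -k(T - T_env). The sketch below (all parameter values are illustrative assumptions, not taken from the students' reports) integrates the ODE with a simple forward Euler scheme and compares the result against the closed-form solution T(t) = T_env + (T0 - T_env)e^(-kt):

```python
import math

def euler_cooling(T0, T_env, k, dt, steps):
    """Integrate dT/dt = -k*(T - T_env) with the forward Euler method."""
    T = T0
    temps = [T]
    for _ in range(steps):
        T += dt * (-k * (T - T_env))  # one explicit Euler step
        temps.append(T)
    return temps

# Water at 80 C in a 20 C room, k = 0.1 per minute, time step 0.5 min, 60 min total
temps = euler_cooling(80.0, 20.0, 0.1, 0.5, 120)
exact = 20.0 + 60.0 * math.exp(-0.1 * 60.0)  # closed-form value at t = 60 min
print(temps[-1], exact)
```

Halving `dt` roughly halves the gap between the numerical and exact values, which is itself a useful talking point for a student video on discretization error.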
Procedia PDF Downloads 145
422 A Smartphone-Based Real-Time Activity Recognition and Fall Detection System
Authors: Manutchanok Jongprasithporn, Rawiphorn Srivilai, Paweena Pongsopha
Abstract:
Falls are among the most serious accidents, leading to increased unintentional injuries and mortality. Falls not only cause suffering and functional impairment to individuals but also increase medical costs and days away from work. The early detection of falls could help reduce fall-related injuries and their consequences. Smartphones, with embedded accelerometers, have become common devices in everyday life due to decreasing technology costs. This paper explores a physical activity monitoring and fall detection application for smartphones, a non-invasive biomedical device to determine physical activities and fall events. The combination of application and sensors can act as a biomedical sensor to monitor physical activities and recognize a fall. We chose an Android-based smartphone for this study since the Android operating system is open-source and free of cost. Moreover, Android phone users constitute the majority of Thai smartphone users. We developed Thai 3 Axis (TH3AX), a physical activity and fall detection application that includes commands, a manual, and results in the Thai language. The smartphone was attached to the right hip of 10 young, healthy adult subjects (5 males, 5 females; aged < 35 y) to collect accelerometer and gyroscope data while performing physical activities (e.g., walking, running, sitting, and lying down) and falling, in order to determine a threshold for each activity. Dependent variables include accelerometer data (acceleration, peak acceleration, average resultant acceleration, and time between peak accelerations). A repeated measures ANOVA was performed to test whether there are any differences between the DVs' means. Statistical analyses were considered significant at p<0.05. After the thresholds were determined, the results were used as training data for a predictive model of activity recognition. In the future, the accuracy of activity recognition will be evaluated to assess the overall performance of the classifier.
Moreover, to help improve quality of life, our system will be implemented with patients and elderly people who need intensive care in hospitals and nursing homes in Thailand.
Keywords: activity recognition, accelerometer, fall, gyroscope, smartphone
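The resultant acceleration and peak-threshold logic described in this abstract can be sketched as follows. This is a minimal illustration only: the 2.5 g threshold and the sample triplets are invented placeholders, whereas the study derives per-activity thresholds empirically from the collected data.

```python
import math

def resultant_acc(samples):
    """Resultant acceleration magnitude for (ax, ay, az) samples, in g."""
    return [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]

def detect_fall(samples, peak_threshold=2.5):
    """Flag a fall when peak resultant acceleration exceeds the threshold.

    The threshold value here is a placeholder, not the study's calibrated one.
    """
    return max(resultant_acc(samples)) > peak_threshold

# Invented accelerometer triplets: quiet walking vs. an impact-like spike
walking = [(0.1, 0.2, 1.0), (0.3, 0.1, 1.1), (0.2, 0.2, 0.9)]
fall = [(0.1, 0.1, 1.0), (1.8, 2.0, 2.2), (0.0, 0.1, 0.4)]
print(detect_fall(walking), detect_fall(fall))
```

A deployed classifier would also use the other dependent variables mentioned above (average resultant acceleration, time between peaks) rather than a single peak test.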
Procedia PDF Downloads 692
421 Specialised Financial Institutions and Their Role in the Promotion of Small and Medium Enterprises in Kerala, India
Authors: K. V. Venugopalan
Abstract:
Micro, Small and Medium Enterprises (MSMEs) have been accepted as the engine of economic growth and of equitable development. The major advantage of the sector is its employment potential at low capital cost; the labour intensity of the MSME sector is much higher than that of large enterprises. MSMEs constitute over 90% of total enterprises in most economies and are credited with generating the highest rates of employment growth, accounting for a major share of industrial production and exports. Kerala, a small state in India with a limited land area but high potential in educated human resources, needs micro, small and medium enterprises for its development. Kerala has the highest Physical Quality of Life Index (PQLI) in India and the highest Human Development Index (HDI), on par with developed countries. SMEs play an important role in alleviating poverty and contribute significantly towards the growth of developing economies. Financial institutions can play a vital role in the promotion of micro, small and medium enterprises in Kerala. The study, entitled ‘Financial Institutions and their Role in the Promotion of Small and Medium Enterprises in Kerala’, examines the progress of MSMEs in Kerala and India, the role of financial institutions, and the problems faced by entrepreneurs in obtaining advances, with reference to the Kerala Financial Corporation, an agency set up by the government for promoting small and medium enterprises in the state. This study is based on both secondary and primary data. Primary data were collected from entrepreneurs who availed of advances from financial institutions. The secondary data include the investment made, goods and services provided, employment generated and the number of units registered in the MSME sector over the last 10 years in Kerala.
The study concluded that financial institutions providing finance with simple procedures and lower interest rates will increase the number of MSMEs, contribute to the gross state domestic product, and reduce unemployment and poverty in the economy.
Keywords: gross state domestic product, human development index, micro, small and medium enterprises
Procedia PDF Downloads 410
420 Preliminary Study of Gold Nanostars/Enhanced Filter for Keratitis Microorganism Raman Fingerprint Analysis
Authors: Chi-Chang Lin, Jian-Rong Wu, Jiun-Yan Chiu
Abstract:
Myopia, a ubiquitous condition in which eyesight must be corrected with optical lenses, affects many people in their daily lives. In recent years, younger people have taken an interest in contact lenses for their convenience and aesthetics. Clinically, the risk of eye infection increases with incorrect, unsupervised contact lens use and cleaning, raising the risk of infection of the cornea, known as ocular keratitis. To meet these identification needs, a new detection or analysis method offering rapid and more accurate identification of clinical microorganisms is needed. In our study, we take advantage of Raman spectroscopy, whose unique fingerprints for different functional groups make it a distinct and fast examination tool for microorganisms. Raman scattering signals are normally too weak for detection, especially in the biological field. Here, we applied special SERS enhancement substrates to generate stronger Raman signals. The SERS filter we designed in this article was prepared by depositing silver nanoparticles directly onto a cellulose filter surface; suspension nanoparticles, gold nanostars (AuNSs), were also introduced together to achieve better enhancement for lower-concentration analytes (i.e., various bacteria). The research also focuses on the shape effect of the synthetic AuNSs: a needle-like surface morphology may create more hot-spots, yielding higher SERS enhancement. We utilized the newly designed SERS technology to distinguish the bacteria underlying ocular keratitis at the strain level, and specific Raman and SERS fingerprints were grouped through a pattern recognition process. We report a new method combining different SERS substrates that can be applied to clinical microorganism detection at the strain level with simple, rapid preparation and low cost.
Our SERS technology not only shows great potential for clinical bacteria detection but can also be used for environmental pollution and food safety analysis.
Keywords: bacteria, gold nanostars, Raman spectroscopy, surface-enhanced Raman scattering filter
Procedia PDF Downloads 168
419 Educating the Educators: Interdisciplinary Approaches to Enhance Science Teaching
Authors: Denise Levy, Anna Lucia C. H. Villavicencio
Abstract:
In a rapidly changing world, science teachers face considerable challenges. In addition to the basic curriculum, several transversal themes must be included, demanding creative and innovative strategies to arrange and integrate them into traditional disciplines. In Brazil, nuclear science is still a controversial theme, and teachers themselves often seem unaware of the issue, frequently perpetuating prejudice, errors and misconceptions. This article presents the authors’ experience in developing an interdisciplinary pedagogical proposal to include nuclear science in the basic curriculum in a transversal and integrating way. The methodology applied was based on the analysis of several normative documents that define the requirements of essential learning, competences and skills of basic education for all schools in Brazil. The didactic materials and resources were developed according to best practices for improving learning processes, privileging constructivist educational techniques with emphasis on active learning, collaborative learning and learning through research. The material consists of an illustrated book for students, a book for teachers and a manual with activities that can connect nuclear science to different disciplines: Portuguese, mathematics, science, art, English, history and geography. The content maintains high scientific rigor and relates nuclear technology to topics of interest to society in the most diverse spheres, such as food supply, public health, food safety and foreign trade. Moreover, this pedagogical proposal takes advantage of the potential of digital technologies, implementing QR codes that excite and challenge students of all ages, improving interaction and engagement. The expected results include educating the educators for nuclear science communication in a transversal and integrating way, demystifying nuclear technology in a contextualized and meaningful approach.
It is expected that the interdisciplinary pedagogical proposal will contribute to improving attitudes towards knowledge construction, privileging reconstructive questioning, fostering a culture of systematic curiosity and encouraging critical thinking skills.
Keywords: science education, interdisciplinary learning, nuclear science, scientific literacy
Procedia PDF Downloads 133
418 A Study on Adsorption Ability of MnO2 Nanoparticles to Remove Methyl Violet Dye from Aqueous Solution
Authors: Zh. Saffari, A. Naeimi, M. S. Ekrami-Kakhki, Kh. Khandan-Barani
Abstract:
The textile industries are becoming a major source of environmental contamination because an alarming amount of dye pollutants is generated during dyeing processes. Organic dyes are among the largest pollutants released into wastewater from textile and other industrial processes and have shown severe impacts on human physiology. Nano-structured compounds have gained importance in this category due to their high surface area and abundant reactive sites. In recent years, several novel adsorbents have been reported to possess great adsorption potential due to their enhanced adsorptive capacity. Nano-MnO2 has great potential in the field of environmental protection and has gained importance in this category because it offers a wide variety of structures with large surface areas. The diverse structures and chemical properties of manganese oxides are exploited in applications such as adsorbents and sensor catalysis, as well as in wide-ranging catalytic applications such as the degradation of dyes. In this study, the adsorption of Methyl Violet (MV) dye from aqueous solutions onto MnO2 nanoparticles (MNP) has been investigated. The surface of these nanoparticles was characterized by particle size analysis, Scanning Electron Microscopy (SEM), Fourier Transform Infrared (FTIR) spectroscopy and X-Ray Diffraction (XRD). The effects of process parameters such as initial concentration, pH, temperature and contact duration on the adsorption capacity have been evaluated, with pH found to be the most influential parameter. The equilibrium data were analyzed using the Langmuir and Freundlich isotherm models, and kinetic models such as the pseudo-first-order and pseudo-second-order models and the Elovich equation were used to describe the kinetic data. The experimental data were well fitted by the Langmuir adsorption isotherm model and the pseudo-second-order kinetic model.
The thermodynamic parameters, namely the free energy of adsorption (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°), were also determined and evaluated.
Keywords: MnO2 nanoparticles, adsorption, methyl violet, isotherm models, kinetic models, surface chemistry
Procedia PDF Downloads 258
417 Mechanistic Understanding of the Differences between Two Cholera-Causing Strains and Prediction of Therapeutic Strategies for Cholera Patients Infected with the New Strain Vibrio cholerae El Tor Using Constraint-Based Modelling
Authors: Faiz Khan Mohammad, Saumya Ray Chaudhari, Raghunathan Rengaswamy, Swagatika Sahoo
Abstract:
Cholera, caused by the pathogenic gut bacterium Vibrio cholerae (VC), is a major health problem in developing countries. Different strains of VC exhibit variable responses subject to different extracellular media (Nag et al., Infect Immun, 2018). In this study, we present a new approach to model the variable VC responses in mono- and co-cultures subject to a continuously changing growth medium, which is otherwise difficult with a simple FBA model. Nine VC strain models and seven E. coli (EC) models were assembled and considered. The continuously changing medium is modelled using a new iterative controlled-medium technique (ITC). The medium is appropriately prefixed with the VC model secretome. As flux through the bacterial biomass reaction increases, the model secretes certain by-products. These products add to the medium, either shifting its nutrient potential or blocking certain metabolic components of the model, effectively forming a controlled feedback loop. Different VC model set-ups were considered: VC in monoculture in a glucose-enriched medium, and VC strains in co-culture with EC. Constrained to a glucose-enriched medium, (i) the VC_Classical model resulted in higher flux through the acidic secretome, suggesting a pH change of the medium and a consequent lowering of its biomass, in consonance with literature reports. (ii) When compared for the neutral secretome, flux through the acetoin exchange was higher in the VC_El Tor model than in the classical models, suggesting that El Tor requires an acidic partner to lower its biomass. (iii) Seven of the nine VC models predicted that 3-methyl-2-oxovaleric acid, myristic acid, folic acid, and acetate significantly affect the corresponding biomass reactions. (iv) V. parahaemolyticus and V. vulnificus were found to be phenotypically similar to the VC Classical strain across the nine VC strains. The work demonstrates the advantage of the ITC over regular flux balance analysis for modelling a varying growth medium.
Future expansion to co-cultures opens the way to identifying novel interaction partners as effective cholera therapeutics.
Keywords: cholera, Vibrio cholerae El Tor, Vibrio cholerae classical, acetate
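The controlled feedback loop at the heart of the ITC can be illustrated with a deliberately simplified toy: after each growth step, the secreted by-product is fed back into the medium, where it inhibits the next step's growth. The growth and secretion rules below are invented placeholders standing in for the paper's strain-specific FBA models; only the loop structure mirrors the described technique:

```python
def growth_rate(glucose, acetate):
    """Placeholder kinetics: growth rises with glucose and is inhibited by
    the acidic by-product (acetate) accumulating in the medium."""
    return max(0.0, 0.5 * glucose / (glucose + 1.0) - 0.05 * acetate)

def itc_loop(glucose=10.0, steps=20):
    """Iterative controlled medium: each step's secretion updates the medium,
    which in turn constrains the next step's growth."""
    acetate, biomass = 0.0, 1.0
    for _ in range(steps):
        mu = growth_rate(glucose, acetate)
        biomass *= (1.0 + mu)
        glucose = max(0.0, glucose - 0.5 * mu * biomass)  # nutrient consumption
        acetate += 0.3 * mu * biomass                     # secretion feeds back
    return biomass, glucose, acetate

biomass, glucose, acetate = itc_loop()
print(round(biomass, 2), round(glucose, 2), round(acetate, 2))
```

In the actual study, each iteration would instead solve an FBA linear program for the strain model and update the exchange-reaction bounds of the shared medium, but the feedback structure is the same.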
Procedia PDF Downloads 162
416 Smart Books as a Supporting Tool for Developing Skills of Designing and Employing Webquest 2.0
Authors: Huda Alyami
Abstract:
The present study aims to measure the effectiveness of an "Interactive eBook" in developing the skills of designing and employing webquests among female intern teachers. The study uses a descriptive analytical methodology as well as a quasi-experimental methodology. The sample of the study consists of (30) female intern teachers from the Department of Special Education (in the tracks of Gifted Education and Learning Difficulties), during the first semester of the academic year 2015, at King Abdul-Aziz University in Jeddah city. The sample is divided into (15) female intern teachers for the experimental group and (15) for the control group. A set of qualitative and quantitative tools was prepared and verified for the study, comprising: a list of webquest-designing skills, a list of webquest-employing skills, a webquest knowledge achievement test, a product rating card, an observation card, and an interactive eBook. The study reached the following results: 1. After pre-control, there are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the webquest knowledge achievement test, in favor of the experimental group. 2. There are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the product rating card, in favor of the experimental group. 3. There are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the observation card, in favor of the experimental group.
In light of these findings, the study recommends taking advantage of interactive eBooks when teaching educational courses across various disciplines at the university level, and creating participative educational platforms to share interactive eBooks for various disciplines at the local and regional levels. The study suggests conducting further qualitative studies on the effectiveness of interactive eBooks, in addition to studies on the use of Web 2.0 in webquests.
Keywords: interactive eBook, webquest, design, employing, develop skills
Procedia PDF Downloads 183
415 End-Users Tools to Empower and Raise Awareness of Behavioural Change towards Energy Efficiency
Authors: G. Calleja-Rodriguez, N. Jimenez-Redondo, J. J. Peralta Escalante
Abstract:
This research work aims to develop a solution that takes advantage of the potential energy savings related to occupants' behaviour, estimated at between 5 and 30% according to existing studies. For that purpose, the following methodology was followed: 1) literature review and gap analysis, 2) definition of the concept and functional requirements, and 3) evaluation and feedback by experts. As a result, the concept has been defined for a tool-box that implements continuous behaviour change interventions, termed engagement methods, based on increasing energy literacy, increasing energy visibility, using a bonus system, etc. These engagement methods are deployed through a set of ICT tools: Building Automation and Control System (BACS) add-on services installed in buildings, and user apps installed on smartphones, smart TVs or dashboards. The tool-box, called eTEACHER, identifies energy conservation measures (ECM) based on behavioural change through a what-if analysis that collects information about the building and its users (comfort feedback, behaviour, etc.) and carries out cost-effectiveness calculations to provide outputs such as efficient control settings for building systems. This information is processed and presented in an attractive way as tailored advice to the energy end-users. eTEACHER's goal is therefore to change the behaviour of a building's energy users towards energy efficiency, comfort and better health conditions by deploying customized ICT-based interventions that take into account building typology (schools, residential, offices, health care centres, etc.), user profile (occupants, owners, facility managers, employers, etc.) as well as cultural and demographic factors. One of the main findings of this work is that technological interventions for behavioural change commonly fail to consult, train and support users regarding technological changes, leading to poor performance in practice.
In conclusion, a strong need has been identified to carry out social studies to identify relevant behavioural issues and effective pro-environmental behavioural change strategies.
Keywords: energy saving, behavioural change, building users, engagement methods, energy conservation measures
Procedia PDF Downloads 170
414 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances
Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst
Abstract:
Surface Enhanced Raman Spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solution, even in aqueous solutions, which is its advantage over IR. This technique can be used in pharmacy (to check the purity of products), forensics (to determine whether any illegal substances were present at a crime scene), or medicine (serving as a medical test), among many other fields. Due to the high potential of this technique and its increasing popularity in analytical laboratories, and at the same time the absence of appropriate platforms enhancing the SERS signal (crucial for observing the Raman effect at low analyte concentrations in solution (1 ppm)), we decided to develop our own SERS platforms. As the enhancing layer, we chose gold and silver nanoparticles, because these two have the best SERS properties, and each has an affinity for the other kind of particles, which increases the range of research capabilities. The next step was to commercialize them, which resulted in the creation of the company ‘SERSitive.eu’, focused on the production of highly sensitive (EF = 10⁵-10⁶), homogeneous and reproducible (70-80%) substrates. SERSitive SERS substrates are made by electrodeposition of silver or silver-gold nanoparticles. Through a very detailed analysis of data from studies optimizing parameters such as deposition time, reaction solution temperature, applied potential, reducer used, and reagent concentrations, with a standard compound, p-mercaptobenzoic acid (PMBA), at a concentration of 10⁻⁶ M, we developed a high-performance process for depositing noble metal nanoparticles on the surface of ITO glass. To assess the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have a great affinity for metal surfaces (e.g., PMBA), we obtained very good results for those less suited to SERS measurements.
We successfully obtained intensive and, more importantly, highly repeatable spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M) and medicines, as well as bacteria (Listeria, Salmonella, Escherichia coli) and fungi.
Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive
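The enhancement factor (EF) quoted above is conventionally estimated as the SERS signal per probed molecule relative to the normal Raman signal per molecule, EF = (I_SERS / N_SERS) / (I_ref / N_ref). A minimal sketch; the intensities and molecule counts are invented placeholders chosen only to land in the 10⁵-10⁶ range reported for these substrates:

```python
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """EF = (I_SERS / N_SERS) / (I_ref / N_ref): signal per molecule on the
    SERS substrate relative to normal (non-enhanced) Raman scattering."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Invented example: strong SERS signal from far fewer probed molecules
ef = enhancement_factor(i_sers=5.0e4, n_sers=1.0e6, i_ref=1.0e3, n_ref=1.0e10)
print(f"EF = {ef:.1e}")
```

In practice, estimating N_SERS and N_ref (surface coverage versus scattering volume) is the hard part of the measurement, which is why reported EF values are usually order-of-magnitude figures.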
Procedia PDF Downloads 151
413 Knowledge Loss Risk Assessment for Departing Employees: An Exploratory Study
Authors: Muhammad Saleem Ullah Khan Sumbal, Eric Tsui, Ricky Cheong, Eric See To
Abstract:
Organizations face the threat of valuable knowledge loss when employees leave, whether due to retirement, resignation, a job change, or death and disability. Owing to changing economic conditions, globalization and an aging workforce, organizations face challenges in retaining valuable knowledge. On the one hand, a large number of employees are about to retire; on the other hand, the younger generation does not want to stay in one company for long, and there is an increasing trend of frequent job changes among the new generation. Because of these factors, organizations need to make sure that they capture an employee's knowledge before he or she walks out of the door. The first step in this process is to know what type of knowledge the employee possesses and whether this knowledge is important for the organization. The literature reveals that despite the serious consequences of knowledge loss for organizational productivity and competitive advantage, little work has been done on the knowledge loss assessment of departing employees. An important step in the knowledge retention process is to determine the critical ‘at risk’ knowledge. Knowledge loss risk assessment is thus a process by which organizations can gauge the importance of a departing employee's knowledge. The purpose of this study is to explore knowledge loss risk assessment by conducting a qualitative study in the oil and gas sector. By engaging in dialogues with managers and executives of the organizations through in-depth interviews and adopting a grounded methodology approach, the research explores: i) Are there any measures adopted by organizations to assess the risk of knowledge loss from departing employees? ii) Which factors are crucial for knowledge loss assessment in the organizations? iii) How can employees be prioritized for knowledge retention according to their criticality?
A grounded theory approach is used when little knowledge is available in the area under research, so that new knowledge about the topic is generated through in-depth exploration using methods such as interviews and a systematic approach to data analysis. The outcome of the study will be a model of knowledge loss risk based on factors such as the likelihood of knowledge loss, the consequence/impact of knowledge loss, and the quality of the knowledge lost with departing employees. Initial results show that knowledge loss assessment is crucial for organizations and helps determine what types of knowledge employees possess, e.g., organizational knowledge, subject matter expertise or relationship knowledge. On that basis, it can be assessed which employees are more important for the organization and how to prioritize the knowledge retention process for departing employees.
Keywords: knowledge loss, risk assessment, departing employees, Hong Kong organizations
Procedia PDF Downloads 408
412 Development of Electrochemical Biosensor Based on Dendrimer-Magnetic Nanoparticles for Detection of Alpha-Fetoprotein
Authors: Priyal Chikhaliwala, Sudeshna Chandra
Abstract:
Liver cancer is one of the most common malignant tumors, with a poor prognosis because it exhibits no symptoms in its early stages. An increased serum level of AFP is clinically considered a diagnostic marker for liver malignancy. Present diagnostic modalities include various types of immunoassays, radiological studies and biopsy. However, these tests suffer from slow response times, require significant sample volumes, achieve limited sensitivity, and are ultimately expensive and burdensome to patients. Considering all these aspects, an electrochemical biosensor based on dendrimer-magnetic nanoparticles (MNPs) was designed. Dendrimers are novel nano-sized, three-dimensional molecules with monodisperse structures. Poly-amidoamine (PAMAM) dendrimers with eight –NH₂ groups, using ethylenediamine as the core molecule, were synthesized via the Michael addition reaction. Dendrimers provide the added advantage of not only stabilizing Fe₃O₄ NPs but also performing multiple electron redox events and binding multiple biological ligands to their dendritic end-surface. Fe₃O₄ NPs, owing to their superparamagnetic behavior, can be exploited for magneto-separation. Fe₃O₄ NPs were stabilized with the PAMAM dendrimer by an in situ co-precipitation method. The surface coating was examined by FT-IR, XRD, VSM, and TGA analysis. The electrochemical behavior and kinetics were evaluated using CV, which revealed that the dendrimer-Fe₃O₄ NPs can be regarded as electrochemically active materials. The electrochemical immunosensor was designed by immobilizing anti-AFP onto the dendrimer-MNPs by a glutaraldehyde conjugation reaction. The bioconjugates were then incubated with the AFP antigen. The immunosensor was characterized electrochemically, indicating successful immuno-binding events.
The binding events were further studied using magnetic particle imaging (MPI), a novel imaging modality in which Fe₃O₄ NPs are used as tracer molecules with positive contrast. Multicolor MPI was able to clearly localize the AFP antigen and antibody and their binding. The results demonstrate immense potential for biosensing and for enabling MPI of AFP in clinical diagnosis.
Keywords: alpha-fetoprotein, dendrimers, electrochemical biosensors, magnetic nanoparticles
Procedia PDF Downloads 136
411 Departing beyond the Orthodoxy: An Integrative Review and Future Research Avenues of Human Capital Resources Theory
Authors: Long Zhang, Ian Hampson, Loretta O'Donnell
Abstract:
Practitioners in various industries, especially in the finance industry, which conventionally benefits from financial capital and resources, appear to be increasingly aware of the importance of human capital resources (HCR) after the 2008 Global Financial Crisis. Scholars from diverse fields have conducted extensive and fruitful research on HCR within their own disciplines. This review suggests that the mainstream of purely quantitative research alone is insufficient to provide a precise or comprehensive understanding of HCR. The complex relationships and interactions in HCR call for more integrative and cross-disciplinary research to understand it more holistically. The complex nature of HCR requires deep qualitative exploration based on in-depth data to capture the everydayness of organizational activities and to register its individuality and variety. Despite previous efforts, a systematic and holistic integration of HCR research across multiple disciplines is lacking. Using a retrospective analysis of articles published in the fields of economics, finance and management, including the psychology, human resources management (HRM), organizational behaviour (OB), industrial and organizational psychology (I-O psychology), organizational theory, and strategy literatures, this study summarizes and compares the major perspectives, theories, and findings of HCR research. A careful examination of the progress of the debates over HCR definitions and measurements in distinct disciplines enables an identification of the limitations and gaps in existing research. It also enables an analysis of the interplay of these concepts, as well as of the related concepts of intellectual capital, social capital, and Chinese guanxi, and of how they provide a broader perspective on the HCR-related influences on firms’ competitive advantage.
The study also introduces the theme of Environmental, Social and Governance (ESG) based investing, as the burgeoning body of ESG studies illustrates the rising importance of human and non-financial capital in the investment process. The ESG literature locates HCR within a broader research context of the value of non-financial capital in explaining firm performance. The study concludes with a discussion of new directions for future research that may help advance our knowledge of HCR.
Keywords: human capital resources, social capital, Chinese guanxi, human resources management
Procedia PDF Downloads 359
410 The Problematic Transfer of Classroom Creativity in Business to the Workplace
Authors: Kym Drady
Abstract:
This paper considers whether creativity is the missing link that would allow the evolution of organisational behaviour and profitability if it were ‘released’. It suggests that although many organisations try to engage their workforce and expect innovation, they fail to provide the means for its achievement. The paper suggests that creative thinking is the ‘glue’ that links organisational performance to profitability. A key role of a university today is to produce skilled and capable graduates. Increasing competition and internationalisation have meant that the employability agenda has never been more prominent within the field of education; as such, it should be a key consideration when designing and developing a curriculum. It has been suggested that creativity is a valuable personal skill and perhaps should be the focus of an organisation's business strategy in order to increase its competitive advantage in the twenty-first century. Flexible and agile graduates are now required to become creative in their use of skills and resources in an increasingly complex and sophisticated global market. The paper therefore asks: if this is the case, why does creativity fail to appear as a key curriculum subject in many business schools? It also considers why policy makers continue to neglect this critical issue when it could offer the ‘key’ to economic prosperity. Recent literature goes some way to addressing this by suggesting that small clusters of UK universities have started including some creativity in their PDP work. However, this paper builds on that work and proposes that creativity should become a central component of the curriculum. The paper suggests that creativity should appear in every area of the curriculum and that it should act as the link that connects productivity to profitability, rather than being marginalised as an additional part of the curriculum.
A range of data-gathering methods has been used, each drawn from a qualitative base, as it was felt that, due to the nature of the study, individuals' thoughts and feelings needed to be examined and reflection was important. The author also recognises the importance of her own reflection, both on the experiences of the students and their later working experiences and on the creative elements within the programme that she delivered. This paper is drawn from research undertaken by the author in relation to her PhD study, which explores the potential benefits of including creativity in the curriculum within business schools and the added value this could bring to graduates' employability. To conclude, creativity is, in the opinion of the author, the missing link to organisational profitability and as such should be prioritised, especially by higher education providers.
Keywords: business curriculum, higher education, creative thinking and problem-solving, creativity
Procedia PDF Downloads 274
409 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on an encoding scheme (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) applied to low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects of varying size, category, layout and number. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets, respectively, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the two CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different amount of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
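The aggregation pipeline the abstract describes — Fisher Vector encoding of local activations, per-scale L2 normalization, and average pooling across scales — can be sketched as follows. This is an illustrative approximation, not the authors' code: it uses only the first-order Fisher Vector statistics, a diagonal-covariance GMM from scikit-learn, and random arrays standing in for the CNN activations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """First-order Fisher Vector of local descriptors under a diagonal-covariance GMM."""
    n = descriptors.shape[0]
    gamma = gmm.predict_proba(descriptors)  # (n, K) soft assignments
    parts = []
    for k in range(gmm.n_components):
        # Whitened deviations from component k, weighted by the soft assignments.
        diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        parts.append(gamma[:, k] @ diff / (n * np.sqrt(gmm.weights_[k])))
    return np.concatenate(parts)  # length K * D

def scale_wise_pool(per_scale_fvs):
    """L2-normalize each scale's Fisher Vector, then average-pool across scales."""
    normed = [v / (np.linalg.norm(v) + 1e-12) for v in per_scale_fvs]
    return np.mean(normed, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for mid-layer CNN activations gathered at two scales.
    train_activations = rng.normal(size=(200, 8))
    gmm = GaussianMixture(n_components=4, covariance_type="diag",
                          random_state=0).fit(train_activations)
    fv_scale1 = fisher_vector(rng.normal(size=(50, 8)), gmm)
    fv_scale2 = fisher_vector(rng.normal(size=(60, 8)), gmm)
    pooled = scale_wise_pool([fv_scale1, fv_scale2])  # image-level representation
```

In the paper's setting the pooled vector would then be fed to a linear SVM (e.g. scikit-learn's `LinearSVC`); the per-scale normalization is what keeps a scale with denser activations from dominating the average.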
Procedia PDF Downloads 331
408 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) has come to contain a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload some computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in present common scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. Therefore, a systematic approach is needed that takes advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to the various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed as an abstraction of the cooperation of the heterogeneous processors, supporting task partition, communication and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate the executable code of the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between modules and computational units, including the mapping of the two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study, and the performance of algorithms such as contrast stretching is analyzed over implementations on various combinations of these processors.
The experimental results show that the heterogeneous computing system, using less than 35% of the resources, achieves performance similar to the pure-FPGA implementation at approximately the same energy efficiency.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
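The core idea of mapping tasks to the processor whose parallel characteristics suit them can be illustrated at the language level. This is a loose, hypothetical sketch: the paper's servant-execution-flow model targets real FPGA, GPU and CPU units via an intermediate language, whereas the routing table, task kinds and Python callables below are toy stand-ins invented for illustration.

```python
# Toy stand-ins for kernels compiled to each processing unit:
# an FPGA suits fine-grained bit-level work, a GPU suits wide
# data-parallel work, and a CPU suits control-heavy serial work.
KERNELS = {
    "fpga": lambda xs: [x & 0xFF for x in xs],  # bit-level masking
    "gpu":  lambda xs: [x * x for x in xs],     # element-wise map
    "cpu":  lambda xs: [x + 1 for x in xs],     # sequential logic
}

# Routing table: the parallel characteristic of a task decides
# which unit "serves" it (the role of the servant abstraction).
ROUTING = {
    "bitwise": "fpga",
    "data_parallel": "gpu",
    "sequential": "cpu",
}

def dispatch(task_kind, data):
    """Partition step: send a task to the unit matching its characteristics."""
    unit = ROUTING[task_kind]
    return unit, KERNELS[unit](data)
```

For example, `dispatch("data_parallel", [1, 2, 3])` routes to the GPU stand-in; in the real system the instantiation parameters would additionally control how much data-level parallelism each unit receives.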
Procedia PDF Downloads 118
407 Development of a Turbulent Boundary Layer Wall-Pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) that develops over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure-fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Accordingly, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate at the specific Reynolds and Mach numbers for which they were developed and less accurate under other flow conditions. Despite this, recent research into alternative methods for deriving such models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than the traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm, implemented in the statistical programming language R, and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through feature selection), and it is computationally faster than machine learning.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
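The study implements its stepwise algorithm in R; the same idea can be sketched in Python as forward stepwise selection that, at each step, adds the predictor that most lowers the AIC and stops when no addition helps. This is a minimal illustration under an ordinary least-squares assumption, with synthetic data in place of the wind-tunnel PSD measurements.

```python
import numpy as np

def aic(y, y_hat, n_params):
    """Gaussian-likelihood AIC for an OLS fit (up to an additive constant)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

def forward_stepwise(X, y):
    """Greedily add the column that most reduces AIC; stop when none does."""
    remaining = list(range(X.shape[1]))
    selected = []
    best_aic = np.inf
    improved = True
    while improved and remaining:
        improved = False
        scores = []
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])  # intercept + chosen columns
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            scores.append((aic(y, A @ coef, A.shape[1]), j))
        score, j = min(scores)
        if score < best_aic:  # keep the candidate only if AIC actually drops
            best_aic = score
            selected.append(j)
            remaining.remove(j)
            improved = True
    return selected
```

The AIC penalty of 2 per parameter is what filters out redundant or uncorrelated inputs: a column that explains no real variance rarely lowers the criterion, so the loop terminates — which is also where the overfitting risk noted above enters, since a noise column occasionally clears the threshold.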
Procedia PDF Downloads 135