Search results for: energy performance gaps
Paper Count: 19496

10406 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (representing the bare earth) and non-ground points (representing buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is a challenging task, as the canopy prevents laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most existing methods, the presented filtering algorithm is fully automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
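
A minimal 1D sketch of the weighted-spline idea on synthetic data (not the authors' implementation; the 0.5-sigma threshold and down-weighting factor are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_ground_filter(x, z, n_iter=5, smooth=None):
    """Iteratively fit a smoothing spline and down-weight returns that sit
    well above the fitted surface (likely canopy hits). x must be sorted."""
    w = np.ones_like(z)                        # start with uniform weights
    for _ in range(n_iter):
        spline = UnivariateSpline(x, z, w=w, s=smooth)
        residual = z - spline(x)               # positive -> above the surface
        sigma = residual.std() + 1e-9
        w = np.where(residual > 0.5 * sigma, 0.01, 1.0)  # penalise canopy
    return residual < 0.5 * sigma              # boolean mask of ground points

# toy profile: gently sloping terrain with ~30% canopy returns 2-15 m up
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 100, 500))
z = 0.05 * x + rng.normal(0, 0.1, 500)
canopy = rng.random(500) < 0.3
z[canopy] += rng.uniform(2, 15, canopy.sum())
ground = spline_ground_filter(x, z)
print("ground points kept:", int(ground.sum()), "of", len(z))
```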

Keywords: airborne laser scanning, digital terrain models, filtering, forested areas

Procedia PDF Downloads 131
10405 Using Data Mining in Automotive Safety

Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler

Abstract:

Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupants in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on the data emerging from sled tests according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, with the aim of reducing the number of tests.
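
The abstract does not specify the missing-data technique; as a hedged illustration, one standard option is nearest-neighbour imputation over similar tests (the column names and values below are hypothetical, not the study's data):

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Hypothetical sled-test table: each row is one test, columns are a few of
# the ~30 passive-safety parameters; NaN marks dropouts / missing logs.
tests = pd.DataFrame({
    "belt_load_kN":   [4.1, 3.8, np.nan, 4.4],
    "airbag_fire_ms": [22, np.nan, 25, 24],
    "chest_defl_mm":  [31, 29, 34, np.nan],
})

# Impute each missing value from the k most similar tests.
imputer = KNNImputer(n_neighbors=2)
completed = pd.DataFrame(imputer.fit_transform(tests), columns=tests.columns)
print(completed)
```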

Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact

Procedia PDF Downloads 370
10404 Epigenetic and Archeology: A Quest to Re-Read Humanity

Authors: Salma A. Mahmoud

Abstract:

Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas that will address some of the gaps in our current knowledge in understanding patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variations across history, and by utilizing these valuable remains, we can interpret past events, cultures and populations. In addition to their archeological, historical and anthropological importance, studying bones has great implications in other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, society characteristics and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors, which can only explain a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which, to the best of our knowledge, have not yet been investigated by anthropologists/paleoepigenetists, using a plethora of techniques (biological, computational, and statistical). Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector to study human beings in several contexts (social, cultural and environmental), and strengthen its essential role as a model system that can be used to investigate and reconstruct various cultural, political and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone materials (modern and ancient), as well as the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology and will highlight its essentiality as an interdisciplinary vector and a key material in archeological research.

Keywords: epigenetics, archeology, bones, chromatin, methylome

Procedia PDF Downloads 97
10403 Membrane Distillation Process Modeling: Dynamical Approach

Authors: Fadi Eleiwi, Taous Meriem Laleg-Kirati

Abstract:

This paper presents a complete dynamic model of a membrane distillation process. The model couples two consistent dynamic sub-models: a 2D advection-diffusion equation for the whole process and a modified heat equation for the membrane itself. The complete model describes the temperature diffusion phenomenon across the feed, membrane, permeate containers and boundary layers of the membrane. It gives an online and complete temperature profile for each point in the domain. It explains the heat conduction and convection mechanisms that take place inside the process in terms of mathematical parameters, and justifies process behavior during the transient and steady-state phases. The process is monitored for any sudden change in performance at any instant of time. In addition, the model assists in maintaining desired production rates and gives recommendations during membrane fabrication stages. System performance and parameters can be optimized and controlled using this complete dynamic model. The evolution of membrane boundary temperature with time, vapor mass transfer along the process, and the temperature difference between membrane boundary layers are depicted and included. Simulations were performed on the complete model with real membrane specifications. The plots show consistency between the 2D advection-diffusion model and the expected behavior of the system as well as the literature. The evolution of heat inside the membrane, from the transient response until reaching the steady-state response, is illustrated for fixed and varying times.
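
For reference, a 2D advection-diffusion sub-model for a temperature field typically takes the standard form below; the notation (u, v for velocity components, alpha for thermal diffusivity) is illustrative and may differ from the authors' exact formulation:

```latex
\frac{\partial T}{\partial t}
  + u\,\frac{\partial T}{\partial x}
  + v\,\frac{\partial T}{\partial y}
  = \alpha \left( \frac{\partial^{2} T}{\partial x^{2}}
                + \frac{\partial^{2} T}{\partial y^{2}} \right)
```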

Keywords: membrane distillation, dynamical modeling, advection-diffusion equation, thermal equilibrium, heat equation

Procedia PDF Downloads 256
10402 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced, and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups in the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in detecting inferior products in Phase II. For more efficient application of control charts, estimators that are robust against the contamination which may exist in Phase I are required. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn-estimator of scale, and the M-estimator of scale with logistic psi-function for estimating the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions for estimating the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
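
A hedged sketch of the general construction with one simple robust pair (median plus MAD); the study itself compares richer estimators such as Qn, Harrell-Davis and Hodges-Lehmann, and the data below are synthetic:

```python
import numpy as np
from scipy.stats import median_abs_deviation

rng = np.random.default_rng(1)
phase1 = rng.normal(10.0, 1.0, size=(25, 5))   # 25 rational subgroups, n = 5
phase1[3, 0] = 25.0                            # an occasional Phase I outlier

subgroup_centers = np.median(phase1, axis=1)
mu_hat = np.median(subgroup_centers)           # robust location estimate
sigma_hat = median_abs_deviation(phase1, axis=None, scale="normal")

n = phase1.shape[1]
ucl = mu_hat + 3 * sigma_hat / np.sqrt(n)      # 3-sigma Xbar limits
lcl = mu_hat - 3 * sigma_hat / np.sqrt(n)
print(f"LCL={lcl:.2f}  center={mu_hat:.2f}  UCL={ucl:.2f}")
```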

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 178
10401 Load Comparison between Different Positions during Elite Male Basketball Games: A Sport Metabolomics Approach

Authors: Kayvan Khoramipour, Abbas Ali Gaeini, Elham Shirzad, Øyvind Sandbakk

Abstract:

Basketball has different positions with individual movement profiles, which may influence metabolic demands. Accordingly, the present study aimed to compare the movement and metabolic load between different positions during elite male basketball games. The five main players from each of 14 teams (n = 70) who participated in the 2017-18 Iranian national basketball leagues were selected as participants. The players were defined as backcourt (Posts 1-3) and frontcourt (Posts 4-5). Video-based time-motion analysis (VBTMA) was performed based on players' individual running and shuffling speeds using Dartfish software. Movements were classified into high- and low-intensity running with and without the ball, as well as high- and low-intensity shuffling and static movements. Mean frequency, duration, and distance were calculated for each class, except for static movements, where only frequency was calculated. Saliva samples were collected from each player before and after 40-minute basketball games and analyzed using metabolomics. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) (for metabolomics data) and independent t-tests (for VBTMA) were used as statistical tests. Movement frequency, duration, and distance were higher in backcourt players (all p ≤ 0.05), while static movement frequency did not differ. Saliva samples showed that the levels of taurine, succinic acid, citric acid, pyruvate, glycerol, acetoacetic acid, acetone, and hypoxanthine were all higher in backcourt players, whereas lactate, alanine, 3-methylhistidine, and methionine were higher in frontcourt players. Based on metabolomics, we demonstrate that backcourt and frontcourt players have different metabolic profiles during games: backcourt players move clearly more during games and therefore rely more on aerobic energy, whereas frontcourt players rely more on anaerobic energy systems, in line with their less dynamic but more static movement patterns.
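
A hedged sketch of the PLS-DA step on a toy metabolite matrix (feature counts and values are illustrative, not the study's data; PLS-DA is PLS regression against a dummy-coded class membership vector):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(70, 12))          # 70 players x 12 metabolite levels
y = np.repeat([0, 1], 35)              # 0 = frontcourt, 1 = backcourt

pls = PLSRegression(n_components=2).fit(X, y)
scores = pls.transform(X)              # latent scores used for group separation
predicted = (pls.predict(X).ravel() > 0.5).astype(int)
print("training accuracy:", (predicted == y).mean())
```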

Keywords: basketball, metabolomics, saliva, sport loadomics

Procedia PDF Downloads 102
10400 Off-Body Sub-GHz Wireless Channel Characterization for Dairy Cows in Barns

Authors: Said Benaissa, David Plets, Emmeric Tanghe, Jens Trogh, Luc Martens, Leen Vandaele, Annelies Van Nuffel, Frank A. M. Tuyttens, Bart Sonck, Wout Joseph

Abstract:

Herd monitoring and management, in particular the detection of 'attention animals' that require care, treatment or assistance, is crucial for the reproduction status, health, and overall well-being of dairy cows. On large farms, traditional methods based on direct observation or analysis of video recordings become labour-intensive and time-consuming. Thus, automatic monitoring systems using sensors have become increasingly important to continuously and accurately track the health status of dairy cows. Wireless sensor networks (WSNs) and the internet of things (IoT) can be effectively used for health tracking of dairy cows to facilitate herd management and enhance cow welfare. Since on-cow measuring devices are energy-constrained, a proper characterization of the off-body wireless channel between the on-cow sensor nodes and the back-end base station is required for a power-optimized deployment of these networks in barns. The aim of this study was to characterize the off-body wireless channel in an indoor (barn) environment at 868 MHz using LoRa nodes. LoRa is an emerging wireless technology mainly targeted at WSNs and IoT networks. Both large-scale fading (i.e., path loss) and temporal fading were investigated. The obtained path loss values as a function of the transmitter-receiver separation were well fitted by a lognormal path loss model. The path loss showed an additional increase of 4 dB when the wireless node was actually worn by the cow. The temporal fading due to movement of other cows was well described by Rician distributions with a K-factor of 8.5 dB. Based on this characterization, network planning and energy consumption optimization of the on-body wireless nodes could be performed, which enables the deployment of reliable dairy cow monitoring systems.
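
A hedged sketch of fitting the log-distance (lognormal) path loss model, PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, to measurements; the distances and values below are synthetic, not the paper's data:

```python
import numpy as np

d0 = 1.0                                        # reference distance, metres
d = np.array([1, 2, 5, 10, 20, 50, 100.0])      # Tx-Rx separations
pl = np.array([40, 46, 54, 60, 67, 75, 82.0])   # measured path loss, dB

slope, intercept = np.polyfit(np.log10(d / d0), pl, 1)
n = slope / 10                                  # path loss exponent
shadowing_sigma = np.std(pl - (intercept + slope * np.log10(d / d0)))
print(f"PL(d0)={intercept:.1f} dB, exponent n={n:.2f}, sigma={shadowing_sigma:.1f} dB")
```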

Keywords: channel, channel modelling, cow monitoring, dairy cows, health monitoring, IoT, LoRa, off-body propagation, PLF, propagation

Procedia PDF Downloads 306
10399 Inducing Flow Experience in Mobile Learning: An Experiment Using a Spanish Learning Mobile Application

Authors: S. Jonsson, D. Millard, C. Bokhove

Abstract:

Smartphones are ubiquitous and frequently used as learning tools, which makes the design of educational apps an important area of research. A key issue is designing apps to encourage engagement while maintaining a focus on the educational aspects of the app. Flow experience, which refers to a mental state of cognitive absorption and positive emotion, is a promising concept for addressing this issue. Flow experience has been shown to be associated with positive emotion and increased learning performance. Studies have shown that immediate feedback is an antecedent to Flow. This experiment investigates the effect of immediate feedback on Flow experience. An app teaching Spanish phrases was developed, and 30 participants completed both a 10-minute session with immediate feedback and a 10-minute session with delayed feedback. The app contained a task where the user assembles Spanish phrases by pressing bricks with Spanish words. Immediate feedback was implemented by incorrect bricks recoiling, while correct bricks moved to form part of the finished phrase. In the delayed feedback condition, the user did not know whether the bricks they pressed were correct until the phrase was complete. The level of Flow experienced by the participants was measured after each session using the Flow Short Scale. The results showed that higher levels of Flow were experienced in the immediate feedback session. It was also found that 14 of the participants indicated that the demands of the task were 'just right' in the immediate feedback session, while only one did in the delayed feedback session. These results have implications for how to design educational technology and open up questions about how Flow experience can be used to increase performance and engagement.

Keywords: feedback timing, flow experience, L2 language learning, mobile learning

Procedia PDF Downloads 117
10397 A Validated High-Performance Liquid Chromatography-UV Method for Determination of Malondialdehyde: Application to a Study in Chronic Ciprofloxacin-Treated Rats

Authors: Anil P. Dewani, Ravindra L. Bakal, Anil V. Chandewar

Abstract:

The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV detection for the determination of malondialdehyde, as the malondialdehyde-thiobarbituric acid complex (MDA-TBA), in vivo in rats. The HPLC-UV method for MDA-TBA was run in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min−1, followed by UV detection at 278 nm. The chromatographic conditions were optimized by varying the concentration and pH, followed by changes in the percentage of organic phase. The optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile in the ratio 80:20 % v/v. The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, estimated from the standard deviation of the response and the slope of the calibration curve, were 110 ng/ml and 363 ng/ml, respectively. The method was linear for MDA spiked in plasma and subjected to derivatization at concentrations ranging from 100 to 1000 ng/ml. The precision of the developed method, measured in terms of relative standard deviations for intra-day and inter-day studies, was 1.6-5.0% and 1.9-3.6%, respectively. The HPLC method was applied for monitoring MDA levels in rats subjected to chronic treatment with ciprofloxacin (CFL) (5 mg/kg/day) for 21 days. Results were compared with findings in control-group rats. The mean peak areas of the two study groups were compared using an unpaired Student's t-test. The p-value was < 0.001, indicating significantly increased MDA levels in rats subjected to 21 days of chronic CFL treatment.
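
The quoted LOD and LOQ are consistent with the standard calibration-curve estimates; as a hedged reference (the abstract does not state the exact multipliers used):

```latex
% sigma: standard deviation of the response, S: slope of the calibration curve
\mathrm{LOD} \approx \frac{3.3\,\sigma}{S}, \qquad
\mathrm{LOQ} \approx \frac{10\,\sigma}{S}
% The quoted values give LOQ/LOD = 363/110 \approx 3.3, matching the
% usual ~3x gap between these sigma/S-based estimates.
```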

Keywords: MDA, TBA, ciprofloxacin, HPLC-UV

Procedia PDF Downloads 313
10397 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments

Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles

Abstract:

This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-robot collisions (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light-signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault-tolerance characteristics of the proposed approach.
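
A hedged sketch of the distance-based allocation idea; the link range and the ceiling rule are illustrative assumptions, not the paper's exact protocol:

```python
import math

def robots_needed(p_start, p_goal, link_range_m=2.0):
    """Estimate relay robots for a path from the point-to-point distance
    and each robot's reliable visual/communication range (assumed value)."""
    dist = math.dist(p_start, p_goal)
    return max(1, math.ceil(dist / link_range_m))

print(robots_needed((0.0, 0.0), (9.5, 3.0)))  # -> 5 relays for ~10 m
```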

Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal

Procedia PDF Downloads 32
10396 Effects of Land Certification in Securing Women’s Land Rights: The Case of Oromia Regional State, Central Ethiopia

Authors: Mesfin Nigussie Ibido

Abstract:

The study is designed to explore the effects of land certification in securing women's land rights in two rural villages of Robe district, Arsi Zone, Oromia regional state. Land is a critical asset for human survival and the backbone of rural women's livelihoods. Equal access to and control over land give rural women a chance to participate in different economic activities and improve their bargaining ability in decision making over their rights. Unfortunately, women were discriminated against and marginalized from access to and control of land for centuries through customary practices. However, in many countries, legal reform is used as a powerful tool for eliminating discriminatory provisions in property rights. Among other equity and efficiency concerns, the land certification program in Ethiopia attempts to address gender bias concerns of the current land-tenure system. The existing rural land policy recognizes women's land rights and has benefited women by strengthening wives' awareness of their land rights and contributing to the strong involvement of wives in decision making. However, owing to harmful practices and policy implementation problems, women in different areas of the country still do not fully exercise the provisions of their land rights. Thus, this study was carried out to examine the effect of land certification in securing women's land rights by eliminating the discriminatory cultural abuses in the study areas. Probability and non-probability sampling types were used, and the sample size was determined by using the sampling distribution of the proportion method. Systematic random sampling was applied by taking every nth element of the sample frame. Both quantitative and qualitative research methods were applied: in the quantitative method, a survey of 192 household respondents was conducted by administering questionnaires. The qualitative method comprised interviews and focus group discussions with rural women, case stories, and consultations with village and relevant district offices. Triangulation was applied in data collection, data presentation and the analysis of findings. The findings revealed that land certification has positively affected rural women by advancing their land rights, but some women are still challenged by unsolved problems in the study areas. The study forwards recommendations on the existing problems and gaps to ensure women's equal access to and control over land in the study areas.
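
The abstract does not give the exact sample-size formula; a common proportion-based choice (an assumption here, not confirmed by the authors) is Cochran's:

```latex
n = \frac{z^{2}\,p\,(1-p)}{e^{2}}
% e.g. z = 1.96, p = 0.5, e = 0.07  =>  n \approx 196,
% close to the 192 households surveyed.
```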

Keywords: decision making, effects, land certification, land right, tenure security

Procedia PDF Downloads 190
10395 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to quantify the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly dataset-dependent. The batch size evaluation has shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting a batch size of 32 on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 reduces the accuracy rate at the 11th epoch to 86.5%, and to 63% when using only one epoch. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produces 70.4286%. The final image-size experiment shows that accuracy improves with image size, although the gain comes at a high computational cost. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
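
A hedged sketch of this kind of batch-size/epoch sweep on synthetic stand-in data (a tiny CNN rather than AlexNet; the architecture and values are illustrative only):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(512, 3, 64, 64)                # stand-in 64x64 RGB patches
y = torch.randint(0, 10, (512,))               # 10 land-cover classes

def tiny_cnn(kernel_size=3):
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))

for batch_size in (32, 64, 128, 200):          # batch sizes from the abstract
    model, loss_fn = tiny_cnn(), nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    for epoch in range(11):                    # the abstract reports the 11th epoch
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    with torch.no_grad():
        acc = (model(X).argmax(1) == y).float().mean().item()
    print(f"batch={batch_size:>3}  train acc={acc:.2f}")
```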

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 157
10394 Producing Sustained Renewable Energy and Removing Organic Pollutants from Distillery Wastewater Using a Consortium of Sludge Microbes

Authors: Anubha Kaushik, Raman Preet

Abstract:

Distillery wastewater in the form of spent wash is a complex and strong industrial effluent, with a high load of organic pollutants that may deplete dissolved oxygen on being discharged into aquatic systems and contaminate groundwater by leaching of pollutants, while untreated spent wash disposed of on land acidifies the soil. Stringent legislative measures have therefore been framed in different countries for discharge standards of distillery effluent. Utilising the organic pollutants present in various types of wastes as food for mixed microbial populations has emerged as an eco-friendly approach in recent years, in which complex organic matter is converted into simpler forms and useful gases are simultaneously produced as renewable and clean energy sources. In the present study, wastewater from a rice-bran-based distillery has been used as the substrate in a dark fermenter, and a native microbial consortium from the digester sludge has been used as the inoculum to treat the wastewater and produce hydrogen. After optimising the operational conditions in batch reactors, sequential batch mode and continuous-flow stirred tank reactors were used to study the best operational conditions for enhanced and sustained hydrogen production and removal of pollutants. Since the rate of hydrogen production by the microbial consortium during dark fermentation is influenced by the concentration of organic matter, pH and temperature, these operational conditions were optimised in batch mode studies. The maximum hydrogen production rate (347.87 ml/L/d) was attained in 32 h of dark fermentation, while a good proportion of the COD was also removed from the wastewater. A slightly acidic initial pH seemed to favor biohydrogen production. In the continuous stirred tank reactor, high H2 production from distillery wastewater was obtained at a relatively short substrate retention time (SRT) of 48 h and a moderate organic loading rate (OLR) of 172 g/l/d COD.

Keywords: distillery wastewater, hydrogen, microbial consortium, organic pollution, sludge

Procedia PDF Downloads 270
10393 Efficiency and Scale Elasticity in Network Data Envelopment Analysis: An Application to International Tourist Hotels in Taiwan

Authors: Li-Hsueh Chen

Abstract:

Efficient operation is increasingly important for hotel managers. Unlike the manufacturing industry, hotels cannot store their products. In addition, many hotels provide room service and food and beverage service simultaneously. When the efficiencies of hotels are evaluated, the internal structure should be considered. Hence, based on the operational characteristics of hotels, this study proposes a DEA model to simultaneously assess the efficiencies of the room production division, food and beverage production division, room service division and food and beverage service division. However, not only the enhancement of efficiency but also the adjustment of scale can improve performance. In terms of the adjustment of scale, scale elasticity or returns to scale can help managers make decisions concerning expansion or contraction. In order to construct a reasonable approach to measure the efficiencies and scale elasticities of hotels, this study builds an alternative variable-returns-to-scale-based two-stage network DEA model combining parallel and series structures to explore the scale elasticities of the whole system, the room production division, the food and beverage production division, the room service division and the food and beverage service division, based on data from the international tourist hotel industry in Taiwan. The results may provide valuable information on operational performance and scale for managers and decision makers.
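
For orientation, a hedged sketch of a plain input-oriented, variable-returns-to-scale DEA efficiency score (a single-stage model with toy data; the paper's network model additionally splits production and service divisions):

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[5, 8, 6, 9.0],      # inputs:  e.g. staff (rows) per hotel (cols)
              [3, 6, 4, 7.0]])     #          floor area
Y = np.array([[4, 7, 6, 8.0]])     # outputs: e.g. revenue

def vrs_efficiency(j0):
    m, n = X.shape
    c = np.r_[1.0, np.zeros(n)]               # variables: [theta, lambda_1..n]
    # X @ lam <= theta * x_j0   and   -Y @ lam <= -y_j0
    A_ub = np.vstack([np.c_[-X[:, j0], X], np.c_[np.zeros(Y.shape[0]), -Y]])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    A_eq = np.c_[0.0, np.ones((1, n))]        # VRS: sum(lambda) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                            # theta in (0, 1]

print([round(vrs_efficiency(j), 3) for j in range(X.shape[1])])
```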

Keywords: efficiency, scale elasticity, network data envelopment analysis, international tourist hotel

Procedia PDF Downloads 215
10392 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients

Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming

Abstract:

Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
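
A hedged sketch of the kind of constraint-based (flux balance) computation involved, using COBRApy's downloadable E. coli "textbook" model as a stand-in for the mitochondrial reconstructions; the reaction IDs below belong to that stand-in model, not the study's:

```python
from cobra.io import load_model

model = load_model("textbook")          # small stand-in metabolic model
model.objective = "ATPM"                # proxy objective: maximise ATP turnover
solution = model.optimize()             # linear-programming flux solution

print("max ATP flux:", round(solution.objective_value, 2))
print(solution.fluxes.loc[["PGI", "PDH", "CS"]])  # a few central pathway fluxes
```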

Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry

Procedia PDF Downloads 280
10391 Cupric Oxide Thin Films for Optoelectronic Application

Authors: Sanjay Kumar, Dinesh Pathak, Sudhir Saralch

Abstract:

Copper oxide is a semiconductor that has been studied for several reasons, such as the natural abundance of the starting material copper (Cu), the ease of production by Cu oxidation, its non-toxic nature and its reasonably good electrical and optical properties. Copper oxide is well known as cuprite oxide. Cuprite is a p-type semiconductor with a band gap energy of 1.21 to 1.51 eV. As a p-type semiconductor, conduction arises from the presence of holes in the valence band (VB) due to doping/annealing. CuO is attractive as a selective solar absorber since it has high solar absorbency and low thermal emittance. CuO is a very promising candidate for solar cell applications, as it is a suitable material for photovoltaic energy conversion. It has been demonstrated that the dip technique can be used to deposit CuO films in a simple manner using a metallic chloride (CuCl₂.2H₂O) as the starting material. Copper oxide films were prepared using a methanolic solution of cupric chloride (CuCl₂.2H₂O) at three baking temperatures. Three samples were made, which turned black after heating. XRD data confirm that the films are of CuO phases at a particular temperature. The optical band gap of the CuO films calculated from optical absorption measurements is 1.90 eV, which is quite comparable to reported values. The dip technique is a very simple and low-cost method, which requires no sophisticated specialized setup. Coating of substrates with a large surface area can easily be achieved by this technique compared to physical evaporation techniques and spray pyrolysis. Another advantage of the dip technique is that it is very easy to coat both sides of the substrate instead of only one, and to deposit on otherwise inaccessible surfaces. This method is well suited for applying coatings on the inner and outer surfaces of tubes of various diameters and shapes. The main advantage of the dip coating method lies in the fact that it is possible to deposit a variety of layers having good homogeneity and mechanical and chemical stability with a very simple setup. In this paper, the preparation of CuO thin films by the dip coating method and their characterization will be presented.
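
Band gaps like the quoted 1.90 eV are commonly extracted from absorption spectra via a Tauc plot; a hedged sketch on a synthetic direct-gap spectrum (not the authors' data or necessarily their exact procedure):

```python
import numpy as np

h, c, e = 6.626e-34, 3.0e8, 1.602e-19
wavelength_nm = np.linspace(400, 900, 200)
E = h * c / (wavelength_nm * 1e-9) / e                  # photon energy, eV
alpha = np.sqrt(np.clip(E - 1.9, 0, None)) / E + 0.001  # toy absorption spectrum

tauc = (alpha * E) ** 2                        # (alpha*h*nu)^2 for a direct gap
mask = tauc > 0.2 * tauc.max()                 # keep the linear rise region
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
print(f"estimated Eg ~ {-intercept / slope:.2f} eV")  # extrapolate to zero
```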

Keywords: absorber material, cupric oxide, dip coating, thin film

Procedia PDF Downloads 299
10390 The Effect of Perceived Environmental Uncertainty on Corporate Entrepreneurship Performance: A Field Study in a Large Industrial Zone in Turkey

Authors: Adem Öğüt, M. Tahir Demirsel

Abstract:

Rapid changes and developments today, besides the opportunities and facilities they offer to organizations, may also be a source of danger and difficulty due to the uncertainty they bring. In order to take advantage of opportunities and to take the necessary measures against possible uncertainties, organizations must constantly follow the changes and developments that occur in the business environment and develop flexible structures and strategies for alternative scenarios. Perceived environmental uncertainty is an outcome of managers' perceptions of the combined complexity, instability and unpredictability of the organizational environment. An environment that is perceived to be complex, changing rapidly, and difficult to predict creates high levels of uncertainty about the appropriate organizational responses to external circumstances. In an uncertain and complex environment, organizations experiencing cutthroat competition may be successful by developing their corporate entrepreneurial ability. Corporate entrepreneurship is a process that includes many elements, such as innovation, creating new businesses, renewal, risk-taking and proactiveness. Successful corporate entrepreneurship is a critical factor that contributes significantly to gaining a sustainable competitive advantage, renewing the organization and adapting to the environment. In this context, the objective of this study is to investigate the effect of managers' perceived environmental uncertainty on corporate entrepreneurship performance. The research was conducted on 222 business executives in one of the major industrial zones of Turkey, the Konya Organized Industrial Zone (KOS). According to the results, there is a positive, statistically significant relationship between perceived environmental uncertainty and corporate entrepreneurial activities.

Keywords: corporate entrepreneurship, entrepreneurship, industrial zone, perceived environmental uncertainty, uncertainty

Procedia PDF Downloads 304
10389 The Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster: A Qualitative Study

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between the pre-hospital Emergency Medical Services (EMS) and Emergency Department (ED) hospitals is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base in relation to communication between pre-hospital EMS and hospital ED professionals through the use of Information and Communication Technology (ICT). This study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analyzed according to the key screening themes which emerged from the literature search. Twenty-two studies were included. Eleven studies employed quantitative methods, seven studies used qualitative methods, and four studies used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals, and in some cases, there are interagency agreements with pre-hospital and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communications within and between pre-hospital and hospital teams. (2) Communication systems used in the disaster. This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital services, technical issues have influenced communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system that can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analyzed disaster drills and training, the majority of these studies focused on hospital departments other than EMTs. These studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and hospital ED staff in relation to disaster response. The review shows that although different types of ICT are used, various issues remain which affect coordinated communication among the relevant professionals.

Keywords: emergency medical teams, communication, information and communication technologies, disaster

Procedia PDF Downloads 116
10388 The Effect of Different Exercise Intensities on Plasma Endostatin in Healthy Volunteers

Authors: Inayat Shah, Muhammad Omar Malik, Ghareeb Alshuwaier, Ronald H. Baxendale

Abstract:

Background: The balance between angiogenesis and angiostasis is important in growth and developmental processes in the body. Angiogenic and angiostatic mediators control this balance, and endostatin is one of the prominent angiostatic mediators. The marked angiostatic effects of endostatin include inhibiting endothelial cell migration and proliferation and promoting apoptosis. Physical activity decreases the risk and development of many angiogenesis-related health problems, including atherosclerosis and numerous cancers. The physiological influences of different physical activities on plasma endostatin concentration are controversial and not completely clear. Moreover, the influence of physical characteristics and metabolic predictors on circulating endostatin during physical activity remains indistinct and poorly understood. The study aimed to determine the effects of mild, moderate and vigorous exercise on the concentration of endostatin in plasma. Methodology: 22 participants, 16 males (age = 30.6 ± 7.8 years) and 6 females (age = 26.5 ± 5 years), were recruited. Weekly sessions of exercise at different intensities, based on the participants' predicted maximum heart rate [60% (low), 70% (moderate) and 80% (vigorous)], were carried out. The duration and work rate for each participant were determined through sub-maximal exercise. The sessions were standardized on the participants' total energy expenditure per session. One pre-exercise and two post-exercise samples were taken, at intervals of 10 and 60 minutes. Results: The pre-exercise mean endostatin was 101 ± 20 ng/dl. Low-intensity exercise produced no significant decrease in plasma endostatin concentration at 10 and 60 minutes (97 ± 20 ng/dl, p = 0.5; 98 ± 23 ng/dl, p = 0.8). However, moderate (p = 0.022, 0.004) and vigorous intensities (p ≤ 0.001, 0.02) increased endostatin concentrations significantly at both the 10- and 60-minute intervals, respectively. The effects were not significantly influenced by gender, exercise mode (walking vs. running), components of exercise (HR, speed, gradient, distance, duration) or metabolism during exercise (VO₂ max, VCO₂, RER, energy expenditure, rate of carbohydrate or fat oxidation). Conclusion: Low-intensity exercise did not influence endostatin concentration. However, moderate- to high-intensity exercise significantly increased endostatin concentration and may have potential benefits.

Keywords: angiogenesis, exercise, endostatin, physical activity

Procedia PDF Downloads 214
10387 Bi-Liquid Free Surface Flow Simulation of Liquid Atomization for Bi-Propellant Thrusters

Authors: Junya Kouwa, Shinsuke Matsuno, Chihiro Inoue, Takehiro Himeno, Toshinori Watanabe

Abstract:

Bi-propellant thrusters use impinging-jet atomization to atomize liquid fuel and oxidizer. The atomized propellants mix and combust due to auto-ignition. Therefore, simulating the primary atomization phenomenon is important for predicting a thruster's performance; in particular, the local mixture ratio can be used as an indicator of thrust performance, so it is useful to evaluate it from numerical simulations. In this research, we propose a numerical method that accounts for two liquids and their mixture, and implement it in CIP-LSM, a two-phase flow simulation solver that uses the level-set and MARS methods for interfacial tracking and can predict the local mixture ratio distribution downstream of the impingement point. A new parameter, beta, defined as the volume fraction of one liquid in the mixed liquid within a cell, is introduced, and the solver calculates the advection of beta as well as its inflow and outflow fluxes for each cell. To validate the solver, we conducted a simple experiment and ran the corresponding simulation. The results show that the solver correctly predicts the penetration length of a liquid jet, confirming that it can simulate the mixing of liquids. We then apply the solver to the numerical simulation of impinging-jet atomization. The inclination angle of the fan after impingement in the bi-liquid condition agrees reasonably with the theoretical value, and the mixture of the liquids is captured in the results. Furthermore, the simulation results clarify that the injection conditions drastically affect the atomization process and the local mixture ratio distribution downstream.
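
A hedged 1D sketch of advecting a volume fraction beta with an upwind flux, the kind of transport step described for the mixing parameter (the actual solver is multi-dimensional with level-set interface tracking; grid and velocity values are illustrative):

```python
import numpy as np

nx, dx, dt, u = 100, 0.01, 0.0005, 1.0        # grid, time step, velocity (u > 0)
beta = np.zeros(nx)
beta[10:30] = 1.0                              # slug of liquid A in liquid B

for _ in range(500):
    flux = u * beta                            # upwind: u > 0 takes the left cell
    beta[1:] -= dt / dx * (flux[1:] - flux[:-1])
    beta[0] = 0.0                              # inflow of pure liquid B

print("front has moved to cell", int(np.argmax(beta > 0.5)))
```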

Keywords: bi-propellant thrusters, CIP-LSM, free-surface flow simulation, impinging jet atomization

Procedia PDF Downloads 272
10386 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on performance criteria, the most important being the accuracy of skew angle detection, the range of skew angles detectable, processing speed, computational complexity and, consequently, memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angle step size is; a fine step size consumes more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the tension between memory space, running time and accuracy. Our algorithm first estimates the angle to zero decimal places using the standard Hough Transform, which achieves minimal running time and space but limited accuracy. Then, to increase accuracy, if the angle estimated by the first pass is x degrees, the basic algorithm is run again over a narrow range around x with a step size of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented in MATLAB. Memory space and processing time are also tabulated, assuming skew angles between 0° and 45°. The simulation results in MATLAB show the high performance of our algorithm, detecting document skew with less computational time and memory space for a variety of documents with different levels of complexity.
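
A hedged Python sketch of the coarse-to-fine idea (the paper's implementation is in MATLAB; the sweep ranges and toy test below are illustrative):

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks, rotate

def estimate_skew_deg(edges, coarse=1.0, fine=0.1, span=45.0):
    """Coarse 1-degree Hough sweep, then a 0.1-degree sweep only around
    the coarse peak, instead of sweeping finely everywhere."""
    sweep = np.deg2rad(np.arange(90.0 - span, 90.0 + span, coarse))
    h, th, d = hough_line(edges, theta=sweep)
    _, peak_th, _ = hough_line_peaks(h, th, d, num_peaks=1)
    x = np.rad2deg(peak_th[0])                       # coarse estimate

    sweep = np.deg2rad(np.arange(x - coarse / 2, x + coarse / 2, fine))
    h, th, d = hough_line(edges, theta=sweep)
    _, peak_th, _ = hough_line_peaks(h, th, d, num_peaks=1)
    return np.rad2deg(peak_th[0]) - 90.0             # skew vs. horizontal

# toy test: a few horizontal "text lines" rotated by ~3.4 degrees
img = np.zeros((200, 200))
img[60::40, 20:180] = 1.0
print(round(estimate_skew_deg(rotate(img, 3.4) > 0.5), 1))
# -> ~3.4 in magnitude (sign follows the Hough angle convention)
```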

Keywords: Hough transform, skew detection, skew angle, skew correction, text document

Procedia PDF Downloads 143
10385 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods

Authors: Matthew D. Baffa

Abstract:

Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies, such as the location and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of the convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures, along with emissivity values for various components of the exterior above-grade wall assemblies, were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from the materials and dimensions detailed in the architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home, as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between 2% and 33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
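
A hedged sketch of the energy-balance idea: equate conduction through the wall to the convective plus radiative losses read from IR surface temperatures. The coefficient values are illustrative assumptions, not the study's:

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2K^4

def wall_u_value(t_in, t_out, t_surf, t_refl, emissivity=0.9, h_c=10.0):
    """All temperatures in kelvin (exterior-surface formulation); returns
    U in W/m^2K."""
    q_conv = h_c * (t_surf - t_out)                        # convection off the wall
    q_rad = emissivity * SIGMA * (t_surf**4 - t_refl**4)   # net radiation out
    return (q_conv + q_rad) / (t_in - t_out)               # conduction balance

# e.g. 21 C indoors, -10 C outdoor air, -8 C exterior surface, -15 C reflected
print(round(wall_u_value(294.15, 263.15, 265.15, 258.15), 3))
```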

Keywords: emissivity, heat loss, infrared thermography, thermal conductance

Procedia PDF Downloads 299
10384 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate archiving of historical and novel shark observations. The base accuracy of genus prediction was 68% across 25 genera, and the average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. The prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
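
A hedged sketch of the transfer-learning step described (an ImageNet-pretrained backbone with a new species head); the backbone choice and layer freezing are illustrative assumptions, not the Shark Detector's actual architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

num_species = 45
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                  # freeze generic ImageNet features
backbone.fc = nn.Linear(backbone.fc.in_features, num_species)  # new head

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(8, 3, 224, 224)              # stand-in shark image batch
y = torch.randint(0, num_species, (8,))      # stand-in species labels
loss = loss_fn(backbone(x), y)
loss.backward()
opt.step()
print(float(loss))
```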

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 102
10383 Rising Levels of Greenhouse Gases: Implication for Global Warming in Anambra State South Eastern Nigeria

Authors: Chikwelu Edward Emenike, Ogbuagu Uchenna Fredrick

Abstract:

About 34% of the solar radiant energy reaching the earth is immediately reflected back to space as incoming radiation by clouds, chemicals and dust in the atmosphere and by the earth's surface. Most of the remaining 66% warms the atmosphere and land. Most of the incoming solar radiation not reflected away is degraded into low-quality heat and flows into space. The rate at which this energy returns to space as low-quality heat is affected by the presence of molecules of greenhouse gases. Gaseous emissions were measured with the aid of a Growen gas analyzer with a digital readout. Measurements of eight parameters were taken at twelve selected sample locations in two different seasons within two months. The ambient air quality investigation in Anambra State yielded the overall mean concentrations of gaseous emissions at the twelve (12) locations. The mean gaseous emissions were NO2 = 0.66 ppm, SO2 = 0.30 ppm, CO = 43.93 ppm, H2S = 2.17 ppm, CH4 = 1.27 ppm, CFC = 1.59 ppb, CO2 = 316.33 ppm, N2O = 302.67 ppb and O3 = 0.37 ppm. These values do not conform to the National Ambient Air Quality Standard (NAAQS) and thus contribute significantly to global warming. Because some of these gaseous emissions (SO2, NO2) are oxidizing agents, they act as irritants that damage delicate tissues in the eyes and respiratory passages. They can impair lung function and trigger cardiovascular problems as the heart tries to compensate for lack of oxygen by pumping faster and harder. The major sources of air pollution are transportation, industrial processes, stationary fuel combustion and solid waste disposal; thus, much is yet to be done in a developing country like Nigeria. Air pollution control using pollution-control equipment to reduce the major conventional pollutants, relocating people who live very close to dumpsites, and processing and treatment of gases to produce electricity, heat, fuel and various chemical components should be encouraged.

Keywords: ambient air, atmosphere, greenhouse gases, Anambra State

Procedia PDF Downloads 410
10382 Count of Trees in East Africa with Deep Learning

Authors: Nubwimana Rachel, Mugabowindekwe Maurice

Abstract:

Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques. Deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery, with the objective of identifying the most effective model for automated tree counting. We used different deep learning models, namely YOLOv7, SSD and U-Net, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, U-Net demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the U-Net model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
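
A hedged sketch of the counting step that typically follows segmentation: threshold a U-Net probability mask and count connected canopy blobs (the trained model itself is omitted; the mask below is a synthetic stand-in):

```python
import numpy as np
from scipy import ndimage

prob = np.zeros((64, 64))       # stand-in U-Net output probabilities
prob[5:12, 5:12] = 0.9          # three fake canopies
prob[30:40, 28:36] = 0.8
prob[50:58, 45:60] = 0.95

mask = prob > 0.5                          # binarise the probability map
labels, n_trees = ndimage.label(mask)      # connected-component count
print("trees counted:", n_trees)           # -> 3
```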

Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization

Procedia PDF Downloads 49
10381 Evidence on the Nature and Extent of Fall in Oil Prices on the Financial Performance of Listed Companies: A Ratio Analysis Case Study of the Insurance Sector in the UAE

Authors: Pallavi Kishore, Mariam Aslam

Abstract:

The sharp decline in oil prices that started in 2014 affected most economies in the world, either positively or negatively. In some economies, particularly the oil-exporting countries, the effects were felt immediately. The Gulf Cooperation Council (GCC) countries are oil- and gas-dependent, with the largest oil reserves in the world. The UAE (United Arab Emirates) has been striving to diversify away from oil and expects higher non-oil growth in 2018. These two factors, falling oil prices and the economy strategizing away from oil dependence, make a compelling case for studying the financial performance of various sectors in the economy. Among other sectors, the insurance sector is widely recognized as an important indicator of the health of the economy. An expanding population, a surge in construction and infrastructure, increased life expectancy, and greater expenditure on automobiles and other luxury goods all translate to a booming insurance sector; a slow-down of the insurance sector, on the other hand, may indicate a general slow-down in the economy. A study of the insurance sector therefore helps in understanding the general state of the current economy. This study involves calculating and comparing ratios before and after the fall in oil prices in the insurance sector in the UAE. Data for a sample of 33 companies listed on the official stock exchanges of the UAE, the Dubai Financial Market and the Abu Dhabi Stock Exchange, were collected, and empirical analysis was employed to study financial performance before and after the fall in oil prices. Ratios were calculated in five categories: profitability, liquidity, leverage, efficiency, and investment. The pre- and post-fall means are compared, leading to the conclusion that the profitability ratios, including ROSF (Return on Shareholder Funds), ROCE (Return on Capital Employed), and NPM (Net Profit Margin), have all taken a hit. Parametric tests, including a paired t-test, conclude that while the fall in profitability ratios is statistically significant, the other ratios remained quite stable over the period: the efficiency, liquidity, gearing, and investment ratios were not severely affected by the fall in oil prices. This may be due to the implementation of stronger regulatory policies and is a testimony to the diversification into the non-oil economy. The regulatory authorities can use the findings of this study to ensure transparency in revealing financial information to the public and to employ policies that will further the health of the economy. The study will also help identify which areas within the sector could benefit from more regulation.
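
To make the method concrete, here is a minimal sketch, with hypothetical figures and assumed column names, of computing one profitability ratio per company and running the paired t-test the abstract mentions.

```python
# Illustrative sketch only: the figures and column names are hypothetical,
# not the study's data.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "company": ["A", "B", "C", "D"],
    "net_profit_pre": [12.0, 8.5, 15.2, 6.3],
    "revenue_pre": [100.0, 90.0, 120.0, 70.0],
    "net_profit_post": [7.1, 6.0, 9.8, 4.1],
    "revenue_post": [98.0, 92.0, 118.0, 69.0],
})

npm_pre = df["net_profit_pre"] / df["revenue_pre"]     # Net Profit Margin, pre-fall
npm_post = df["net_profit_post"] / df["revenue_post"]  # Net Profit Margin, post-fall

t_stat, p_value = stats.ttest_rel(npm_pre, npm_post)   # paired t-test across companies
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```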

Keywords: UAE, insurance sector, ratio analysis, oil price, profitability, liquidity, gearing, investment, efficiency

Procedia PDF Downloads 234
10380 Circular Economy Initiatives in Denmark for the Recycling of Household Plastic Wastes

Authors: Rikke Lybæk

Abstract:

This paper delves into the intricacies of recycling household plastic waste within Denmark, employing an exploratory case study methodology to shed light on the technical, strategic, and market dynamics of the plastic recycling value chain. Focusing on circular economy principles, the research identifies critical gaps and opportunities in recycling processes, particularly regarding plastic packaging waste derived from households, with a notable absence of food packaging reuse initiatives. The study uncovers the predominant practice of downcycling in the current value chain, underscoring a disconnect between the potential for high-quality plastic recycling and the market's readiness to embrace such materials. Through a detailed examination of three leading companies in Denmark's plastic industry, the paper highlights the existing support for recycling initiatives, yet points to the necessity of assured quality in sorted plastics to foster broader adoption. The analysis further explores the importance of reuse strategies to complement recycling efforts, aiming to alleviate the pressure on virgin feedstock. The paper ventures into future perspectives, discussing approaches such as biological degradation methods, watermark technology for plastic traceability, and the potential for bio-based and PtX plastics. These avenues promise not only to enhance recycling efficiency but also to contribute to a more sustainable circular economy by reducing reliance on virgin materials. Despite the challenges outlined, the research demonstrates a burgeoning market for recycled plastics within Denmark, propelled by both environmental considerations and customer demand. However, the study also calls for a more harmonized and effective waste collection and sorting system to raise the quality and quantity of recyclable plastics. By casting a spotlight on successful case studies and potential technological advancements, the paper advocates for a multifaceted approach to plastic waste management, encompassing not only recycling but also innovative reuse and reduction strategies. In conclusion, this study underscores the urgent need for innovative, coordinated efforts in the recycling and management of plastic waste to move towards a more sustainable and circular economy in Denmark. It calls for comprehensive strategies that include improving recycling technologies, enhancing waste collection systems, and fostering a market environment that values recycled materials, thereby contributing significantly to environmental sustainability goals.

Keywords: case study, circular economy, Denmark, plastic waste, sustainability, waste management

Procedia PDF Downloads 61
10379 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles, and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Given the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: the PID position form (1 DOF) and the PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modelled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important because of how frequently and suddenly the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft by sudden load changes can be modelled as a disturbance that must be rejected to ensure reference tracking. Results show that 2 DOF PID controllers exhibit high performance on the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages that the state-space representation provides for modelling MIMO systems, such controllers are expected to be easy to tune for disturbance rejection, assuming an experienced designer. An in-depth, multi-dimensional analysis of the preliminary results indicates that the state feedback method is more satisfactory, but the PID method is easier to implement in most control applications.
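
For concreteness, a minimal sketch of the discrete PID position form (1 DOF) the abstract compares is given below; the backward-difference discretisation, the gains, and the toy plant model are assumptions rather than the authors' design.

```python
# Sketch of a discrete PID in position form, under an assumed sample time
# and backward-difference discretisation; not the authors' implementation.
class DiscretePID:
    def __init__(self, kp: float, ki: float, kd: float, ts: float):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference: float, measurement: float) -> float:
        error = reference - measurement
        self.integral += error * self.ts                  # rectangular integration
        derivative = (error - self.prev_error) / self.ts  # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: track a step reference against a crude first-order placeholder plant.
pid = DiscretePID(kp=2.0, ki=0.5, kd=0.05, ts=0.001)
position = 0.0
for _ in range(1000):
    u = pid.update(reference=1.0, measurement=position)
    position += 0.001 * (u - position)  # placeholder plant dynamics, not a motor model
```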

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 255
10378 Exploring the Contribution of Dynamic Capabilities to a Firm's Value Creation: The Role of Competitive Strategy

Authors: Mona Rashidirad, Hamid Salimian

Abstract:

Dynamic capabilities, among the most important capabilities of firms in the current fast-moving economy, may not be sufficient for performance improvement, but their contribution to performance is undeniable. While much of the extant literature investigates the impact of dynamic capabilities on organisational performance, little attention has been devoted to understanding whether and how dynamic capabilities create value. Dynamic capabilities, as the mirror of competitive strategies, should enable firms to search for and seize new ideas, and to integrate and coordinate the firm’s resources and capabilities in order to create value. A careful review of the existing knowledge base leaves the relationship among competitive strategies, dynamic capabilities, and value creation puzzlingly unclear. This study thus attempts to fill this gap by empirically investigating the impact of dynamic capabilities on value creation and the mediating effect of competitive strategy on this relationship. We aim to contribute to the dynamic capability view (DCV), in both theoretical and empirical senses, by exploring the impact of dynamic capabilities on firms’ value creation and whether competitive strategy can play any role in strengthening or weakening this relationship. Using a sample of 491 firms in the UK telecommunications market, the results demonstrate that dynamic sensing, learning, integrating, and coordinating capabilities play a significant role in a firm’s value creation, and that competitive strategy mediates the impact of dynamic capabilities on value creation. Adopting the DCV, this study investigates whether the value generated from dynamic capabilities depends on a firm’s competitive strategy. It argues that a firm’s competitive strategy can mediate its ability to derive value from its dynamic capabilities, and it explains the extent to which a firm’s competitive strategy may influence its value generation. The results of the dynamic capabilities-value relationships support our expectations and justify the non-financial value added of the four dynamic capability processes in a highly turbulent market, such as UK telecommunications. Our analytical findings on the relationship among dynamic capabilities, competitive strategy, and value creation provide further evidence of the undeniable role of competitive strategy in deriving value from dynamic capabilities. The results reinforce the argument for considering the mediating impact of organisational contextual factors, such as a firm’s competitive strategy, to examine how they interact with dynamic capabilities to deliver value. The findings provide significant contributions to theory: unlike some previous studies which conceptualise dynamic capabilities as a unidimensional construct, this study demonstrates the benefits of understanding the detailed links among the four types of dynamic capabilities, competitive strategy, and value creation. In terms of contributions to managerial practice, this research draws attention to the importance of competitive strategy in conjunction with the development and deployment of dynamic capabilities to create value. Managers are now equipped with solid empirical evidence which explains why the DCV has become essential to firms in today’s business world.
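
As a sketch of the kind of mediation logic the abstract tests, the snippet below runs a simple two-step mediation check on synthetic data; the variable construction, effect sizes, and regression setup are assumptions, not the study's measures or analysis.

```python
# Illustrative mediation check on synthetic data; only the sample size (491)
# is taken from the abstract, everything else is assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 491                                    # matches the reported sample size
dc = rng.normal(size=n)                    # dynamic capabilities score (assumed)
strategy = 0.6 * dc + rng.normal(size=n)   # competitive strategy as mediator
value = 0.4 * strategy + 0.2 * dc + rng.normal(size=n)

# Step 1: total effect of capabilities on value creation.
total = sm.OLS(value, sm.add_constant(dc)).fit()
# Step 2: direct effect after controlling for the mediator.
X = sm.add_constant(np.column_stack([dc, strategy]))
direct = sm.OLS(value, X).fit()
print(total.params, direct.params)  # attenuated direct effect suggests mediation
```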

Keywords: dynamic capabilities, resource-based theory, value creation, competitive strategy

Procedia PDF Downloads 233
10377 Study of Clutch Cable Architecture and Its Influence in Efficiency of Mechanical Cable Release System

Authors: M. Devamanalan, K. Pothiraj, M. Sudhan

Abstract:

In a competitive market like India, any system is expected to deliver equally on performance and durability. In general, a vehicle has multiple sub-systems such as the powertrain, body-in-white (BIW), brakes, actuation, suspension, and seats. To withstand market challenges, the contribution of each sub-system is vital: the malfunction of any one sub-system directly affects the performance of the overall system, leading to dissatisfaction for the end user. The powertrain system consists of several sub-systems, among which the clutch is one of the prime sub-systems in manual-transmission (MT) vehicles, enabling smooth gear shifts through proper clutch disengagement and engagement. Most vehicles have a mechanical, semi-hydraulic, or fully hydraulic clutch release system, whereas in small commercial vehicles (SCVs) the mechanical cable release system is the most widely used because of its lower cost and functional adequacy. The major bottlenecks of the cable-type clutch release system are increased pedal effort, caused by growing hysteresis, and hard gear shifting, caused by efficiency loss and cable slackness as the vehicle accumulates mileage. This study focuses mainly on how efficiency and hysteresis change over the vehicle's mileage as a consequence of the design architecture of the outer and inner cables. The study draws on several cable design validation results at the vehicle and rig levels, obtained through defined cable routings and test procedures. The results are compared to identify the most suitable cable design architecture based on higher efficiency and lower hysteresis at the start and end of validation.
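
To pin down the two benchmark quantities, here is a minimal sketch with hypothetical rig-test numbers: efficiency taken as output (clutch-end) load over input (pedal-end) load, and hysteresis as the effort gap between the engagement and release strokes. Both definitions are common conventions, assumed here rather than quoted from the paper.

```python
# Sketch of the two benchmark quantities, using common conventions and
# hypothetical rig-test readings; not the paper's test data.
def cable_efficiency(output_load_n: float, input_load_n: float) -> float:
    # Fraction of pedal-end load transmitted to the clutch end.
    return output_load_n / input_load_n

def hysteresis(engage_effort_n: float, release_effort_n: float) -> float:
    # Effort gap between the engagement and release strokes.
    return engage_effort_n - release_effort_n

print(cable_efficiency(172.0, 200.0))  # ~0.86 when new (hypothetical)
print(cable_efficiency(150.0, 200.0))  # ~0.75 after mileage accumulation
print(hysteresis(200.0, 168.0))        # 32 N effort gap (hypothetical)
```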

Keywords: clutch, clutch cable, efficiency, architecture, cable routing

Procedia PDF Downloads 103