Search results for: corporate performance
8497 Vibration Analysis and Optimization Design of Ultrasonic Horn
Authors: Kuen Ming Shu, Ren Kai Ho
Abstract:
The ultrasonic horn amplifies amplitude and reduces resonant impedance in an ultrasonic system. Its primary function is to amplify deformation or velocity during vibration and to focus ultrasonic energy on a small area, making it a crucial component in the design of ultrasonic vibration systems. There are five common design methods for ultrasonic horns: the analytical method, the equivalent circuit method, the equal mechanical impedance method, the transfer matrix method, and the finite element method. In the typical optimization process, geometric parameters are varied to improve a single performance measure, so the relationship between parameters and objectives cannot be identified. However, a good optimization design must establish the relationship between input and output parameters so that the designer can trade off parameters according to different performance objectives and obtain an optimized design. In this study, an ultrasonic horn provided by Maxwide Ultrasonic Co., Ltd. was used as the baseline for the optimized horn. The ANSYS finite element analysis (FEA) software was used to simulate the distribution of horn amplitudes and the natural frequency. The simulated frequencies agreed closely with the measured values, verifying the accuracy of the simulation. ANSYS DesignXplorer was then used to perform response surface optimization, which reveals the relationship between parameters and objectives. This method can therefore replace traditional experience-based or trial-and-error design, reducing material costs and design cycles.
Keywords: horn, natural frequency, response surface optimization, ultrasonic vibration
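As a minimal illustration of the analytical method mentioned above (not drawn from the paper; the uniform-rod geometry and the steel constants below are assumptions for illustration only), the thin-rod wave speed c = sqrt(E/ρ) fixes the length L = c/(2f) of a uniform half-wavelength horn:

```python
import math

def wave_speed(youngs_modulus_pa, density_kg_m3):
    """Longitudinal (thin-rod) wave speed c = sqrt(E / rho), in m/s."""
    return math.sqrt(youngs_modulus_pa / density_kg_m3)

def half_wave_horn_length(youngs_modulus_pa, density_kg_m3, frequency_hz):
    """Length of a uniform half-wavelength horn tuned to frequency_hz, in m."""
    return wave_speed(youngs_modulus_pa, density_kg_m3) / (2.0 * frequency_hz)

# Generic steel (assumed values) at a 20 kHz working frequency
c = wave_speed(2.1e11, 7850.0)                    # roughly 5.2 km/s
L = half_wave_horn_length(2.1e11, 7850.0, 20e3)   # roughly 0.13 m
```

A real stepped or exponential horn, as optimized in the paper, needs FEA rather than this closed form; the sketch only shows why length and material stiffness dominate the natural frequency.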
Procedia PDF Downloads 117
8496 Virtual Team Performance: A Transactive Memory System Perspective
Authors: Belbaly Nassim
Abstract:
Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the globalization and decentralization pressures companies face when monitoring VT performance. Organizations are regularly limited by misalignment between the team's dispersed competences and knowledge capabilities and by how trust issues interplay with and influence these VT dimensions. In fact, the future success of business depends on the extent to which VTs efficiently manage their dispersed expertise, skills and knowledge to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility and knowledge coordination. A TMS can be understood as the combination of a structural component, residing in individuals' knowledge, and a set of communication processes among individuals. Individual knowledge is shared while being retrieved and applied, and the learning is coordinated. TMS is driven by the central idea that the system is built on the distinction between internal and external memory encoding: a VT learns something new and catalogs it in memory for future retrieval and use. TMS uses the role of information technology to explain VT behavior by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity.
In fact, VTs consist of dispersed expertise, skills and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity, and this study gives researchers insight into using TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.
Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination
Procedia PDF Downloads 174
8495 Tuning the Surface Roughness of Patterned Nanocellulose Films: An Alternative to Plastic Based Substrates for Circuit Printing in High-Performance Electronics
Authors: Kunal Bhardwaj, Christine Browne
Abstract:
With the increase in global awareness of the environmental impacts of plastic-based products, there has been a massive drive to reduce our use of these products, and the use of plastic-based substrates in electronic circuits has recently become a matter of concern. Plastics provide a smooth and cheap surface for printing high-performance electronics because they are non-permeable to ink and easily mouldable. In this research, we explore the use of nanocellulose (NC) films in electronics, as they offer the advantage of being 100% recyclable and eco-friendly. The main hindrance to the mass adoption of NC film as a substitute for plastic is its higher surface roughness, which leads to ink penetration and dispersion in the channels on the film. This research was conducted to tune the RMS roughness of NC films to the range where they can replace plastics in electronics (310-470 nm). We studied the dependence of the surface roughness of the NC film on the following tunable aspects: 1) the composition by weight of the NC suspension sprayed on a silicon wafer, and 2) the width and depth of the channels on the silicon wafer used as a base. Silicon wafers with channel depths ranging from 6 to 18 µm and channel widths ranging from 5 to 500 µm were used as bases. The spray coating method was used for NC film production: two suspensions, 1.5 wt% NC and a 50-50 NC-CNC (cellulose nanocrystal) mixture in distilled water, were sprayed through a Wagner sprayer system model 117 at an angle of 90 degrees, with the silicon wafer kept on a conveyor moving at 1.3 ± 0.1 cm/s. Once the suspension was uniformly sprayed, the mould was left to dry in an oven at 50°C overnight. Images of the films were taken with an optical profilometer, Olympus OLS 5000, converted into the '.lext' format and analyzed using Gwyddion, a data and image analysis software.
The lowest measured RMS roughness, 291 nm, was obtained with the 50-50 CNC-NC mixture sprayed on a silicon wafer with a channel width of 5 µm and a channel depth of 12 µm. Surface roughness values of 320 ± 17 nm were achieved at narrower (5 to 10 µm) channel widths. This research opens the possibility of using 100% recyclable NC films with an additive (50% CNC) in high-performance electronics. The use of additives such as carboxymethyl cellulose (CMC) is also being explored, on the hypothesis that CMC would reduce friction among fibers and in turn lead to better conformations among the NC fibers, allowing the surface roughness of the NC film to be tuned to an even greater extent in future.
Keywords: nanocellulose films, electronic circuits, nanocrystals, surface roughness
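The RMS roughness values reported above can be computed from profilometer height data with a short sketch (the generic Sq formula, not the Gwyddion implementation):

```python
from statistics import fmean
import math

def rms_roughness(heights):
    """Root-mean-square roughness Sq: the standard deviation of the surface
    heights about their mean, in the same units as the input."""
    mean_height = fmean(heights)
    return math.sqrt(fmean((z - mean_height) ** 2 for z in heights))
```

In practice the heights come from the flattened 2D height map of the scanned film; a value inside the 310-470 nm window would indicate a plastic-replaceable surface under the paper's criterion.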
Procedia PDF Downloads 124
8494 The Impact of Hosting an On-Site Vocal Concert in Preschool on Music Inspiration and Learning Among Preschoolers
Authors: Meiying Liao, Poya Huang
Abstract:
The aesthetic domain is one of the six major domains in the Taiwanese preschool curriculum, encompassing visual arts, music, and dramatic play. Its primary objective is to cultivate children’s abilities in exploration and awareness, expression and creation, and response and appreciation. The purpose of this study was to explore the effects of hosting a vocal music concert on aesthetic inspiration and learning among preschoolers in a preschool setting. The primary research method employed was a case study focusing on a private preschool in Northern Taiwan that organized a school-wide event featuring two vocalists. The concert repertoires included children’s songs, folk songs, and arias performed in Mandarin, Hakka, English, German, and Italian. In addition to professional performances, preschool teachers actively participated by presenting a children’s song. A total of 5 classes, comprising approximately 150 preschoolers, along with 16 teachers and staff, participated in the event. Data collection methods included observation, interviews, and documents. Results indicated that both teachers and children thoroughly enjoyed the concert, with high levels of acceptance when the program was appropriately designed and hosted. Teachers reported that post-concert discussions with children revealed the latter’s ability to recall people, events, and elements observed during the performance, expressing their impressions of the most memorable segments. The concert effectively achieved the goals of the aesthetic domain, particularly in fostering response and appreciation. It also inspired preschoolers’ interest in music. Many teachers noted an increased desire for performance among preschoolers after exposure to the concert, with children imitating the performers and their expressions. Remarkably, one class extended this experience by incorporating it into the curriculum, autonomously organizing a high-quality concert in the music learning center. 
Parents also reported that preschoolers enthusiastically shared their concert experiences at home. In conclusion, despite being a single event, the positive responses from preschoolers towards the music performance suggest a meaningful impact. These experiences extended into the curriculum, as firsthand exposure to performances allowed teachers to deepen related topics, fostering a habit of autonomous learning in the designated learning centers.
Keywords: concert, early childhood music education, aesthetic education, music development
Procedia PDF Downloads 49
8493 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data
Authors: Georgiana Onicescu, Yuqian Shen
Abstract:
Due to the complex nature of geo-referenced data, multicollinearity among the risk factors in public health spatial studies is a commonly encountered issue; it inflates the variance in regression analysis and thus lowers parameter estimation accuracy. To address this issue, we proposed a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed variable selection using Bayesian Lasso and several other variable selection approaches. In stage II, we performed model selection with only the variables selected in stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered cases in which all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of binary indicator and Lasso, were compared as alternatives. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method performs best in both the independent and dependent cases considered. Compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection
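A minimal sketch of the stage-I selection idea, assuming a plain (non-Bayesian, non-spatial) Lasso solved by cyclic coordinate descent; the paper's actual Bayesian spatial machinery is not reproduced here:

```python
def soft_threshold(x, lam):
    """Lasso soft-thresholding operator."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Plain Lasso via cyclic coordinate descent. X is a list of rows;
    returns the coefficient vector, with weak features shrunk to exactly 0."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals excluding feature j
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Stage I: keep only features with nonzero coefficients; stage II would
# refit a (spatial) model on this reduced set, as the abstract describes.
```

The exact-zero coefficients are what make Lasso a variable selector; the two-stage scheme then re-estimates using only the survivors.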
Procedia PDF Downloads 144
8492 Effect of Many Levels of Undegradable Protein on Performance, Blood Parameters, Colostrum Composition and Lamb Birth Weight in Pregnant Ewes
Authors: Maria Magdy Danial Riad
Abstract:
The objective of this study was to investigate the effect of protein sources with different degradability ratios, fed during late gestation, on colostrum composition and its IgG concentration, body weight change of ewes, and birth weight of their lambs. Thirty-five multiparous native crossbred ewes (BW = 59 ± 2.5 kg) were randomly allocated to five dietary treatments (7 ewes/treatment) for 2 months prior to lambing. Methods: Experimental diets were isonitrogenous (12.27% CP) and isocaloric (2.22 Mcal ME/kg DM). Diet I (the control) used solvent-extracted soybean meal (SESM, 33% RUP of CP), diet II feed grade urea (FGU, 31% RUP), and diet III slow-release urea (SRU, 31% RUP). As sources of undegradable protein, extruded expeller SBM-EESM 40 (37% RUP) and extruded expeller SBM-EESM 60 (41% RUP) were used in groups IV and V, respectively. Results showed no significant effect on feed intake, crude protein (CP) intake, metabolizable energy (ME) intake, or body condition score (BCS). Ewes fed the 37% RUP diet gained more weight (p < 0.05) than ewes fed the 31% RUP diet (5.62 vs. 2.5 kg). Ewes in the EESM 60 group had the highest levels of fat, protein, total solids, solids-not-fat, and immunoglobulin, and the lowest urea N content (P < 0.05), in colostrum during the first 24 h after lambing. Conclusions: Protein source and RUP level in the ewes' diets had no significant effect (P > 0.05) on lambs' birth weight or ewes' blood biochemical parameters. Increasing the RUP content of the diet during late gestation increased colostrum constituents and IgG level but had no effect on ewes' performance or their lambs' outcome.
Keywords: colostrum, ewes, lamb output, pregnancy, undegradable protein
Procedia PDF Downloads 50
8491 An Approach towards Designing an Energy Efficient Building through Embodied Energy Assessment: A Case of Apartment Building in Composite Climate
Authors: Ambalika Ekka
Abstract:
In today's world, the growing demand for urban built forms has resulted in increased production and consumption of building materials, i.e. embodied energy in building construction, leading to pollution and greenhouse gas (GHG) emissions. New buildings therefore offer a unique opportunity to implement more energy-efficient design without compromising building performance. The embodied energy of building materials forms the major contribution to the embodied energy of buildings. This paper presents an approach to designing an energy-efficient apartment building through embodied energy assessment. It discusses the trend of residential development in Rourkela through three case studies of contemporary houses, examining architectural elements, number of storeys, predominant material use and plot sizes using primary data, which results in the identification of the predominant materials and other characteristics of the urban area. Further, the embodied energy coefficients of the dominant building materials and of alternative materials manufactured by Indian industry are taken from secondary sources, i.e. the literature. The paper analyses embodied energy by estimating the materials and operational energy of the proposed building, then alters the specifications of the materials for each building component, i.e. walls, flooring, windows, insulation and roof, through the Res Build India software, and compares the different options against sustainability parameters. The analysis shows that only the autoclaved aerated concrete block option reaches the energy performance index benchmark of 69.35 kWh/m2 yr, saving 4% of operational energy; as embodied energy has no particular index, it is noted that, of all the materials, this option has the highest EE of 23,206,202.43 MJ.
Keywords: energy efficient, embodied energy, EPI, building materials
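The two quantities compared in the paper can be stated as short generic formulas (the Res Build India software's internals are not reproduced, and the example inputs in the test are hypothetical):

```python
def embodied_energy_mj(quantities_kg, coefficients_mj_per_kg):
    """Total embodied energy: sum over materials of quantity x EE coefficient."""
    return sum(q * c for q, c in zip(quantities_kg, coefficients_mj_per_kg))

def energy_performance_index(annual_operational_kwh, floor_area_m2):
    """EPI in kWh/m2/yr: annual operational energy over conditioned floor area,
    compared against a benchmark such as the 69.35 kWh/m2/yr cited above."""
    return annual_operational_kwh / floor_area_m2
```

The trade-off the paper highlights is visible here: swapping a material changes both the coefficient sum (embodied energy) and the operational term (EPI), and the two need not move in the same direction.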
Procedia PDF Downloads 197
8490 Evaluation of Fracture Resistance and Moisture Damage of Hot Mix Asphalt Using Plastic Coated Aggregates
Authors: Malleshappa Japagal, Srinivas Chitragar
Abstract:
The use of waste plastic in pavement is becoming an important alternative worldwide, both for disposing of plastic and for improving the stability of pavement, while addressing environmental issues. However, there are still concerns about the fatigue and fracture resistance of hot mix asphalt with added plastic waste (HMA-plastic mixes) and about its moisture damage potential. The present study evaluated the fracture resistance of HMA-plastic mixes using the semi-circular bending (SCB) test and their moisture damage potential using the indirect tensile strength (ITS) test expressed as the retained tensile strength ratio (TSR). A dense graded asphalt mix with 19 mm nominal maximum aggregate size was designed in the laboratory using the Marshall mix design method. Aggregates were coated with different percentages of waste plastic (0%, 2%, 3% and 4%) by weight of aggregate, and fracture resistance and moisture damage were evaluated. The following parameters were estimated for the mixes: the J-integral (Jc), strain energy at failure, peak load at failure, and deformation at failure. It was found that the strain energy and peak load of all the mixes decrease with an increase in notch depth, and that an increased percentage of plastic waste gave better fracture resistance. The moisture damage potential was evaluated by the tensile strength ratio (TSR). The experimental results showed an increased TSR value up to 3% waste plastic in the HMA mix, which gives better performance; hence the use of waste plastic in road construction is favorable.
Keywords: hot mix asphalt, semi-circular bending, Marshall mix design, tensile strength ratio
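The strain energy and Jc parameters can be sketched as follows (a common SCB evaluation, assumed here as an illustration rather than taken from the paper's exact procedure; the specimen dimensions in the test are hypothetical):

```python
def strain_energy(deflection_mm, load_kn):
    """Strain energy to failure: area under the load-deflection curve up to
    the peak, by the trapezoidal rule. kN x mm inputs give joules."""
    energy = 0.0
    for i in range(1, len(deflection_mm)):
        energy += 0.5 * (load_kn[i] + load_kn[i - 1]) * \
                  (deflection_mm[i] - deflection_mm[i - 1])
    return energy

def critical_j_integral(u1_j, u2_j, a1_mm, a2_mm, thickness_mm):
    """Jc approximated from the strain energies of two specimens with
    different notch depths: Jc = -(1/b) * dU/da, returned in kJ/m^2."""
    dU_da = (u2_j - u1_j) / (a2_mm - a1_mm)   # J/mm
    return -dU_da / thickness_mm * 1000.0      # J/mm^2 -> kJ/m^2
```

Since strain energy decreases with notch depth (as the abstract reports), dU/da is negative and Jc comes out positive; a larger Jc indicates better fracture resistance.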
Procedia PDF Downloads 306
8489 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas
Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards
Abstract:
Airborne laser scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, and spatial analysis. A laser scanning system generates an irregularly spaced three-dimensional point cloud. Raw ALS data consist of ground points (representing the bare earth) and non-ground points (representing buildings, trees, cars, etc.), and removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data, and the presented filter uses a weight function to allocate a weight to each data point. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate when compared against reference terrain data, and the performance of the method is stable across all the heavily forested data samples, with an average root mean square error (RMSE) of 0.35 m.
Keywords: airborne laser scanning, digital terrain models, filtering, forested areas
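A hedged sketch of the weighting idea on a 1D elevation profile, with a windowed median standing in for the smoothing-spline surface (a deliberate simplification of the paper's method; the window and threshold are illustrative assumptions):

```python
from statistics import median

def ground_filter(z, window=3, n_iter=2, threshold=1.0):
    """Classify points of a 1D elevation profile z as ground / non-ground.
    A windowed median of currently trusted points approximates the fitted
    terrain surface; points lying more than `threshold` above the surface
    are down-weighted as canopy/off-terrain returns. Returns ground indices."""
    n = len(z)
    weights = [1.0] * n
    surface = list(z)
    for _ in range(n_iter):
        new_surface = []
        for i in range(n):
            lo, hi = max(0, i - window), min(n, i + window + 1)
            trusted = [z[j] for j in range(lo, hi) if weights[j] > 0.5]
            new_surface.append(median(trusted) if trusted else surface[i])
        surface = new_surface
        # residual-based weight function: high residuals are vegetation hits
        weights = [1.0 if z[i] - surface[i] <= threshold else 0.0
                   for i in range(n)]
    return [i for i in range(n) if weights[i] > 0.5]
```

The real filter fits a genuine smoothing spline in 2D and uses a continuous weight function, but the loop structure (fit surface, reweight by residual, repeat) is the same.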
Procedia PDF Downloads 139
8488 Using Data Mining in Automotive Safety
Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler
Abstract:
Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupant in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A knowledge discovery in databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on the data emerging from sled tests according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, aiming at reducing the number of tests.
Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact
Procedia PDF Downloads 382
8487 Membrane Distillation Process Modeling: Dynamical Approach
Authors: Fadi Eleiwi, Taous Meriem Laleg-Kirati
Abstract:
This paper presents a complete dynamic model of a membrane distillation process. The model contains two consistent dynamic components: a 2D advection-diffusion equation for the whole process and a modified heat equation for the membrane itself. The complete model describes temperature diffusion across the feed, membrane, permeate containers and membrane boundary layers, giving an online and complete temperature profile for each point in the domain. It explains the heat conduction and convection mechanisms that take place inside the process in terms of mathematical parameters, and justifies process behavior during the transient and steady state phases. The process can be monitored for any sudden change in performance at any instant of time. In addition, the model assists in maintaining production rates as desired and gives recommendations during membrane fabrication stages. System performance and parameters can be optimized and controlled using this complete dynamic model. The evolution of membrane boundary temperature with time, vapor mass transfer along the process, and the temperature difference between membrane boundary layers are depicted. Simulations were performed over the complete model with real membrane specifications. The plots show consistency between the 2D advection-diffusion model, the expected behavior of the system, and the literature. The evolution of heat inside the membrane, from the transient response until the steady state, is illustrated for fixed and varying times.
Keywords: membrane distillation, dynamical modeling, advection-diffusion equation, thermal equilibrium, heat equation
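As an illustration of how such a model is advanced in time, here is a 1D explicit finite-difference stand-in for the advection-diffusion equation u_t + v u_x = α u_xx (not the paper's 2D solver; the velocity, diffusivity and boundary treatment are placeholder assumptions):

```python
def advection_diffusion_step(u, velocity, alpha, dx, dt):
    """One explicit (FTCS) time step of u_t + v * u_x = alpha * u_xx on a
    1D grid with fixed (Dirichlet) boundary temperatures.
    Central differences for both the advective and diffusive terms."""
    new = list(u)
    for i in range(1, len(u) - 1):
        advection = -velocity * (u[i + 1] - u[i - 1]) / (2.0 * dx)
        diffusion = alpha * (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2
        new[i] = u[i] + dt * (advection + diffusion)
    return new
```

Iterating this step from an initial temperature profile reproduces the transient-to-steady-state evolution the paper plots; a production solver would use the full 2D domain and a stability-checked time step.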
Procedia PDF Downloads 272
8486 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of the process dispersion and location parameters, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data are normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of the traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g. occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, estimators robust to the contaminations that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts, using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function to estimate the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and M-estimators of location with Huber and logistic psi-functions to estimate the process location parameter.
The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. We find that the robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups, and employing different combinations of dispersion and location estimators on subgroups and individual observations, are found to improve the performance of Xbar charts.
Keywords: average run length, M-estimators, quality control, robust estimators
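Two of the robust estimators named above can be sketched as chart inputs: the Hodges-Lehmann location estimate, and a MAD-based scale used here in place of Qn for brevity (an assumed simplification, not the paper's full estimator set):

```python
from statistics import median
import math

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of pairwise (Walsh) averages."""
    n = len(x)
    return median((x[i] + x[j]) / 2.0 for i in range(n) for j in range(i, n))

def mad_scale(x, consistency=1.4826):
    """Median absolute deviation, scaled for consistency at the normal."""
    m = median(x)
    return consistency * median(abs(v - m) for v in x)

def robust_xbar_limits(subgroups):
    """3-sigma Xbar limits from robust per-subgroup location/scale estimates,
    pooled by the median so that outlier subgroups cannot dominate Phase I."""
    n = len(subgroups[0])
    center = median(hodges_lehmann(g) for g in subgroups)
    sigma = median(mad_scale(g) for g in subgroups)
    half_width = 3.0 * sigma / math.sqrt(n)
    return center - half_width, center, center + half_width
```

A contaminated subgroup shifts the conventional grand mean and pooled standard deviation directly, but barely moves these median-pooled robust estimates, which is the effect the simulation study quantifies.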
Procedia PDF Downloads 190
8485 Design and Performance Evaluation of Plasma Spouted Bed Reactor for Converting Waste Plastic into Green Hydrogen
Authors: Palash Kumar Mollick, Leire Olazar, Laura Santamaria, Pablo Comendador, Gartzen Lopez, Martin Olazar
Abstract:
The average calorific value of mixed waste plastic is approximately 38 MJ/kg. The present work aims to extract the maximum possible energy from a mixture of waste plastics using a DC thermal plasma in a spouted bed reactor. The plasma pyrolysis and steam reforming process has shown potential to generate hydrogen from plastic while producing dioxins and furans, the carcinogenic gases, at levels well below the legal limits. A spouted bed pyrolysis reactor can continuously process plastic beads to produce organic volatiles, which later react with steam in the presence of a catalyst to yield syngas. Plasma, being the fourth state of matter, can carry high-impact electrons that help drive the activation of chemical reactions. Computational fluid dynamics (CFD) simulation using the COMSOL Multiphysics software has been performed to evaluate the performance of a plasma spouted bed reactor in producing contamination-free hydrogen, as green energy, from waste plastic beads. The simulation results will showcase a design of a plasma spouted bed reactor for converting plastic waste into green hydrogen in a single-step process. The high-temperature hydrodynamics of the spouted bed with plastic beads and the corresponding temperature distribution inside the reaction chamber will be critically examined for the near-future installation of a demonstration plant.
Keywords: green hydrogen, plastic waste, synthetic gas, pyrolysis, steam reforming, spouted bed, reactor design, plasma, DC plasma, CFD simulation
Procedia PDF Downloads 114
8484 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult. Electricity price has a volatile yet non-random character, so it is possible to identify patterns in the historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been adopted in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution. Forecasting studies have been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in this period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared on their MAE (mean absolute error) and MSE (mean squared error) results. DNN models give better forecasting performance than shallow-ANNs; the best five MAE results for DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.
Keywords: deep learning, artificial neural networks, energy price forecasting, Turkey
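The two comparison metrics used above are straightforward to state in code (generic definitions, not tied to the study's data or models):

```python
def mae(actual, predicted):
    """Mean absolute error: average magnitude of the forecast errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean squared error: penalizes large forecast errors quadratically."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
```

Because MSE squares the residuals, a model that avoids occasional large misses (such as price spikes) can win on MSE while losing slightly on MAE, which is why the study reports both.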
Procedia PDF Downloads 292
8483 Inducing Flow Experience in Mobile Learning: An Experiment Using a Spanish Learning Mobile Application
Authors: S. Jonsson, D. Millard, C. Bokhove
Abstract:
Smartphones are ubiquitous and frequently used as learning tools, which makes the design of educational apps an important area of research. A key issue is designing apps that encourage engagement while maintaining a focus on the educational aspects of the app. Flow experience, a mental state of cognitive absorption and positive emotion, is a promising lens for addressing this issue: it has been shown to be associated with positive emotion and increased learning performance, and studies have shown that immediate feedback is an antecedent to flow. This experiment investigates the effect of immediate feedback on flow experience. An app teaching Spanish phrases was developed, and 30 participants completed both a 10-minute session with immediate feedback and a 10-minute session with delayed feedback. The app contained a task in which the user assembles Spanish phrases by pressing bricks with Spanish words. Immediate feedback was implemented by having incorrect bricks recoil, while correct bricks moved to form part of the finished phrase. In the delayed feedback condition, the user did not know whether the bricks they pressed were correct until the phrase was complete. The level of flow experienced by the participants was measured after each session using the Flow Short Scale. The results showed that higher levels of flow were experienced in the immediate feedback session. It was also found that 14 of the participants indicated that the demands of the task were 'just right' in the immediate feedback session, while only one did in the delayed feedback session. These results have implications for how to design educational technology and open up questions about how flow experience can be used to increase performance and engagement.
Keywords: feedback timing, flow experience, L2 language learning, mobile learning
Procedia PDF Downloads 133
8482 A Validated High-Performance Liquid Chromatography-UV Method for Determination of Malondialdehyde-Application to Study in Chronic Ciprofloxacin Treated Rats
Authors: Anil P. Dewani, Ravindra L. Bakal, Anil V. Chandewar
Abstract:
The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV detection for the in-vivo determination of malondialdehyde, as the malondialdehyde-thiobarbituric acid complex (MDA-TBA), in rats. The HPLC-UV method for MDA-TBA was run in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min−1, followed by UV detection at 278 nm. The chromatographic conditions were optimized by varying the concentration and pH, followed by changes in the percentage of organic phase; the optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile in a ratio of 80:20 % v/v. The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, calculated from the standard deviation and slope of the calibration curve, were 110 ng/ml and 363 ng/ml respectively. The method was linear for MDA spiked in plasma and subjected to derivatization at concentrations ranging from 100 to 1000 ng/ml. The precision of the developed method, measured in terms of relative standard deviation for intra-day and inter-day studies, was 1.6–5.0% and 1.9–3.6% respectively. The HPLC method was applied to monitoring MDA levels in rats subjected to chronic treatment with ciprofloxacin (CFL) (5 mg/kg/day) for 21 days, and the results were compared with findings in control-group rats. The mean peak areas of both study groups were subjected to an unpaired Student's t-test. The p-value was < 0.001, indicating significant results and suggesting increased MDA levels in rats subjected to 21 days of chronic CFL treatment.
Keywords: MDA, TBA, ciprofloxacin, HPLC-UV
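The LOD and LOQ above are derived from the standard deviation and slope of the calibration curve, which corresponds to the standard relations LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation of the regression and S its slope. A minimal sketch of that calculation; the four-point calibration data below are hypothetical, not the paper's:

```python
def calibration_slope_sd(conc, resp):
    """Least-squares calibration line resp = slope*conc + intercept;
    returns the slope S and the residual standard deviation sigma."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(resp) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, resp)) / sxx
    intercept = my - slope * mx
    resid = [y - (slope * x + intercept) for x, y in zip(conc, resp)]
    sd = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # n-2 dof for a line fit
    return slope, sd

def lod_loq(slope, sd):
    """LOD = 3.3*sigma/S, LOQ = 10*sigma/S (same units as conc)."""
    return 3.3 * sd / slope, 10.0 * sd / slope
```

With an illustrative calibration of concentrations [1, 2, 3, 4] against responses [2.1, 3.9, 6.1, 7.9], the fitted slope is 1.96, and the resulting LOD and LOQ are roughly 0.21 and 0.65 in the same concentration units.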
Procedia PDF Downloads 325
8481 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments
Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles
Abstract:
This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called the sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-collision (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.
Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal
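The baseline against which path length and time are compared is the A* algorithm. As a point of reference, a minimal 4-connected grid A* can be sketched as below; the occupancy grid and the Manhattan heuristic are illustrative assumptions, not the paper's experimental setup (which runs in the Argos simulator):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 0/1 occupancy grid (1 = obstacle), 4-connected moves.
    Returns the shortest path length in steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_q = [(h(start), 0, start)]  # (f = g + h, g, cell)
    best = {start: 0}
    while open_q:
        f, g, cur = heapq.heappop(open_q)
        if cur == goal:
            return g
        if g > best.get(cur, float("inf")):
            continue  # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_q, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

On an open 3×3 grid the corner-to-corner path takes 4 steps; adding a wall forces a detour, which is the kind of path-length difference the comparison above measures.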
Procedia PDF Downloads 42
8480 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images
Authors: Eiman Kattan, Hong Wei
Abstract:
In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters of a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. A set of experiments was conducted to quantify the effect of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under test (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments and showed efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dataset-dependent. For the batch size evaluation, a larger batch size slightly decreases the classification accuracy compared to a small batch size.
For example, selecting 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, setting the batch size to 200 gives an accuracy rate of 86.5% at the 11th epoch, and 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produces 70.43%. The final experiment, on image size, shows that accuracy improves with image size, but this performance gain comes at a considerable computational cost. These conclusions open opportunities for better classification performance in various applications such as planetary remote sensing.
Keywords: CNNs, hyperparameters, remote sensing, land cover, land use
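The sweep described above can be organised as a plain grid search over the four hyperparameters. The harness below is a sketch only: `evaluate` is a hypothetical stand-in for training AlexNet and returning a validation accuracy (here a toy analytic surrogate), while the grid values are the ones listed in the abstract.

```python
import itertools

def evaluate(batch_size, kernel_size, image_size, epochs):
    """Hypothetical stand-in for 'train the CNN and return validation
    accuracy'; a real study would call a deep-learning framework here.
    The toy surrogate rewards more epochs (with a cap), penalises large
    batches slightly, and penalises kernels large relative to the image."""
    if kernel_size >= image_size:
        return 0.0  # invalid configuration
    return (min(0.99, 0.5 + 0.04 * epochs)
            - 0.001 * batch_size
            - 0.1 * kernel_size / image_size)

def grid_search():
    batch_sizes = [32, 64, 128, 200]                  # values from the study
    kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
    image_sizes = [64, 96, 128, 180, 224]
    epoch_values = [1, 11]
    best_acc, best_cfg = -1.0, None
    for cfg in itertools.product(batch_sizes, kernel_sizes,
                                 image_sizes, epoch_values):
        acc = evaluate(*cfg)
        if acc > best_acc:
            best_acc, best_cfg = acc, cfg
    return best_cfg, best_acc
```

Under this surrogate the search favours the small batch, small kernel, large image and more epochs; with real training runs, the same loop reproduces the kind of systematic comparison tabulated in the study.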
Procedia PDF Downloads 169
8479 Investigation of the Mechanical Performance of Hot Mix Asphalt Modified with Crushed Waste Glass
Authors: Ayman Othman, Tallat Ali
Abstract:
The successive increase in generated waste materials such as glass has led to many environmental problems. Using crushed waste glass in hot mix asphalt paving has been considered as an alternative to landfill disposal and recycling. This paper discusses the possibility of utilizing crushed waste glass as part of the fine aggregate in hot mix asphalt in Egypt. This is done through evaluation of the mechanical properties of asphalt concrete mixtures mixed with waste glass and determination of the appropriate glass content that can be adopted in asphalt pavement. Four asphalt concrete mixtures with various glass contents, namely 0%, 4%, 8% and 12% by weight of total mixture, were studied. Evaluation of the mechanical properties included Marshall stability, indirect tensile strength, fracture energy and unconfined compressive strength tests. Laboratory testing revealed an enhancement in both the compressive strength and Marshall stability test parameters when crushed glass was added to the asphalt concrete mixtures. This enhancement was accompanied by a very slight reduction in both indirect tensile strength and fracture energy at glass contents up to 8%. Adding more than 8% glass caused a sharp reduction in both indirect tensile strength and fracture energy. Testing results also showed a reduction in the optimum asphalt content when waste glass was used. Measurements of the heat loss rate of asphalt concrete mixtures mixed with glass revealed their ability to hold heat longer than conventional mixtures. This can have useful application in asphalt paving during cold weather or when a long period of post-mix transportation is needed.
Keywords: waste glass, hot mix asphalt, mechanical performance, indirect tensile strength, fracture energy, compressive strength
Procedia PDF Downloads 310
8478 Knowledge Transfer through Entrepreneurship: From Research at the University to the Consolidation of a Spin-off Company
Authors: Milica Lilic, Marina Rosales Martínez
Abstract:
Academic research cannot be oblivious to social problems and needs, so projects that have the capacity for transformation and impact should have the opportunity to go beyond the University circles and bring benefit to society. Apart from patents and R&D research contracts, this opportunity can be achieved through entrepreneurship as one of the most direct tools to turn knowledge into a tangible product. Thus, as an example of good practices, it is intended to analyze the case of an institutional entrepreneurship program carried out at the University of Seville, aimed at researchers interested in assessing the business opportunity of their research and expanding their knowledge on procedures for the commercialization of technologies used at academic projects. The program is based on three pillars: training, teamwork sessions and networking. The training includes aspects such as product-client fit, technical-scientific and economic-financial feasibility of a spin-off, institutional organization and decision making, public and private fundraising, and making the spin-off visible in the business world (social networks, key contacts, corporate image and ethical principles). On the other hand, the teamwork sessions are guided by a mentor and aimed at identifying research results with potential, clarifying financial needs and procedures to obtain the necessary resources for the consolidation of the spin-off. This part of the program is considered to be crucial in order for the participants to convert their academic findings into a business model. Finally, the networking part is oriented to workshops about the digital transformation of a project, the accurate communication of the product or service a spin-off offers to society and the development of transferable skills necessary for managing a business. 
This blended program culminates in a final stage where each team, in an elevator pitch format, presents its research turned into a business model to an experienced jury. The awarded teams receive starting capital for their enterprise and enjoy the opportunity of formally consolidating their spin-off company at the University. The results of the program show that many researchers have little or no knowledge of entrepreneurship skills or of the different ways to turn their research results into a business model with a direct impact on society. Therefore, the described program has been used as an example to highlight the importance of knowledge transfer at the University and the role that this institution should have in providing the tools to promote entrepreneurship within it. Keeping in mind that the University is defined by three main activities (teaching, research and knowledge transfer), it is safe to conclude that the latter, with entrepreneurship as an expression of it, is crucial in order for the other two to fulfil their purpose.
Keywords: good practice, knowledge transfer, a spin-off company, university
Procedia PDF Downloads 146
8477 Heat Transfer Performance of a Small Cold Plate with Uni-Directional Porous Copper for Cooling Power Electronics
Authors: K. Yuki, R. Tsuji, K. Takai, S. Aramaki, R. Kibushi, N. Unno, K. Suzuki
Abstract:
A small cold plate with uni-directional porous copper is proposed for cooling power electronics such as an on-vehicle inverter with heat generation of approximately 500 W/cm². The uni-directional porous copper, with the pores oriented perpendicular to the heat transfer surface, is soldered to a grooved heat transfer surface. This structure enables the cooling liquid to evaporate in the pores of the porous copper and the vapor to then discharge through the grooves. In order to minimize the cold plate, a double flow channel concept is introduced in the design. The cold plate consists of a base plate, a spacer, and a vapor discharging plate, 12 mm in total thickness. The base plate has multiple nozzles of 1.0 mm in diameter for the liquid supply and 4 slits of 2.0 mm in width for vapor discharge, and is attached onto the top surface of the porous copper plate of 20 mm in diameter and 5.0 mm in thickness. The pore size is 0.36 mm and the porosity is 36%. The cooling liquid flows into the porous copper as an impinging jet from the multiple nozzles, and the vapor generated in the pores is discharged through the grooves and the vapor slits outside the cold plate. The heated test section consists of the cold plate described above and a copper heat transfer block with 6 cartridge heaters. The cross section of the heat transfer block is reduced in order to increase the heat flux. The top surface of the block is the grooved heat transfer surface of 10 mm in diameter, to which the porous copper is soldered. The grooves are fabricated like latticework, with a width of 1.0 mm and a depth of 0.5 mm. By embedding three thermocouples in the cylindrical part of the heat transfer block, the temperature of the heat transfer surface and the heat flux are extrapolated at steady state. In this experiment, the flow rate is 0.5 L/min, the flow velocity at each nozzle is 0.27 m/s, and the liquid inlet temperature is 60 °C.
The experimental results prove that, in the single-phase heat transfer regime, the heat transfer performance of the cold plate with the uni-directional porous copper is 2.1 times higher than that without the porous copper, though the pressure loss with the porous copper also becomes higher. As for the two-phase heat transfer regime, the critical heat flux increases by approximately 35% with the uni-directional porous copper, compared with the CHF of the multiple impinging jet flow. In addition, we confirmed that these heat transfer data were much higher than those of the ordinary single impinging jet flow. These data prove the high potential of the cold plate with the uni-directional porous copper, from the viewpoint of not only heat transfer performance but also energy saving.
Keywords: cooling, cold plate, uni-porous media, heat transfer
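The surface temperature and heat flux above are extrapolated from the three embedded thermocouples, which under a steady one-dimensional conduction assumption amounts to a linear fit T(x) plus Fourier's law q = -k dT/dx. A minimal sketch of that extrapolation; the thermocouple positions, readings and conductivity below are hypothetical, not the experiment's data:

```python
def extrapolate_surface(positions_mm, temps_c, k_w_mk, surface_mm=0.0):
    """Least-squares line T(x) through the thermocouple readings, then
    Fourier's law q = -k dT/dx and the extrapolated surface temperature.
    Assumes steady 1D conduction along the block axis (x in mm from the
    heat transfer surface, increasing into the block)."""
    n = len(positions_mm)
    mx = sum(positions_mm) / n
    mt = sum(temps_c) / n
    sxx = sum((x - mx) ** 2 for x in positions_mm)
    slope = sum((x - mx) * (t - mt)
                for x, t in zip(positions_mm, temps_c)) / sxx  # degC per mm
    t_surface = mt + slope * (surface_mm - mx)
    q = -k_w_mk * slope * 1000.0  # W/m^2, since 1 degC/mm = 1000 K/m
    return t_surface, q
```

With hypothetical readings of 110, 140 and 170 °C at 2, 5 and 8 mm from the surface in a copper block (k taken as 390 W/m·K), the fit gives a surface temperature of 90 °C and a flux magnitude of 3.9 MW/m², i.e. 390 W/cm², the order of magnitude quoted for the inverter; the negative sign means the flux is directed toward the surface.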
Procedia PDF Downloads 295
8476 Energy Efficiency and Sustainability Analytics for Reducing Carbon Emissions in Oil Refineries
Authors: Gaurav Kumar Sinha
Abstract:
The oil refining industry, significant in its energy consumption and carbon emissions, faces increasing pressure to reduce its environmental footprint. This article explores the application of energy efficiency and sustainability analytics as crucial tools for reducing carbon emissions in oil refineries. Through a comprehensive review of current practices and technologies, this study highlights innovative analytical approaches that can significantly enhance energy efficiency. We focus on the integration of advanced data analytics, including machine learning and predictive modeling, to optimize process controls and energy use. These technologies are examined for their potential to not only lower energy consumption but also reduce greenhouse gas emissions. Additionally, the article discusses the implementation of sustainability analytics to monitor and improve environmental performance across various operational facets of oil refineries. We explore case studies where predictive analytics have successfully identified opportunities for reducing energy use and emissions, providing a template for industry-wide application. The challenges associated with deploying these analytics, such as data integration and the need for skilled personnel, are also addressed. The paper concludes with strategic recommendations for oil refineries aiming to enhance their sustainability practices through the adoption of targeted analytics. By implementing these measures, refineries can achieve significant reductions in carbon emissions, aligning with global environmental goals and regulatory requirements.
Keywords: energy efficiency, sustainability analytics, carbon emissions, oil refineries, data analytics, machine learning, predictive modeling, process optimization, greenhouse gas reduction, environmental performance
Procedia PDF Downloads 31
8475 Flow Field Optimization for Proton Exchange Membrane Fuel Cells
Authors: Xiao-Dong Wang, Wei-Mon Yan
Abstract:
The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to look for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate the heat effects, using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case, with all channel heights and widths set at 1 mm, yields Pcell = 7260 W m−2. The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m−2, an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, which effectively removes liquid water and improves oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured, without a large loss in cell performance, was also tested.
The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm, combined with the optimal channel heights obtained above, yields only a 1.68% loss of current density. The presence of a final diverging channel has a greater impact on cell performance than the fine adjustment of channel width at the simulation conditions studied herein.
Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection
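In the combined procedure, each optimizer step requires a full CFD evaluation of Pcell. The sketch below illustrates the optimizer side only: a simplified Fletcher-Reeves-style conjugate-gradient ascent with finite-difference gradients and a fixed step, applied to an analytic surrogate standing in for the CFD model. The function names and the surrogate are assumptions for illustration, not the authors' code.

```python
def simplified_conjugate_gradient(f, x0, step=0.05, tol=1e-8,
                                  max_iter=200, h=1e-6):
    """Maximise f(x) by conjugate-gradient ascent (Fletcher-Reeves beta),
    with central finite-difference gradients and a fixed step length
    in place of a line search, hence 'simplified'."""
    def grad(x):
        g = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g.append((f(xp) - f(xm)) / (2 * h))
        return g

    x = list(x0)
    g = grad(x)
    d = g[:]  # initial search direction = gradient (ascent)
    for _ in range(max_iter):
        x_new = [xi + step * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        if sum(gi * gi for gi in g_new) < tol:
            return x_new
        beta = sum(gi * gi for gi in g_new) / max(
            sum(gi * gi for gi in g), 1e-30)  # Fletcher-Reeves
        d = [gi + beta * di for gi, di in zip(g_new, d)]
        x, g = x_new, g_new
    return x

def surrogate(heights):
    """Hypothetical smooth stand-in for Pcell(H1..H5), optimum at 0.7 mm
    for every channel; a real run would solve the 3D two-phase model here."""
    return -sum((hi - 0.7) ** 2 for hi in heights)
```

Starting from the basic case of 1 mm everywhere, the ascent converges to the surrogate's optimum; with the CFD model in place of the surrogate, each gradient component costs two flow solves, which is why the number of design variables (H1-H5, W2-W5) is kept small.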
Procedia PDF Downloads 296
8474 Efficiency and Scale Elasticity in Network Data Envelopment Analysis: An Application to International Tourist Hotels in Taiwan
Authors: Li-Hsueh Chen
Abstract:
Efficient operation is more and more important for hotel managers. Unlike the manufacturing industry, hotels cannot store their products. In addition, many hotels provide room service and food and beverage service simultaneously. When the efficiencies of hotels are evaluated, the internal structure should be considered. Hence, based on the operational characteristics of hotels, this study proposes a DEA model to simultaneously assess the efficiencies of the room production division, food and beverage production division, room service division and food and beverage service division. However, not only the enhancement of efficiency but also the adjustment of scale can improve performance. In terms of the adjustment of scale, scale elasticity or returns to scale can help managers make decisions concerning expansion or contraction. In order to construct a reasonable approach to measuring the efficiencies and scale elasticities of hotels, this study builds an alternative variable-returns-to-scale-based two-stage network DEA model, combining parallel and series structures, to explore the scale elasticities of the whole system and of each of the four divisions, based on data from the international tourist hotel industry in Taiwan. The results may provide valuable information on operational performance and scale for managers and decision makers.
Keywords: efficiency, scale elasticity, network data envelopment analysis, international tourist hotel
Procedia PDF Downloads 225
8473 Unsteady Flow Simulations for Microchannel Design and Its Fabrication for Nanoparticle Synthesis
Authors: Mrinalini Amritkar, Disha Patil, Swapna Kulkarni, Sukratu Barve, Suresh Gosavi
Abstract:
Micro-mixers play an important role in lab-on-a-chip applications and micro total analysis systems, where the correct level of mixing must be achieved for any given process. The mixing process can be classified as active or passive according to the use of external energy. The microfluidics literature reports that most work has been done on models of steady laminar flow; the study of unsteady laminar flow, however, is an active area of research at present. Among its wide range of applications, we consider nanoparticle synthesis in micro-mixers. In this work, we have developed a model of unsteady flow to study the mixing performance of a passive micro-mixer for the reactants used in such synthesis. The model is developed in the Finite Volume Method (FVM)-based software OpenFOAM and is tested by carrying out simulations at a Reynolds number of 0.5. The mixing performance of the micro-mixer is investigated using simulated concentration values of the mixed species across the width of the micro-mixer and calculating the variance across a line profile. Experimental validation is done by passing dyes through a Y-shaped micro-mixer fabricated from polydimethylsiloxane (PDMS) polymer and comparing the variances with the simulated ones. Gold nanoparticles are then synthesized in the micro-mixer and collected at two different times, leading to significantly different size distributions. These times match the time scales over which reactant concentrations vary, as obtained from the simulations. Our simulations could thus be used to create design aids for passive micro-mixers used in nanoparticle synthesis.
Keywords: Lab-on-chip, LOC, micro-mixer, OpenFOAM, PDMS
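Mixing quality from a cross-channel line profile is commonly reduced to a single number by normalising the profile's standard deviation against the fully segregated case. A sketch of that calculation; the normalisation convention below is a common one, not necessarily the authors' exact metric:

```python
def mixing_index(profile):
    """Normalised mixing quality from a concentration line profile of a
    0-1 species pair: 0 = completely segregated, 1 = perfectly mixed.
    The worst-case variance for streams at the profile mean c is c*(1-c)."""
    n = len(profile)
    mean = sum(profile) / n
    var = sum((c - mean) ** 2 for c in profile) / n
    var_max = mean * (1.0 - mean)  # fully segregated streams at this mean
    if var_max == 0:
        return 1.0  # a uniform 0 or 1 profile is trivially 'mixed'
    return 1.0 - (var / var_max) ** 0.5
```

A half-and-half segregated profile like [1, 1, 0, 0] scores 0, a uniform [0.5, 0.5, 0.5, 0.5] scores 1, and intermediate profiles fall in between, so the index tracks the variance comparison the abstract describes.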
Procedia PDF Downloads 161
8472 Bi-Liquid Free Surface Flow Simulation of Liquid Atomization for Bi-Propellant Thrusters
Authors: Junya Kouwa, Shinsuke Matsuno, Chihiro Inoue, Takehiro Himeno, Toshinori Watanabe
Abstract:
Bi-propellant thrusters use impinging jets to atomize liquid fuel and oxidizer. The atomized propellants are mixed and combusted through auto-ignition. Simulating the primary atomization phenomenon is therefore important for predicting thruster performance; in particular, the local mixture ratio can be used as an indicator of thrust performance, so it is useful to evaluate it from numerical simulations. In this research, we propose a numerical method that accounts for two liquids and their mixture, and implement it in CIP-LSM, a two-phase flow simulation solver that uses the level-set and MARS methods for interfacial tracking and can predict the local mixture ratio distribution downstream of the impingement point. A new parameter, beta, defined as the volume fraction of one liquid in the mixed liquid within a cell, is introduced, and the solver calculates the advection of beta and the inflow and outflow fluxes of beta for each cell. To validate the solver, we conducted a simple experiment and reproduced it in simulation. The results show that the solver correctly predicts the penetration length of a liquid jet, confirming that it can simulate the mixing of liquids. We then applied the solver to the numerical simulation of impinging jet atomization. The inclination angle of the fan after impingement in the bi-liquid condition agrees reasonably with the theoretical value, and the mixing of the liquids is reproduced in the results. Furthermore, the simulation results clarify that the injection conditions drastically affect the atomization process and the local mixture ratio distribution downstream.
Keywords: bi-propellant thrusters, CIP-LSM, free-surface flow simulation, impinging jet atomization
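The bookkeeping for beta — advection plus per-cell inflow and outflow fluxes — can be illustrated in one dimension with a first-order upwind scheme. This is a sketch of the idea only; CIP-LSM itself uses the higher-order CIP scheme in three dimensions:

```python
def advect_beta(beta, velocity, dx, dt, steps):
    """First-order upwind advection of the liquid volume fraction beta on a
    1D uniform grid with constant positive velocity. Each update subtracts
    the outflow flux and adds the inflow flux from the upwind neighbour;
    the inlet cell beta[0] is held fixed (pure incoming liquid)."""
    c = velocity * dt / dx  # CFL number
    assert 0.0 <= c <= 1.0, "CFL condition violated: reduce dt or velocity"
    b = list(beta)
    for _ in range(steps):
        b = [b[0]] + [b[i] - c * (b[i] - b[i - 1]) for i in range(1, len(b))]
    return b
```

Upwinding keeps beta bounded in [0, 1], which is the physically meaningful range for a volume fraction; at a CFL number of exactly 1 the scheme reduces to a pure shift of the profile by one cell per step.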
Procedia PDF Downloads 279
8471 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts in this area have produced algorithms for eliminating document skew. Skew angle correction algorithms can be compared on performance criteria, the most important being accuracy of skew angle detection, range of detectable skew angles, processing speed, computational complexity and, consequently, memory space used. The standard Hough Transform has successfully been applied to text document skew angle estimation. However, its accuracy depends largely on how fine the angular step size is, so higher accuracy costs more time and memory, especially where the number of pixels is considerably large. Whenever the Hough Transform is used, there is a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts by estimating the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy.
Then, to increase accuracy, if the angle estimated by the basic Hough algorithm is x degrees, we rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction algorithm for text images is implemented in MATLAB. The memory space and processing time estimates are also tabulated, with skew angles assumed to lie between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
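The coarse-to-fine idea — a whole-degree sweep first, then one extra decimal place per pass around the current estimate — can be sketched independently of the Hough accumulator. The version below scores each candidate angle by the concentration of the rotated pixels' horizontal projection profile, a common skew measure used here purely for illustration; the paper's own scoring is the Hough vote count:

```python
import math
from collections import Counter

def projection_score(points, angle_deg):
    """Concentration (sum of squared bin counts) of the horizontal
    projection profile after rotating the black-pixel coordinates by
    -angle_deg; maximal when text lines align with the projection rows."""
    a = math.radians(angle_deg)
    rows = Counter(round(-x * math.sin(a) + y * math.cos(a))
                   for x, y in points)
    return sum(c * c for c in rows.values())

def coarse_to_fine_skew(points, lo=-45.0, hi=45.0, passes=3):
    """Whole-degree sweep first, then one extra decimal place of angular
    resolution per pass, searching only near the current best estimate."""
    step = 1.0
    candidates = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    best = max(candidates, key=lambda a: projection_score(points, a))
    for _ in range(passes - 1):
        step /= 10.0
        candidates = [best + k * step for k in range(-9, 10)]
        best = max(candidates, key=lambda a: projection_score(points, a))
    return best
```

On synthetic text lines skewed by 3.5°, three passes recover the angle to within a few hundredths of a degree while evaluating roughly 91 + 19 + 19 candidate angles, instead of the 9001 a flat 0.01° sweep of the same range would need — the space/time saving the paper describes.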
Procedia PDF Downloads 159
8470 Institutional Cooperation to Foster Economic Development: Universities and Social Enterprises
Authors: Khrystyna Pavlyk
Abstract:
In the OECD countries, the percentage of adults with higher education degrees increased by 10% during 2000-2010. Continuously increasing demand for higher education gives universities a chance of becoming key players in the socio-economic development of a territory (region or city) via knowledge creation, knowledge transfer, and knowledge spillovers. During the previous decade, universities have tried to support spin-offs and start-ups and have introduced courses on sustainability and corporate social responsibility. While much has been done, new trends are starting to emerge in search of better approaches. Recently, a number of universities have created centers that conduct research in the field of social entrepreneurship, which in turn underpin educational programs run at these universities. The list includes, but is not limited to, the Centre for Social Economy at the University of Liège, the Institute for Social Innovation at ESADE, the Skoll Centre for Social Entrepreneurship at Oxford, the Centre for Social Entrepreneurship at Roskilde, and the Social Entrepreneurship Initiative at INSEAD. The existing literature has already examined social entrepreneurship centers in terms of position in the institutional structure, initial and additional funding, teaching initiatives, research achievements, and outreach activities. At the same time, universities can become social enterprises themselves. Previous research revealed that universities use both business and social entrepreneurship models, and that universities mainly driven by a social mission are more likely to transform into social entrepreneurial institutions. Currently, however, there is no clear understanding of what social entrepreneurship in higher education is about, and thus it needs to be studied and promoted at the same time.
The main roles a socially oriented university can play in city development include: buyer (implementation of socially focused local procurement programs creates partnerships focused on local sustainable growth); seller (centers created by universities can sell socially oriented goods and services, e.g. in consultancy); employer (universities can employ socially vulnerable groups); and business incubator (helping current students to start their social enterprises). In the paper, we analyze these in more detail. We also examine a number of indicators that can be used to assess the impact, both direct and indirect, that universities can have on a city's economy. The originality of this paper lies not in the methodological approaches used but in the countries evaluated: social entrepreneurship is still treated as a relatively new phenomenon in post-transitional countries, where social services were provided only by the state for many decades. The paper provides data and examples both from developed countries (the US and EU) and from those located in the CIS and CEE regions.
Keywords: social enterprise, university, regional economic development, comparative study
Procedia PDF Downloads 254
8469 Count of Trees in East Africa with Deep Learning
Authors: Nubwimana Rachel, Mugabowindekwe Maurice
Abstract:
Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques; deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting from satellite imagery, in both forest and non-forest areas. The objective is to identify the most effective model for automated tree counting. We used different deep learning models, such as YOLOv7, SSD, and UNET, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, UNET demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization
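A segmentation model such as UNET outputs a per-pixel mask rather than a count. A common post-processing step — assumed here for illustration, not stated in the abstract — is to label connected components in the binary mask and count each component as one crown:

```python
def count_components(mask):
    """Count 4-connected components of 1-pixels in a binary mask using an
    iterative flood fill; with a tree/no-tree segmentation map, each
    component is counted as one crown (touching crowns would merge,
    which is why instance-aware methods are sometimes preferred)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y + 1, x), (y - 1, x),
                                   (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

On a small mask with three separate blobs the function returns 3; in practice the same labelling is done with an image-processing library over the full tile.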
Procedia PDF Downloads 72
8468 Evidence on the Nature and Extent of Fall in Oil Prices on the Financial Performance of Listed Companies: A Ratio Analysis Case Study of the Insurance Sector in the UAE
Authors: Pallavi Kishore, Mariam Aslam
Abstract:
The sharp decline in oil prices that started in 2014 affected most economies in the world, either positively or negatively. In some economies, particularly the oil-exporting countries, the effects were felt immediately. The Gulf Cooperation Council (GCC) countries are oil- and gas-dependent and hold the largest oil reserves in the world. The United Arab Emirates (UAE) has been striving to diversify away from oil and expects higher non-oil growth in 2018. These two factors, falling oil prices and the economy's strategic move away from oil dependence, make a compelling case for studying the financial performance of various sectors in the economy. Among other sectors, the insurance sector is widely recognized as an important indicator of the health of the economy. An expanding population, a surge in construction and infrastructure, increased life expectancy, and greater expenditure on automobiles and other luxury goods translate into a booming insurance sector; a slow-down in the insurance sector, on the other hand, may indicate a general slow-down in the economy. A study of the insurance sector therefore helps in understanding the general state of the current economy. This study calculates and compares financial ratios before and after the fall in oil prices in the insurance sector in the UAE. Data for a sample of 33 companies listed on the official stock exchanges of the UAE, the Dubai Financial Market and the Abu Dhabi Stock Exchange, were collected, and empirical analysis was employed to study financial performance before and after the fall in oil prices. Ratios were calculated in five categories: profitability, liquidity, leverage, efficiency, and investment. Comparing the pre- and post-fall means shows that the profitability ratios, including ROSF (Return on Shareholder Funds), ROCE (Return on Capital Employed), and NPM (Net Profit Margin), have all declined.
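The three profitability measures named above follow standard textbook definitions, which can be sketched as below. The figures are hypothetical, and the use of net profit in the ROCE numerator is an assumption: the abstract does not specify the paper's exact variable choices.

```python
def profitability_ratios(net_profit, shareholder_funds, capital_employed, revenue):
    # Standard textbook definitions; the paper's exact variable choices
    # (e.g. operating vs. net profit for ROCE) are an assumption here.
    return {
        "ROSF": net_profit / shareholder_funds,   # Return on Shareholder Funds
        "ROCE": net_profit / capital_employed,    # Return on Capital Employed
        "NPM": net_profit / revenue,              # Net Profit Margin
    }

# Hypothetical figures (in AED millions) for a single insurer:
ratios = profitability_ratios(net_profit=50, shareholder_funds=400,
                              capital_employed=500, revenue=1000)
print(ratios)  # {'ROSF': 0.125, 'ROCE': 0.1, 'NPM': 0.05}
```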
Parametric tests, including a paired t-test, conclude that while the fall in the profitability ratios is statistically significant, the other ratios remained stable over the period: the efficiency, liquidity, gearing, and investment ratios were not severely affected by the fall in oil prices. This may be due to the implementation of stronger regulatory policies and is a testimony to the diversification into the non-oil economy. Regulatory authorities can use the findings of this study to ensure transparency in the financial information revealed to the public and to employ policies that further the health of the economy. The study also helps identify which areas within the sector could benefit from more regulation.
Keywords: UAE, insurance sector, ratio analysis, oil price, profitability, liquidity, gearing, investment, efficiency
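The paired comparison described above can be sketched as follows. The net-profit-margin values for five insurers are purely hypothetical, used only to illustrate the mechanics of the test; the paper's own data are not reproduced here.

```python
import math

def paired_t_test(pre, post):
    """Paired t-statistic for pre- vs. post-fall ratio values.

    A minimal sketch of the parametric comparison described in the
    abstract; returns the t-statistic and degrees of freedom.
    """
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Hypothetical net profit margins for five insurers, pre- and post-fall:
npm_pre = [0.12, 0.15, 0.09, 0.11, 0.14]
npm_post = [0.08, 0.11, 0.07, 0.09, 0.10]
t_stat, df = paired_t_test(npm_pre, npm_post)
print(round(t_stat, 2), df)  # 6.53 4
```

The t-statistic would then be compared against the critical value for the given degrees of freedom to judge significance, as the paper does for each ratio category.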
Procedia PDF Downloads 245