Search results for: direct steam generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6814

364 Comparative Assessment of Rainwater Management Alternatives for Dhaka City: Case Study of North South University

Authors: S. M. Islam, Wasi Uddin, Nazmun Nahar

Abstract:

Dhaka, the capital of Bangladesh, faces two contrasting problems: excess water during the monsoon season and scarcity of water during the dry season. The first problem stems from rapid urbanization and mismanagement of rainwater, whereas the second is related to climate change and a growing urban population. An inadequate drainage system further worsens the overall water management scenario in Dhaka. Dhaka has a population density of 115,000 people per square mile, which translates into a daily water demand of 2.5 billion liters, 87% of which is met by groundwater. Over-dependency on groundwater has caused the water table to drop more than 200 feet over the last 50 years, and it continues to decline at a rate of 9 feet per year. Given the gravity of the problem, it is high time that practitioners, academics, and policymakers considered different water management practices and examined their cumulative impacts at different scales. The present study assesses different rainwater management options for North South University of Bangladesh and recommends the most feasible and sustainable rainwater management measure. North South University currently accommodates over 20,000 students, faculty members, and administrative staff. To meet the water demand, two deep tube wells bring up approximately 150,000 liters of water every hour; the annual water demand is approximately 103 million liters. Dhaka receives approximately 1800 mm of rainfall every year. For the current study, two academic buildings and one administrative building, comprising 4924 square meters of rooftop area, were selected as the catchment. Rainwater harvesting and groundwater recharge options were analyzed separately. It was estimated that rainwater harvesting could supply a total of 7.2 million liters of water annually for reuse, approximately 7% of the total annual water usage. During the monsoon, rainwater harvesting fulfills 12.2% of the monthly water demand.
The approximate cost of the rainwater harvesting system is estimated at 940,975 BDT (USD 11,500). For direct groundwater recharge, a system comprising one de-siltation tank, two recharge tanks, and one siltation tank was designed, requiring approximately 532,788 BDT (USD 6,500). The payback period is approximately 7 years and 4 months for the groundwater recharge system, whereas the payback period for the rainwater harvesting option is approximately 12 years and 4 months. Based on the cost-benefit analysis, the present study finds the groundwater recharge system to be the most suitable for North South University. The present study also demonstrates that if a single institution like North South University can return a substantial amount of water to the aquifer, bringing other institutions into the network has the potential to create a significant cumulative impact on replenishing the declining groundwater level of Dhaka city. As an additional benefit, it also prevents a large amount of water from being discharged into the storm sewers, which causes severe flooding in Dhaka during the monsoon.
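As a rough cross-check of the figures above, the harvestable volume follows from catchment area times annual rainfall times a runoff coefficient. A minimal sketch, assuming a typical rooftop runoff coefficient of 0.8 (a value not stated in the abstract):

```python
# Back-of-envelope check of the harvestable rainwater volume reported above.
# The runoff coefficient (0.8) is an assumed value for concrete rooftops.
ROOFTOP_AREA_M2 = 4924          # catchment area of the three buildings
ANNUAL_RAINFALL_M = 1.8         # ~1800 mm/year for Dhaka
RUNOFF_COEFF = 0.8              # assumed fraction of rain actually captured

harvest_m3 = ROOFTOP_AREA_M2 * ANNUAL_RAINFALL_M * RUNOFF_COEFF
harvest_million_l = harvest_m3 / 1000          # 1 m3 = 1000 L, scale to million L
annual_demand_million_l = 103                  # annual demand from the abstract
share_pct = 100 * harvest_million_l / annual_demand_million_l

print(f"Harvestable: {harvest_million_l:.1f} million L/year")   # ~7.1 million L
print(f"Share of annual demand: {share_pct:.1f}%")              # ~7%
```

The result (about 7.1 million L/year, roughly 7% of demand) is consistent with the 7.2 million L and 7% reported in the abstract.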

Keywords: Dhaka, groundwater, harvesting, rainwater, recharge

Procedia PDF Downloads 124
363 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using an adjoint method, with the design variables defined within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, combining parametric effectiveness with a differential evolutionary algorithm to create an optimal parameterization. In this manuscript, we show that by parameterizing at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered, and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy, avoiding the limited arithmetic precision and truncation error of finite differences. The parametric effectiveness is then computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which introduces new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed.
However, this assumption can hinder the creation of better parameterizations that would produce more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization in which the parameterization is created using the optimal knot positions provides a more efficient strategy to reach a better optimum than the multilevel optimization in which the position of the knots is arbitrarily assumed.
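The knot-insertion step described above can be sketched as follows. This is a generic cubic B-spline example using Boehm's algorithm, not the authors' LS89 parameterization or CAD kernel; the curve and control points are illustrative:

```python
import numpy as np

def insert_knot(knots, ctrl, degree, u):
    """Boehm's knot insertion: add knot u without changing the curve shape."""
    k = np.searchsorted(knots, u, side='right') - 1   # span: knots[k] <= u < knots[k+1]
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= k - degree:
            new_ctrl.append(ctrl[i])                  # unchanged leading points
        elif i <= k:
            a = (u - knots[i]) / (knots[i + degree] - knots[i])
            new_ctrl.append(a * ctrl[i] + (1 - a) * ctrl[i - 1])  # blended points
        else:
            new_ctrl.append(ctrl[i - 1])              # unchanged trailing points
    return np.insert(knots, k + 1, u), np.array(new_ctrl)

def de_boor(knots, ctrl, degree, u):
    """Evaluate the B-spline at parameter u (de Boor's algorithm)."""
    k = np.searchsorted(knots, u, side='right') - 1
    k = min(max(k, degree), len(ctrl) - 1)            # clamp at the curve ends
    d = [np.asarray(ctrl[j], dtype=float) for j in range(k - degree, k + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            i = j + k - degree
            a = (u - knots[i]) / (knots[i + degree - r + 1] - knots[i])
            d[j] = (1 - a) * d[j - 1] + a * d[j]
    return d[degree]

# Clamped cubic B-spline with 7 control points (illustrative geometry).
knots = np.array([0., 0., 0., 0., 1., 2., 3., 4., 4., 4., 4.])
ctrl = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0], [5, 2], [6, 1]], float)

new_knots, new_ctrl = insert_knot(knots, ctrl, 3, 1.5)
# One more control point, yet the curve itself is unchanged:
err = max(np.linalg.norm(de_boor(knots, ctrl, 3, u) - de_boor(new_knots, new_ctrl, 3, u))
          for u in np.linspace(0, 4, 81))
print(f"Extra control point added; max deviation from original curve: {err:.2e}")
```

The near-zero deviation shows why knot insertion enlarges the design space "whilst preserving the initial shape": the refined spline reproduces the original curve exactly, but the extra control point gives the optimizer a new, more localized degree of freedom.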

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 110
362 Anticancer Potentials of Aqueous Tinospora cordifolia and Its Bioactive Polysaccharide, Arabinogalactan on Benzo(a)Pyrene Induced Pulmonary Tumorigenesis: A Study with Relevance to Blood Based Biomarkers

Authors: Vandana Mohan, Ashwani Koul

Abstract:

Aim: To evaluate the potential of aqueous Tinospora cordifolia stem extract (Aq.Tc) and arabinogalactan (AG) against pulmonary carcinogenesis and associated tumor markers. Background: Lung cancer is one of the most frequent malignancies, with a high mortality rate due to limitations in early detection that result in low cure rates. Current research efforts focus on identifying blood-based biomarkers such as CEA, ctDNA, and LDH, which may have the potential to detect cancer at an early stage and to evaluate therapeutic response and recurrence. Medicinal plants and their active components have been widely investigated for their anticancer potential. The aqueous preparation of T. cordifolia extract is enriched in the polysaccharide fraction, i.e., AG, compared with other types of extract. Moreover, the polysaccharide fraction of T. cordifolia has shown profound anti-metastatic activity in in vitro lung cancer models. However, little has been explored about its effect in in vivo lung cancer models and the underlying mechanism involved. Experimental Design: Mice were randomly segregated into six groups. Group I animals served as controls. Group II animals were administered Aq.Tc extract (200 mg/kg b.w.) p.o. on alternate days. Group III animals were fed AG (7.5 mg/kg b.w.) p.o. on alternate days (thrice a week). Group IV animals were administered Benzo(a)pyrene (50 mg/kg b.w.) i.p. twice within an interval of two weeks. Group V animals received Aq.Tc extract as in group II, with B(a)P administered after two weeks of Aq.Tc administration following the same protocol as for group IV. Group VI animals received AG as in group III, with B(a)P administered after two weeks of AG administration.
Results: Administration of B(a)P to mice resulted in increased tumor incidence, multiplicity, and pulmonary somatic index, with a concomitant increase in serum/plasma markers such as CEA, ctDNA, LDH, and TNF-α. Aq.Tc and AG supplementation significantly attenuated these alterations at different stages of tumorigenesis, thereby showing a potent anti-cancer effect in lung cancer. A more pronounced decrease in serum/plasma markers was observed in animals treated with Aq.Tc than in those fed AG. Extensive hyperproliferation of the alveolar epithelium was prominent in B(a)P-induced lung tumors; however, treatment of lung tumor-bearing mice with Aq.Tc and AG reduced alveolar damage, evident from the decreased number of hyperchromatic irregular nuclei. A direct correlation between the concentration of tumor markers and the severity of lung cancer was observed in tumor-bearing animals co-treated with Aq.Tc and AG. Conclusion: These findings substantiate the chemopreventive potential of Aq.Tc and AG against lung tumorigenesis. Interestingly, Aq.Tc was found to be more effective in modulating the cancer, which may be attributed to the synergism offered by the various components of Aq.Tc. Further studies are in progress to understand the underlying mechanism by which Aq.Tc and AG inhibit lung tumorigenesis.

Keywords: Arabinogalactan, Benzo(a)pyrene B(a)P, carcinoembryonic antigen (CEA), circulating tumor DNA (ctDNA), lactate dehydrogenase (LDH), Tinospora cordifolia

Procedia PDF Downloads 185
361 Achieving Sustainable Agriculture with Treated Municipal Wastewater

Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi

Abstract:

Fresh water is a scarce resource, essential for humans and ecosystems, but its distribution is uneven. Agricultural production accounts for 70% of all surface water supplies. It is projected that, against an expansion of the area equipped for irrigation of 0.6% per year, global potential irrigation water demand will rise by 9.5% during 2021-25. This demand would, on the one hand, have to compete against sharply rising urban water demand; on the other, it would face the threat of climate change, as temperatures rise and crop yields could drop by 10-30% in many large areas. The huge demand for irrigation combined with freshwater scarcity encourages exploring the reuse of wastewater as a resource. However, the use of such wastewater raises safety issues when it is applied injudiciously, or with poor safeguards, to food crops. Paddy is one of the major crops globally and among the most important in South Asia and Africa. In many parts of the world, the use of municipal wastewater has been promoted as a viable irrigation option. In developing and fast-growing countries like India, steadily increasing wastewater generation rates may allow this option to be considered quite seriously. In view of this, a pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant situated in the Haridwar town of Uttarakhand state, India. The objectives of the present study were to study the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121, and PB-1509) and the emission of greenhouse gases (CO2, CH4, and N2O), as compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which enumerates the various types of water footprints of an agricultural entity from its production to processing stages.
Paddy, the most water-demanding staple crop of Uttarakhand state, displayed a high green water footprint of 2966.5 m3/ton. Most of the wastewater-irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed reductions of 6% and 3%, respectively, due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/ml nitrate), phosphorus (54.24 mg/ml), and potassium (9.78 mg/ml), thus rejuvenating the soil and requiring no external nutritional supplements. The percentage increase in greenhouse gas emissions under irrigation with treated municipal wastewater, relative to control plots, was 0.4%-8.6% (CH4), 1.1%-9.2% (CO2), and 0.07%-5.8% (N2O). The variety Sharbati displayed the maximum production (5.5 ton/ha) and emerged as the variety most resistant to pests and insects. The emission values of CH4, CO2, and N2O were 729.31 mg/m2/d, 322.10 mg/m2/d, and 400.21 mg/m2/d under water-stagnant conditions. This study highlights a successful possibility of reusing wastewater for non-potable purposes, offering the potential to replace or reduce existing use of freshwater sources in the agricultural sector.
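A crop's water footprint in the framework referenced above is crop water use divided by yield. A minimal sketch, in which the green crop water use figure is a hypothetical illustration value chosen only to show how a footprint like the ~2966 m3/ton above is obtained (only the 5.5 ton/ha yield comes from the abstract):

```python
# Water footprint of a crop: WF = CWU / Y, where CWU is crop water use
# (m3/ha) over the growing season and Y is the yield (ton/ha).
cwu_green_m3_per_ha = 16316.0   # hypothetical green (rain-fed) crop water use
yield_ton_per_ha = 5.5          # Sharbati production reported in the abstract

wf_green = cwu_green_m3_per_ha / yield_ton_per_ha
print(f"Green water footprint: {wf_green:.0f} m3/ton")
```

The same quotient structure applies to the blue (irrigation) and grey (pollution-dilution) footprint components; only the water-use term in the numerator changes.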

Keywords: greenhouse gases, nutrients, water footprint, wastewater irrigation

Procedia PDF Downloads 321
360 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor

Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar

Abstract:

Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for it. Custom-made orthoses provide more comfort and can correct issues better than those available over the counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously at up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920 x 1080 px, while the resolution of the depth frame is 512 x 424 px. As the resolutions of the frames are not equal, RGB pixels are mapped onto the depth pixels so that data are not lost even though the depth resolution is lower. The resulting RGB-D frames are collected, and using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes connected by rods to form a skeleton cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system.
Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data are used to correct for any errors in the depth data of the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which produces the rough surface of the limb as an approximation. The mesh is then smoothened to obtain a smooth outer layer and an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast: it is transferred to a CAD/CAM design file, and the brace is designed above the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models, would be quicker than traditional plaster of Paris cast modelling, and has a low overall setup time. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for modelling.
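The step of generating a point cloud from the depth coordinates can be sketched with the standard pinhole back-projection. The intrinsic parameters below are typical published approximations for the Kinect v2 depth camera, not the authors' calibrated values:

```python
import numpy as np

# Approximate Kinect v2 depth-camera intrinsics (assumed, not calibrated).
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point for the 512 x 424 depth frame

def depth_to_point_cloud(depth_m):
    """Back-project a (424, 512) depth frame in meters to an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_m
    x = (u - CX) * z / FX                            # pinhole model: X = (u - cx) Z / fx
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop invalid zero-depth pixels

# Sanity check on a synthetic flat surface 2 m from the sensor.
pts = depth_to_point_cloud(np.full((424, 512), 2.0))
print(pts.shape)
```

In a full pipeline, one such cloud per view would then be rotated and translated into the common cube-based reference frame before Delaunay triangulation.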

Keywords: 3d scanning, mesh generation, Microsoft kinect, orthotics, registration

Procedia PDF Downloads 191
359 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power-law relationship for particle size distributions. Several percussive drilling parameters, such as RPM, applied fluid pressure, and weight on bit, have been shown to influence the particle size distributions of the cuttings generated. This directly influences other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear, and the applied weight on bit can influence the oscillation frequency. Because drilling conditions, and therefore operating parameters, change continuously, real-time understanding of the natural operating frequency is paramount to achieving system optimisation.
Several techniques to determine the oscillating frequency have been investigated and are presented. With a conventional top-drive drilling rig, spectral analysis of the applied fluid pressure, hydraulic feed force pressure, hold-back pressure, and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
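The spectral-analysis step described above amounts to finding the dominant peak in the spectrum of a surface recording. A minimal sketch on a synthetic geophone trace, where the 30 Hz "hammer" tone, sampling rate, and noise level are illustrative values, not field data:

```python
import numpy as np

# Synthetic geophone record: a hammer tone buried in broadband noise.
FS = 1000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / FS)        # 2 s record
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 30.0 * t) + 0.5 * rng.standard_normal(t.size)

# Window, transform, and pick the dominant spectral peak.
spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1 / FS)
peak_hz = freqs[np.argmax(spectrum)]
print(f"Estimated hammer frequency: {peak_hz:.1f} Hz")
```

With a 2 s record the frequency resolution is 0.5 Hz; longer records sharpen the estimate, at the cost of responsiveness to the changing drilling conditions the abstract describes.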

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 230
358 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development

Authors: Sinisa J. Vukicevic

Abstract:

Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built: evidence-based scenario development, in which an urban growth model was a key support tool for framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model was developed using the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: 'Strong Central City', 'Node City', and 'Corridor City'. Each scenario has a narrative story that expresses the scenario's high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set definition for the specific scenario, but also through the policy measures. The model was calibrated to reproduce the known historical land use pattern, using 2006 and 2011 land use data. The validation was done independently, using data not used for the calibration: the model was validated against 2016 data.
In general, the modeling process contains three main phases: 'from qualitative storyline to quantitative modelling', 'model development and model run', and 'from quantitative modelling to qualitative storyline'. The model also incorporates five spatial indicators: distance from residential areas to work, distance from residential areas to recreation, distance to the river valley, urban expansion, and habitat fragmentation. The major findings of this research can be viewed from two perspectives: the planning perspective and the technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios: we explored their similarities and differences and their quantitative indicators, including land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling units), habitat connectivity, and proximity to objects of interest. From the technology perspective, the model showed one very important characteristic: flexibility. The direction for policy testing changed many times during the consultation process, and the model's flexibility in accommodating all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, but also as a communication tool during the consultation process.
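One of the spatial indicators listed above, distance to the river valley, can be illustrated on a toy land-use raster. The grid layout, land-use codes, and 100 m cell size below are assumptions for illustration, not the Edmonton dataset or Metronamica's internal computation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

RES, RIVER, OTHER = 1, 2, 0            # illustrative land-use codes
grid = np.full((50, 50), OTHER)
grid[10:30, 5:20] = RES                # a residential block
grid[:, 45:] = RIVER                   # river valley along the east edge

CELL_M = 100.0                         # assumed raster cell size in meters
# Euclidean distance (in cells) from every cell to the nearest river-valley cell:
# distance_transform_edt measures distance to the nearest zero, so mask out the river.
dist_m = distance_transform_edt(grid != RIVER) * CELL_M
mean_dist_m = dist_m[grid == RES].mean()
print(f"Mean residential distance to river valley: {mean_dist_m:.0f} m")
```

Computing the same indicator on each scenario's simulated land-use map is what makes the three scenarios directly comparable on a common quantitative footing.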

Keywords: urban growth model, scenario development, spatial indicators, Metronamica

Procedia PDF Downloads 95
357 The Origins of Representations: Cognitive and Brain Development

Authors: Athanasios Raftopoulos

Abstract:

In this paper, an attempt is made to explain the evolution and development of humanity's representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of 'thing') means to use, in the mind or in an external medium, a sign that stands for it. The sign can be used as a proxy for the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble what they represent to abstract symbols that are related to their representata through conventions. Relying on the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, one should examine the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability. This examination should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence, synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo erectus was able to use both icons and symbols: icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a purely causal sensory-motor schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemata are tied to specific contexts of organism-environment interaction and are activated only within these contexts.
For a representation of an object to be possible, this schema must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups, to the benefit of the group, was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities required for having iconic and then symbolic representations were present in Homo erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow the construction of the more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increased cranial size and a restructuring of the brain that allowed more complex representational systems to emerge.

Keywords: mental representations, iconic representations, symbols, human evolution

Procedia PDF Downloads 57
356 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral fuel that is renewable and sustainable and has very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gases (GHG) and reduce dependency on fossil fuels. Biomass energy use has other beneficial effects as well, such as employment creation and pollutant reduction. However, most biomass materials cannot compete with fossil fuels in terms of energy content. High moisture content and high volatile matter yields make biomass a low-calorific fuel, which is a significant disadvantage relative to fossil fuels. Besides, the density of biomass is generally low, which complicates transportation and storage. These drawbacks can be overcome by thermal pretreatments that upgrade the fuel properties of biomass. Torrefaction is such a thermal process, in which biomass is heated to up to 300ºC under non-oxidizing conditions to avoid burning the material. The treated biomass is called biochar and has considerably lower moisture, volatile matter, and oxygen contents than the parent biomass. Accordingly, the carbon content and calorific value of biochar increase to levels comparable with those of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay of its structure, is mostly eliminated, and the surface of the biochar becomes hydrophobic upon torrefaction. In order to investigate the effectiveness of the torrefaction process on biomass properties, several biomass species were chosen: olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree). The fuel properties of these biomasses were determined through proximate and ultimate analyses as well as higher heating value (HHV) measurements. For this, samples were first chopped and ground to a particle size below 250 µm.
Then, the samples were subjected to torrefaction in a horizontal tube furnace by heating from ambient temperature up to 200, 250, or 300ºC at a heating rate of 10ºC/min. The resulting biochars were tested by the same methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing the torrefaction temperature led to regular increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300ºC. In contrast, torrefaction at 250ºC appeared optimal for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on their HHV. Increases in carbon content and reductions in oxygen content were also determined. The burning characteristics of the biochars were studied by thermal analysis: using a TA Instruments SDT Q600 thermal analyzer, the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method for upgrading the fuel properties of biomass, yielding biochars with characteristics superior to those of the parent biomasses.
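Torrefaction performance is commonly summarized by the energy densification ratio and the energy yield (mass yield times the HHV ratio). A minimal sketch: only the 6065 kcal/kg biochar HHV comes from the abstract; the raw-biomass HHV and the solid mass yield below are hypothetical illustration values:

```python
# Standard torrefaction measures: energy densification and energy yield.
hhv_raw_kcal_kg = 4200.0     # assumed HHV of untreated OMR (hypothetical)
hhv_char_kcal_kg = 6065.0    # OMR biochar torrefied at 300 C (reported above)
mass_yield = 0.60            # assumed fraction of solid mass retained (hypothetical)

densification = hhv_char_kcal_kg / hhv_raw_kcal_kg
energy_yield = mass_yield * densification    # fraction of original energy retained

print(f"Energy densification ratio: {densification:.2f}")
print(f"Energy yield: {100 * energy_yield:.0f}%")
```

The trade-off these two numbers capture is exactly the one the abstract reports for Rhododendron and ash tree: pushing the temperature to 300ºC may densify less energy into less solid, so a milder 250ºC treatment can be the better optimum.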

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 373
355 The Chinese Inland-Coastal Inequality: The Role of Human Capital and the Crisis Watershed

Authors: Iacopo Odoardi, Emanuele Felice, Dario D'Ingiullo

Abstract:

We investigate the role of human capital in Chinese inland-coastal inequality and ask whether the consequences of the 2007-2008 crisis may induce China to refocus its development path on human capital. We compare panel data analyses for two periods, for the richer coastal provinces and the relatively poor inland provinces. Considering the rapid evolution of the Chinese economy and the changes forced by the international crisis, we ask whether these events can lead to a rethinking of local development paths, fostering greater attention to the diffusion of higher education. We expect that the consequences for human capital may, in turn, affect the inland-coastal dualism. The focus on human capital is motivated by the fact that the growing differences between inland and coastal areas can be explained by different local endowments; in this respect, human capital may play a major role and should be thoroughly investigated. To assess the extent to which human capital affects economic growth, we estimate a fixed-effects model in which differences among the provinces are treated as parametric shifts in the regression equation. Data refer to the 31 Chinese provinces for the periods 1998-2008 and 2009-2017. Our dependent variable is the annual variation of provincial gross domestic product (GDP) at the prices of the previous year. Among our regressors, we include two proxies of advanced human capital and other known factors affecting economic development. Aware of the endogeneity of the human capital variables with respect to GDP, we adopt an instrumental variable approach (two-stage least squares) to avoid inconsistent estimates. Our results suggest that the economic strengths that influenced the Chinese take-off and the dualism are confirmed in the first period; these results gain relevance in comparison with the second period.
An evolution in local economic endowments is taking place: first, although human capital can have a positive effect in all provinces after the crisis, not all types of advanced education have a direct economic effect; second, the development path of the inland area is changing, evolving towards more productive sectors that can favor higher returns to human capital. New strengths (e.g., advanced education, transport infrastructure) could be useful to foster development paths of desirable inland-coastal convergence, especially by favoring the poorer provinces. Our findings suggest that in all provinces, human capital can be useful to promote convergence in growth paths, even if investments in tertiary education seem to play a negative role, most likely due to the inability to exploit the skills of highly educated workers. Furthermore, we observe important changes in the economic characteristics of the less developed inland provinces. These findings suggest an evolution towards more productive economic sectors and a greater ability to exploit both investments in fixed capital and the available infrastructure. All these aspects, if connected with the improvement in the returns to human capital (at least at the secondary level), point to a better reaction (i.e., greater resilience) of the less developed provinces to the crisis effects.
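
The estimation strategy described above (province fixed effects removed by a within transformation, with two-stage least squares to handle the endogeneity of human capital) can be sketched on synthetic data. The data-generating process, coefficient values, and instrument below are illustrative assumptions, not the paper's actual variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n_prov, n_years = 31, 10
n = n_prov * n_years
prov = np.repeat(np.arange(n_prov), n_years)

# Province fixed effects, an instrument z, and a shared shock u that makes
# the human-capital proxy x endogenous with respect to GDP growth y
alpha = rng.normal(0.0, 1.0, n_prov)[prov]
z = rng.normal(0.0, 1.0, n)                      # hypothetical instrument
u = rng.normal(0.0, 0.5, n)                      # endogeneity source
x = 0.8 * z + u + rng.normal(0.0, 0.3, n)        # human-capital proxy
y = alpha + 1.5 * x + u + rng.normal(0.0, 0.3, n)  # GDP growth, true beta = 1.5

def within(v, g, k):
    """Within transformation: subtract the group (province) mean."""
    means = np.bincount(g, weights=v, minlength=k) / np.bincount(g, minlength=k)
    return v - means[g]

xd, yd, zd = (within(v, prov, n_prov) for v in (x, y, z))

# Just-identified 2SLS collapses to the IV ratio cov(z, y) / cov(z, x)
beta_iv = (zd @ yd) / (zd @ xd)
beta_ols = (xd @ yd) / (xd @ xd)   # biased upward by the common shock u
```

In this just-identified case the 2SLS estimator reduces to a ratio of covariances, and the OLS estimate, which absorbs the shared shock, exceeds the consistent IV estimate.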

Keywords: human capital, inland-coastal inequality, Great Recession, China

Procedia PDF Downloads 205
354 A Strategic Approach in Utilising Limited Resources to Achieve High Organisational Performance

Authors: Collen Tebogo Masilo, Erik Schmikl

Abstract:

The demand for the DataMiner product by customers has presented a great challenge for the vendor, Skyline Communications, in deploying its limited resources, in the form of human resources, financial resources, and office space, to achieve high organisational performance in all its international operations. The rapid growth of the organisation has left it unable to efficiently support its existing customers across the globe and provide services to new customers, due to its limited workforce of approximately one hundred employees. A combined descriptive and explanatory case study design was selected, making use of a survey questionnaire distributed to a sample of 100 respondents; 89 responses were returned. The sampling method employed was non-probability sampling, using the convenience sampling method. Frequency analysis and correlation between the subscales (the four themes) were used for statistical analysis to interpret the data. The investigation examined mechanisms that can be deployed to balance the high demand for products and the limited production capacity of the company’s Belgian operations across four aspects: demand management strategies, capacity management strategies, communication methods that can be used to align the sales management department, and reward systems in use to improve employee performance. The conclusions derived from the theme ‘demand management strategies’ are that the company is fully aware of the future market demand for its products; however, there is no evidence that proper demand forecasting is conducted within the organisation. The conclusions derived from the theme ‘capacity management strategies’ are that employees always have a lot of work to complete during office hours and often need help from colleagues with urgent tasks. This indicates that employees frequently work on unplanned tasks and multiple projects.
The conclusions derived from the theme ‘communication methods used to align the sales management department with operations’ are that communication is poor throughout the organisation. Information often stays with management and does not reach non-management employees, and there is a lack of the expected synergy and of good communication between the sales department and the projects office. This has a direct impact on the delivery of projects to customers by the operations department. The conclusions derived from the theme ‘employee reward systems’ are that employees are motivated and feel that they add value in their current functions. However, there are currently no measures in place to identify unhappy employees, and no proper reward systems linked to a performance management system. The research contributes to the body of knowledge by exploring the impact of the four sub-variables, and their interaction, on the challenges of organisational productivity, in particular where an organisation experiences a capacity problem during its growth stage under tough economic conditions. Recommendations were made which, if implemented by management, could further enhance the organisation’s sustained competitive operations.
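
The statistical analysis described (frequency analysis plus correlations between the four theme subscales) can be sketched as follows. The Likert-style scores are simulated stand-ins, since the actual survey data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 89  # achieved sample size

# Hypothetical 5-point Likert scores for the four themes (subscales),
# built from a common base so the subscales are correlated
themes = ["demand_mgmt", "capacity_mgmt", "communication", "rewards"]
base = rng.integers(1, 6, size=n).astype(float)
scores = np.column_stack(
    [np.clip(base + rng.normal(0, 1.0, n), 1, 5) for _ in themes]
)

# Frequency analysis: response counts per Likert point for each subscale
freq = {t: np.bincount(np.rint(col).astype(int), minlength=6)[1:]
        for t, col in zip(themes, scores.T)}

# Pearson correlations between the four subscales
corr = np.corrcoef(scores, rowvar=False)
```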

Keywords: high demand for products, high organisational performance, limited production capacity, limited resources

Procedia PDF Downloads 144
353 Importance of Remote Sensing and Information Communication Technology to Improve Climate Resilience in Low Land of Ethiopia

Authors: Hasen Keder Edris, Ryuji Matsunaga, Toshi Yamanaka

Abstract:

The issue of climate change and its impact is a major contemporary global concern. Ethiopia is one of the countries experiencing adverse climate change impacts, including frequent extreme weather events that are exacerbating drought and water scarcity. For this reason, the government of Ethiopia has developed a strategic document focusing on a climate-resilient green economy. One of the major components of the strategic framework is designed to improve community adaptation capacity and mitigation of drought. For effective implementation of the strategy, identifying regions' relative vulnerability to drought is vital. There is a growing tendency to apply Geographic Information System (GIS) and remote sensing technologies to collect information on the duration and severity of drought, through direct measurement of topography as well as indirect measurement of land cover. This study aims to show an application of remote sensing technology and GIS for developing a drought vulnerability index, taking the lowland of Ethiopia as a case study. In addition, it assesses the integrated Information Communication Technology (ICT) potential of the Ethiopian lowland and proposes an integrated solution. Satellite data is used to detect the onset of drought. The severity of drought in risk-prone pastoral livestock-keeping areas is analyzed through the normalized difference vegetation index (NDVI) and ten years of rainfall data. The change from the existing and average SPOT NDVI, together with the vegetation condition index, is used to identify the onset of drought and potential risks. Secondary data is used to analyze the geographical coverage of mobile and internet usage in the region. For decades, the government of Ethiopia has introduced technologies and approaches to overcome climate-change-related problems. However, lack of access to information and inadequate technical support for the pastoral areas remain major challenges.
Under the conventional business-as-usual approach, the lowland pastoralists continue to face a number of challenges. The results indicated that 80% of the region faces frequent drought occurrence, and of this, 60% of the pastoral area faces high drought risk. On the other hand, mobile phone and internet coverage in the target area is growing rapidly. One identified ICT enabler is the telecom centre, which covers 98% of the region. It was possible to identify the frequently affected areas and potential drought risk using the NDVI remote sensing data analyses. We also found that ICT can play an important role in mitigating the climate change challenge. Hence, there is a need to strengthen implementation efforts for climate change adaptation through integrated remote sensing, web-based information dissemination, and mobile alerts of extreme events.

Keywords: climate changes, ICT, pastoral, remote sensing

Procedia PDF Downloads 315
352 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Difference Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring the tissue’s displacement in response to mechanical waves. The estimated metrics of tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due either to the inverse-problem nature or to noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by Finite-Difference Time-Domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects was simulated, and their corresponding wave images were generated.
The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error, or RMSE, divided by the averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than the 78.20%±1.11% by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method’s performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method shows much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
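
The evaluation metric used above (RMSE divided by the averaged shear modulus) is easy to state in code. The shear-modulus maps below are synthetic stand-ins for the true map and two reconstructions, and the noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ground-truth shear-modulus map (kPa) with a stiff inclusion
true_mu = np.full((64, 64), 3.0)
true_mu[20:40, 20:40] = 10.0

# Stand-ins for two reconstructions: a low-noise one and a noisy one
est_dl = true_mu + rng.normal(0, 0.2, true_mu.shape)
est_lfe = true_mu + rng.normal(0, 2.5, true_mu.shape)

def relative_error(true, est):
    """RMSE divided by the mean true shear modulus, as in the abstract."""
    rmse = np.sqrt(np.mean((est - true) ** 2))
    return rmse / true.mean()

err_dl = relative_error(true_mu, est_dl)
err_lfe = relative_error(true_mu, est_lfe)
```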

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 68
351 ‘Call Before, Save Lives’: Reducing Emergency Department Visits through Effective Communication

Authors: Sandra Cardoso, Gaspar Pais, Judite Neves, Sandra Cavaca, Fernando Araújo

Abstract:

In 2021, Portugal had 63 emergency department (ED) visits per 100 people, one of the highest rates in Europe. While EDs provide a critical service, high use is indicative of inappropriate and inefficient healthcare. In Portugal, all EDs use the Manchester Triage System (MTS), a clinical risk management tool that ensures patients are seen in order of clinical priority. In 2023, more than 40% of ED visits were for non-urgent conditions (blue and green) that could be better managed in primary health care (PHC), reflecting a wrong use of resources and a lack of health literacy. Since 2017, the country has had a phone line, SNS24 (Contact Centre of the National Health Service), offering triage, counseling, and referral services 24 hours a day, 7 days a week. The pilot project ‘Call Before, Save Lives’ was implemented in the municipalities of Póvoa de Varzim and Vila do Conde (around 150,000 residents) in May 2023 by the executive board of the Portuguese Health Service, with the support of the Shared Services of the Ministry of Health and local authorities. This geographical area has short travel times, 99% of the population has a family doctor, and the region is organized in a health local unit (HLU) integrating PHC and the local hospital. The purposes of this project included increasing awareness to contact SNS24 before going to an ED and orienting non-urgent conditions to a family doctor, thereby reducing ED visits. The implementation of the project involved two phases, beginning with: i) development of campaigns using local influencers (a fishmonger, a model, a fireman) through local institutions and media; ii) provision of telephones installed on site to contact SNS24; iii) establishment of open consultations in PHC; iv) promotion of the use of SNS24; v) creation of acute consultations at the hospital for complex chronic patients; and vi) direct referral for home hospitalization by PHC.
The results of the first phase showed an excellent level of access to SNS24 and an increase in the number of users referred to the ED by the line, with great satisfaction among users and professionals. In the second phase, initiated in January 2024, prior referral was established as an admission rule for access to the ED, except for certain situations, such as trauma patients. If the patient refuses, their registration in the ED and subsequent screening in accordance with the MTS must be ensured. When the patient is non-urgent, they shall not be observed in the ED, provided that, according to their clinical condition, referral to PHC or to a consultation/day hospital is guaranteed through effective scheduling of an appointment for the same or the following day. In terms of results, 8 weeks after the beginning of phase 2, we observed a decrease in self-referred ED patients from 59% to 15% and a reduction of around 7% in ED visits. The key to this success was an effective public campaign that increased knowledge of the right use of the health system and was capable of changing behaviors.

Keywords: contact centre of the national health service, emergency department visits, public campaign, health literacy, SNS24

Procedia PDF Downloads 67
350 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology to tackle the voltage control problem in a distributed manner with minimal latency. This paper proposes a multi-agent based distributed voltage level control. In this method, a flat architecture of agents is used, and the agents involved in the controlling procedure are the On Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to nominal. The total loss cost is the sum of the network losses cost, the DG curtailment costs, and a voltage damage cost (based on a penalty function implementation). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and the losses cost against a varying voltage limit band. The method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, each downstream from its controlling device. First, a token is generated by the OLTCA at each time step and passes from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or if the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is considered as a last resort.
For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration, the loss reduction is less significant. Another observation is that the new stricter limits calculated by cost optimization move towards the statutory limits of ±10% of nominal with increasing DG penetration: for 25, 45, and 65% penetration, the calculated limits are ±5%, ±6.25%, and ±8.75%, respectively. The observed results lead to the conclusion that the voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly, but with higher losses. In contrast, case 2 reduces the network losses gradually over time through the proposed iterative method of loss cost optimization by the OLTCA.
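
The loss-cost optimization described (a total cost combining a losses-plus-curtailment term that falls as the voltage band widens and a penalty-function damage term that rises with it, minimized over candidate bands) can be sketched as follows. The cost coefficients are invented for illustration and do not come from the simulations:

```python
import numpy as np

# Candidate half-widths of the voltage limit band, in per unit of nominal
bands = np.linspace(0.02, 0.10, 33)   # ±2% ... ±10%, the statutory maximum

# Tighter bands force more losses and DG curtailment; wider bands incur
# more voltage damage. These curves are illustrative assumptions only.
loss_cost = 50.0 / bands              # losses + curtailment, falls with band
damage_cost = 4e5 * bands ** 2        # penalty-function voltage damage, rises

total_cost = loss_cost + damage_cost
best_band = bands[np.argmin(total_cost)]   # optimal limit, stricter than ±10%
```

The minimum sits strictly inside the candidate range, which mirrors the paper's finding that the cost-optimal limits are stricter than the statutory ±10%.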

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 312
349 Foslip Loaded and CEA-Affimer Functionalised Silica Nanoparticles for Fluorescent Imaging of Colorectal Cancer Cells

Authors: Yazan S. Khaled, Shazana Shamsuddin, Jim Tiernan, Mike McPherson, Thomas Hughes, Paul Millner, David G. Jayne

Abstract:

Introduction: There is a need for real-time imaging of colorectal cancer (CRC) to allow surgery tailored to the disease stage. Fluorescence-guided laparoscopic imaging of primary colorectal cancer and the draining lymphatics would potentially bring stratified surgery into clinical practice and realign future CRC management to the needs of patients. Fluorescent nanoparticles can offer many advantages in terms of intra-operative imaging and therapy (theranostics) in comparison with traditional soluble reagents. Nanoparticles can be functionalised with diverse reagents and then targeted to the correct tissue using an antibody or Affimer (artificial binding protein). We aimed to develop and test fluorescent silica nanoparticles targeted against CRC using an anti-carcinoembryonic antigen (CEA) Affimer (Aff). Methods: Anti-CEA and control myoglobin Affimer binders were subcloned into the expression vector pET11, followed by transformation into BL21 Star™ (DE3) E. coli. The expression of Affimer binders was induced using 0.1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). Cells were harvested, lysed and purified using nickel-chelating affinity chromatography. The photosensitiser Foslip (a soluble analogue of 5,10,15,20-tetra(m-hydroxyphenyl)chlorin) was incorporated into the core of silica nanoparticles using a water-in-oil microemulsion technique. Anti-CEA or control Affs were conjugated to the silica nanoparticle surface using the sulfosuccinimidyl-4-(N-maleimidomethyl)cyclohexane-1-carboxylate (sulfo-SMCC) chemical linker. Binding of CEA-Aff or control nanoparticles to colorectal cancer cells (LoVo, LS174T and HCT116) was quantified in vitro using confocal microscopy. Results: The molecular weight of the obtained Affimer bands was ~12.5 kDa, while the diameter of the functionalised silica nanoparticles was ~80 nm.
CEA-Affimer targeted nanoparticles demonstrated 9.4, 5.8 and 2.5 fold greater fluorescence than control in LoVo, LS174T and HCT116 cells respectively (p < 0.002) for the single-slice analysis. A similar pattern of successful CEA-targeted fluorescence was observed in the maximum image projection analysis, with CEA-targeted nanoparticles demonstrating 4.1, 2.9 and 2.4 fold greater fluorescence than control particles in LoVo, LS174T, and HCT116 cells respectively (p < 0.0002). There was no significant difference in fluorescence between CEA-Affimer and CEA-antibody targeted nanoparticles. Conclusion: We are the first to demonstrate that Foslip-doped silica nanoparticles conjugated to anti-CEA Affimers via SMCC allow tumour cell-specific fluorescent targeting in vitro, and they have shown sufficient promise to justify testing in an animal model of colorectal cancer. The CEA-Affimer appears to be a suitable targeting molecule to replace the CEA-antibody. Targeted silica nanoparticles loaded with the Foslip photosensitiser are now being optimised to drive photodynamic killing via reactive oxygen generation.

Keywords: colorectal cancer, silica nanoparticles, Affimers, antibodies, imaging

Procedia PDF Downloads 240
348 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation

Authors: Matthias Leitner, Gernot Pottlacher

Abstract:

Experimental determination of critical point data such as critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare, due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions could be diamond anvil devices, two-stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressures would be another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting, and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting, metals. However, the critical point can also be estimated by extrapolating the liquid phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much of the liquid phase as possible and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine the thermal volume expansion, and from that the density, of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm.
To increase the accuracy of the temperature deduction, the spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles that are calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, the volume expansion is then converted into density data. The so-obtained liquid density behavior is compared to existing literature data and provides another independent source of experimental data. In a second step, the newly determined off-critical liquid phase density was utilized as input data for the estimation of niobium’s critical point. The approach used heuristically takes into account the crossover from mean-field to Ising behavior, as well as the non-linearity of the phase diagram’s diameter.
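
Because longitudinal expansion is inhibited, the volume scales with the square of the wire diameter, so density follows directly from the measured diameters. The diameter series below is invented for illustration, while the room-temperature density of niobium (about 8570 kg/m³) is a standard handbook value:

```python
import numpy as np

rho_room = 8570.0   # room-temperature density of niobium, kg/m^3
d0 = 0.5e-3         # initial wire diameter, m

# Hypothetical FWHM diameters extracted from successive shadow images
# during heating (the wire widens as temperature rises)
diameters = np.array([0.500, 0.510, 0.525, 0.545, 0.570]) * 1e-3

# Longitudinal expansion is inhibited, so V ~ d^2 and rho ~ (d0/d)^2
density = rho_room * (d0 / diameters) ** 2
```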

Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion

Procedia PDF Downloads 219
347 Soft Pneumatic Actuators Fabricated Using Soluble Polymer Inserts and a Single-Pour System for Improved Durability

Authors: Alexander Harrison Greer, Edward King, Elijah Lee, Safa Obuz, Ruhao Sun, Aditya Sardesai, Toby Ma, Daniel Chow, Bryce Broadus, Calvin Costner, Troy Barnes, Biagio DeSimone, Yeshwin Sankuratri, Yiheng Chen, Holly Golecki

Abstract:

Although a relatively new field, soft robotics is experiencing a rise in applicability in the secondary school setting through The Soft Robotics Toolkit, shared fabrication resources, and a design competition. Exposing students outside of university research groups to this rapidly growing field allows for development of the soft robotics industry in new and imaginative ways. Soft robotic actuators have remained difficult to implement in classrooms because of their relative cost or difficulty of fabrication. Traditionally, a two-part molding system is used; however, this configuration often results in delamination. In an effort to make soft robotics more accessible to young students, we aim to develop a simple, single-mold method of fabricating soft robotic actuators from common household materials. These actuators are made by embedding a soluble polymer insert into silicone. These inserts can be made from hand-cut polystyrene, 3D-printed polyvinyl alcohol (PVA) or acrylonitrile butadiene styrene (ABS), or molded sugar. The insert is then dissolved using an appropriate solvent such as water or acetone, leaving behind a negative form which can be pneumatically actuated. The resulting actuators are seamless, eliminating the instability of adhering multiple layers together. The benefit of this approach is twofold: it simplifies the process of creating a soft robotic actuator, and in turn, increases its effectiveness and durability. To quantify the increased durability of the single-mold actuator, it was tested against the traditional two-part mold. The single-mold actuator could withstand actuation at 20 psi for 20 times the duration when compared to the traditional method. The ease of fabrication of these actuators makes them more accessible to hobbyists and students in classrooms.
After developing these actuators, we applied them, in collaboration with a ceramics teacher at our school, to a glove used to transfer the nuanced hand motions of throwing pottery from an expert artist to a novice. We quantified the improvement in the users’ pottery-making skill when wearing the glove using image analysis software. The seamless actuators proved to be robust in this dynamic environment. Seamless soft robotic actuators created by high school students show the applicability of the Soft Robotics Toolkit for secondary STEM education and outreach. Making students aware of what is possible through projects like this will inspire the next generation of innovators in materials science and robotics.

Keywords: pneumatic actuator fabrication, soft robotic glove, soluble polymers, STEM outreach

Procedia PDF Downloads 134
346 Accelerating Personalization Using Digital Tools to Drive Circular Fashion

Authors: Shamini Dhana, G. Subrahmanya VRK Rao

Abstract:

The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. The trend of upcycling clothing and materials into personalized fashion is being demanded by the next generation. There is a need for a digital tool to accelerate this process towards mass customization. Dhana’s D/Sphere fashion technology platform uses digital tools to accelerate upcycling. In essence, advanced fashion garments can be designed and developed via reuse, repurposing, and recreating activities, using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) an opportunity to develop modern fashion using existing, finished materials and clothing without chemicals or water consumption; (2) the potential for an everyday customer and designer to use the medium of fashion for creative expression; (3) a solution to address the global textile waste generated by pre- and post-consumer fashion; (4) a solution to reduce carbon emissions, water, and energy consumption with the participation of all stakeholders; (5) an opportunity for brands, manufacturers, and retailers to work towards zero-waste designs and an alternative revenue stream. Other benefits of this alternative approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture, deep learning, and hyperheuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted to fashion stakeholders, will lower environmental costs, increase revenues through up-to-date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution for the end customer to be part of circular fashion.
The broader impact of this technology will result in a different mindset towards circular fashion, increase the value of the product through multiple life cycles, find alternatives towards zero waste, and reduce the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies that have the responsibility to reduce their environmental impact and contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the output of the $3 trillion fashion and apparel industry ends up in landfills. To this end, the industry needs such alternative techniques to address global textile waste as well as provide an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation using digital tools has the potential to revolutionize the way we look at communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, as well as its positive impacts on society, our environment, and global climate change.

Keywords: circular fashion, deep learning, digital technology platform, personalization

Procedia PDF Downloads 66
345 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is a measurement of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Next, the top-rated wells, identified by their consistently high ROP, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value; this phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology.
These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat map as a function of ROP. A more optimal ROP operating point is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built using data points from the 20 top-performing historical wells. The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
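As an illustration of the phase-one parameter estimation, the sketch below interpolates the controllable parameters (WOB, RPM, GPM) at a planned-well location as an inverse-distance-weighted mean of offset-well values. The well coordinates and parameter values are hypothetical; the abstract does not publish the field data.

```python
import numpy as np

def idw_parameters(query_xy, well_xy, params, power=2.0):
    """Inverse Distance Weighting: estimate drilling parameters at a
    query location as a distance-weighted mean of offset-well values."""
    d = np.linalg.norm(well_xy - query_xy, axis=1)
    if np.any(d == 0):                      # query coincides with a well
        return params[np.argmin(d)]
    w = 1.0 / d ** power
    return (w[:, None] * params).sum(axis=0) / w.sum()

# Hypothetical offset wells: (x, y) locations and [WOB, RPM, GPM] at peak ROP
well_xy = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
params = np.array([[25.0, 120.0, 800.0],
                   [30.0, 140.0, 900.0],
                   [20.0, 110.0, 750.0]])
est = idw_parameters(np.array([1.0, 1.0]), well_xy, params)
```

Because the weights are positive and sum to one, each estimated parameter stays within the range observed across the offset wells, which matches the "conditioned mean" behaviour described above.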

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 131
344 The Use of Remotely Sensed Data to Model Habitat Selections of Pileated Woodpeckers (Dryocopus pileatus) in Fragmented Landscapes

Authors: Ruijia Hu, Susanna T.Y. Tong

Abstract:

Light detection and ranging (LiDAR) and four-channel red, green, blue, and near-infrared (RGBI) remotely sensed imagery allow accurate quantification and contiguous measurement of vegetation characteristics and forest structure. This information facilitates the generation of habitat structure variables for forest species distribution modelling. However, applications of remote sensing data, especially the combination of structural and spectral information, to support evidence-based decisions in forest management and conservation practices at the local scale are not widely adopted. In this study, we examined the habitat requirements of the pileated woodpecker (Dryocopus pileatus) (PW) in Hamilton County, Ohio, using ecologically relevant forest structural and vegetation characteristics derived from LiDAR and RGBI data. We hypothesized that the habitat of the PW is shaped by vegetation characteristics that are directly associated with the availability of food, hiding, and nesting resources, the spatial arrangement of habitat patches within the home range, and proximity to water sources. We used 186 PW presence or absence locations to model their occurrence with a generalized additive model (GAM) at two scales, representing foraging range and home range size, respectively. The results confirm the PW's preference for tall and large mature stands with structural complexity, typical of late-successional or old-growth forests. In addition, the crown size of dead trees shows a positive relationship with PW occurrence, indicating the importance of declining living trees or early-stage dead trees within the PW home range. These locations are preferred by the PW for nest cavity excavation as it attempts to balance the ease of excavation and tree security. We also found that the PW can adjust its travel distance to the nearest water resource, suggesting that habitat fragmentation has some impact on the species. 
Based on our findings, we recommend that forest managers use different priorities to manage nesting, roosting, and feeding habitats. In particular, when devising forest management and hazard tree removal plans, enough cavity trees need to be retained within high-quality PW habitat. By mapping PW habitat suitability for the study area, we highlight the importance of riparian corridors in helping the PW adjust to the fragmented urban landscape. Indeed, habitat improvement for the PW in the study area could be achieved by conserving riparian corridors and promoting riparian forest succession along major rivers in Hamilton County.
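To illustrate the kind of model used, the sketch below fits a logistic presence/absence model with one smooth term, a minimal numpy-only stand-in for a single-covariate GAM fitted by iteratively reweighted least squares. The "canopy height" covariate, its effect shape, and all data are hypothetical; the study's survey data and full covariate set are not reproduced here.

```python
import numpy as np

def spline_basis(x, knots):
    """Truncated-power cubic spline basis: [1, x, x^2, x^3, (x-k)^3_+, ...]."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def fit_logistic_gam(x, y, knots, iters=30, ridge=1e-6):
    """IRLS fit of a logistic model with one smooth term: a minimal
    single-covariate GAM stand-in (not the study's actual model)."""
    B = spline_basis(x, knots)
    beta = np.zeros(B.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-B @ beta))
        W = p * (1.0 - p) + 1e-9                     # IRLS weights
        z = B @ beta + (y - p) / W                   # working response
        lhs = (B.T * W) @ B + ridge * np.eye(B.shape[1])
        beta = np.linalg.solve(lhs, (B.T * W) @ z)
    return B, beta

# Hypothetical scaled covariate in [0, 1]; occurrence odds rise with it
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=186)                  # 186 sites, as in the study
p_true = 1.0 / (1.0 + np.exp(-6.0 * (x - 0.5)))
presence = (rng.random(186) < p_true).astype(float)
B, beta = fit_logistic_gam(x, presence, knots=[0.33, 0.66])
p_hat = 1.0 / (1.0 + np.exp(-B @ beta))
```

The spline basis lets the fitted log-odds curve bend at the knots, which is what distinguishes a GAM smooth from a plain linear logistic term.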

Keywords: deadwood detection, generalized additive model, individual tree crown delineation, LiDAR, pileated woodpecker, RGBI aerial imagery, species distribution models

Procedia PDF Downloads 53
343 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam

Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck

Abstract:

Rapid growth and complex composition have made waste electrical and electronic equipment (e-waste) one of the most problematic waste streams worldwide. Precise information on its magnitude at the national, regional, and global levels has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e., data on the production, sale, and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered EEE, which 'invisibly' enters the domestic EEE market and is then used for domestic consumption. The non-registered and (in most cases) illicit nature of this flow makes it difficult or even impossible to capture in any statistical system. The e-waste generated from it is thus often uncounted in current e-waste estimates based on statistical market data. This study therefore focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model (the Sale-Stock-Lifespan model) has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps improve the quality of the input data for modeling (i.e., it consolidates data to create a more accurate lifespan profile and models a dynamic lifespan that accounts for changes over time), through which the quality of the e-waste estimate can be improved. To demonstrate these objectives, a case study on televisions (TVs) in Vietnam was carried out. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue; in 2035, a total of 9.51 million TVs are predicted to be discarded. 
Moreover, the estimate of the non-registered TV inflow shows that it may have contributed, on average, about 15% of total TV sales on the Vietnamese market over the period 2002 to 2013. To address potential uncertainties in the estimation models and input data, a sensitivity analysis was performed. The results show that both the waste and non-registered inflow estimates depend on two parameters: the number of TVs used per household and the lifespan. In particular, a 1% increase in the TV in-use rate raises the average market share of the non-registered inflow over 2002-2013 by 0.95%, whereas replacing the constant unadjusted lifespan with a dynamic adjusted lifespan lowers that share from 27% to 15%. The effect of these two parameters on annual waste TV generation is more complex and non-linear over time. To conclude, despite the remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can be further improved in the future with more knowledge and data.
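The core of a Sale-Stock-Lifespan calculation can be sketched as a convolution of annual sales with a lifespan distribution: units sold in year i are discarded in year i+a with the probability that their lifespan equals a. The sketch below uses a hypothetical Weibull lifespan; the study's actual lifespan profiles and TV sales data are not public.

```python
import numpy as np

def waste_generation(sales, shape, scale):
    """Sale-Lifespan model: distribute each sales-year cohort over future
    discard years using a discretized Weibull lifespan distribution."""
    n = len(sales)
    ages = np.arange(n + 1)
    cdf = 1.0 - np.exp(-(ages / scale) ** shape)    # Weibull lifespan CDF
    p_discard = np.diff(cdf)                        # P(discard at age a)
    waste = np.zeros(n)
    for i, sold in enumerate(sales):
        for a, p in enumerate(p_discard):
            if i + a < n:                           # discard falls in horizon
                waste[i + a] += sold * p
    return waste

# Hypothetical flat sales of 100 units/year over a 10-year horizon
sales = np.array([100.0] * 10)
waste = waste_generation(sales, shape=2.0, scale=5.0)
```

A dynamic lifespan, as used in the study, would replace the single (shape, scale) pair with parameters that vary by sales year.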

Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam

Procedia PDF Downloads 246
342 Leadership Values in Succession Processes

Authors: Peter Heimerl, Alexander Plaikner, Mike Peters

Abstract:

Background and Significance of the Study: Family-run businesses are a decisive economic factor in the Alpine tourism and leisure industry. Within the next years, a large number of family-run small and medium-sized businesses are expected to transfer ownership due to demographic developments. Several empirical studies identify four stages in succession processes: (1) the preparation phase, (2) the succession planning phase, (3) the development of the succession concept, and (4) the implementation of the business transfer. Family business research underlines the importance of individual and family values: leadership values in particular shape the first phase, which strongly determines the following stages. Aim of the Study: The study aims to answer the following research question: Which leadership values dominate succession processes in family-run businesses in the Austrian Alpine tourism industry? Methodology: Twenty-two problem-centred individual interviews with 11 transferors and their 11 transferees were conducted. Data analysis was carried out using the software program MAXQDA, following an inductive approach to data coding. Major Findings: Data analysis shows that nine values particularly influence succession processes, especially during the vulnerable preparation phase. Participation is the most dominant value (162 references); it covers a style of cooperation, communication, and controlling. Discipline (142) prevails especially from the transferor's perspective and addresses entrepreneurial honesty and customer orientation. Development (138) is seen as an important value, but transferors and transferees emphasize different aspects of it; the latter focus mainly on strategic positioning and new technologies. Trust (105) is interpreted as a basic prerequisite for running the family firm smoothly. 
Interviewees underline the importance of being able to take a break from family-business management; however, this is only possible when openness and honesty establish trust within the family firm. Loyalty (102): almost all interviewees perceive that they can influence the loyalty of employees through their own role-model behaviour. A good work-life balance (90) is very important to most transferors, especially for their employees; despite its communicated importance, however, commitment to the company is mostly prioritised. Considerations of regionality (82) and regional responsibility are also frequently raised. Appreciation (75) is of great importance to both the handover and the takeover generation, as appreciation towards the employees in the company and especially in connection with the family. Familiarity (66) and the blurring of the boundaries between private and professional life are very common, especially in family businesses; familial contact and open communication with employees are mentioned in almost all handovers. Conclusions: In the preparation phase of succession, successors and incumbents have to consider and discuss their leadership and family values of family-business management. Quite often, assistance is needed to discuss these values openly and jointly in the early stages of succession processes; a large majority of handovers fail because of conflicts over these values. Implications can be drawn to support family businesses: e.g., consulting initiatives at chambers of commerce and business consultancies must address this problem.

Keywords: leadership values, family business, succession processes, succession phases

Procedia PDF Downloads 98
341 Seasonal Variability of Picoeukaryotes Community Structure Under Coastal Environmental Disturbances

Authors: Benjamin Glasner, Carlos Henriquez, Fernando Alfaro, Nicole Trefault, Santiago Andrade, Rodrigo De La Iglesia

Abstract:

A central question in ecology concerns the relative importance of local-scale variables for community composition compared with regional-scale variables. In coastal environments, strong seasonal abiotic influences dominate these systems, weakening the impact of other parameters such as micronutrients. Since the industrial revolution, micronutrients such as trace metals have increased in the ocean as pollutants, with strong effects on biotic entities and biological processes in coastal regions. Coastal picoplankton communities have been characterized as a cyanobacteria-dominated fraction, but in recent years the eukaryotic component of this size fraction has gained relevance due to its strong influence on the carbon cycle, although its diversity patterns and responses to disturbances remain poorly understood. South Pacific upwelling coastal environments represent an excellent model for studying seasonal changes because of the strong seasonal differences in the availability of macro- and micronutrients. In addition, some well-constrained coastal bays of this region have been subjected to strong disturbances due to trace metal inputs. In this study, we aim to compare the influence of seasonality and trace metal concentrations on the community structure of planktonic picoeukaryotes. To describe seasonal patterns in the study area, a six-year satellite time series and in-situ measurements with traditional oceanographic instrumentation such as CTDO equipment were used. In addition, trace metal concentrations were analyzed through ICP-MS for the same region. For biological data collection, field campaigns were performed in 2011-2012, and the picoplankton community was described by flow cytometry and taxonomic characterization with next-generation sequencing of ribosomal genes. The relation between the abiotic and biotic components was finally determined by multivariate statistical analysis. 
Our data show strong seasonal fluctuations in abiotic parameters such as photosynthetically active radiation and sea surface temperature, with a clear differentiation of seasons. Trace metal analysis, however, allows the study area to be divided into two zones based on trace metal concentrations. The biological data indicate no major changes in diversity but significant fluctuations in evenness and community structure. These changes are related mainly to regional parameters, such as temperature; however, by analyzing the influence of metals on picoplankton community structure, we identified a differential response of some plankton taxa to metal pollution. We propose that some picoeukaryotic plankton groups respond differentially to metal inputs by changing their nutritional status and/or requirements under disturbance, as a derived outcome of toxic effects and tolerance.

Keywords: picoeukaryotes, plankton communities, trace metals, seasonal patterns

Procedia PDF Downloads 173
340 Reliability and Availability Analysis of Satellite Data Reception System using Reliability Modeling

Authors: Ch. Sridevi, S. P. Shailender Kumar, B. Gurudayal, A. Chalapathi Rao, K. Koteswara Rao, P. Srinivasulu

Abstract:

System reliability and availability evaluation play a crucial role in ensuring the seamless operation of complex satellite data reception systems with consistent performance over long periods. This paper presents a novel approach using a case study of one of the antenna systems at a satellite data reception ground station in India. The methodology involves analyzing the system's components and their failure rates, the system architecture, generating a logical reliability block diagram model, and estimating system reliability from component-level mean time between failures, assuming an exponential distribution, to derive a baseline estimate. The model is then validated against system-level field failure data collected from operational satellite data reception systems, including the failures that occurred, failure times, failure criticality, and repair times, using statistical techniques such as median rank regression and Weibull analysis to extract meaningful insights on failure patterns and the practical reliability of the system, and to assess the accuracy of the developed reliability model. The study focused mainly on identifying critical units within the system that are prone to failure and have a significant impact on overall performance, and produced a reliability model of the identified critical unit. This model takes into account the interdependencies among system components and their impact on overall system reliability, provides valuable insight into whether the system improves or degrades over time, and is a vital input for optimizing the design of future developments. It also provides a plug-and-play framework for understanding the effect on system performance of any upgrades or new unit designs. 
It helps in effective planning and in formulating contingency plans to address potential system failures, ensuring continuity of operations. Furthermore, to instill confidence in system users, the duration for which the system can operate continuously with the desired 3-sigma level of reliability was estimated, which turned out to be a vital input to the maintenance plan. System availability and station availability were also assessed under clash and non-clash scenarios to determine overall system performance and potential bottlenecks. Overall, this paper establishes a comprehensive methodology for the reliability and availability analysis of complex satellite data reception systems. The results facilitate effective planning of contingency measures, give users confidence in system performance, and enable decision-makers to make informed choices about system maintenance, upgrades, and replacements. The approach also aids in identifying critical units, assessing system availability in various scenarios, minimizing downtime, and optimizing resource allocation.
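Median-rank regression for a two-parameter Weibull, as named above, can be sketched as follows: sort the failure times, assign each the Bernard median-rank estimate of its CDF value, linearize the Weibull CDF, and fit a straight line. The failure times below are synthetic; the paper's field data are not published.

```python
import numpy as np

def weibull_fit_median_rank(failure_times):
    """Fit a two-parameter Weibull to failure times by median-rank
    regression: linearize the CDF and solve with least squares."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    i = np.arange(1, n + 1)
    F = (i - 0.3) / (n + 0.4)          # Bernard's median-rank approximation
    x = np.log(t)
    y = np.log(-np.log(1.0 - F))       # Weibull-plot ordinate
    beta, c = np.polyfit(x, y, 1)      # slope = shape parameter beta
    eta = np.exp(-c / beta)            # scale (characteristic life)
    return beta, eta

# Synthetic failures from a known Weibull (shape 2, scale 100 hours)
rng = np.random.default_rng(42)
u = rng.random(500)
t = 100.0 * (-np.log(1.0 - u)) ** (1.0 / 2.0)
beta, eta = weibull_fit_median_rank(t)
```

A fitted shape parameter above 1 indicates wear-out (increasing failure rate), below 1 infant mortality, and close to 1 the constant-rate exponential case assumed for the baseline model.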

Keywords: exponential distribution, reliability modeling, reliability block diagram, satellite data reception system, system availability, Weibull analysis

Procedia PDF Downloads 84
339 Impact of Climate Change on Irrigation and Hydropower Potential: A Case of Upper Blue Nile Basin in Western Ethiopia

Authors: Elias Jemal Abdella

Abstract:

The Blue Nile River is an important shared resource of Ethiopia, Sudan, and, because it is the major contributor of water to the main Nile River, Egypt. Despite the potential benefits of regional cooperation and integrated joint basin management, all three countries continue to pursue unilateral development plans. Moreover, there is great uncertainty about the likely impacts of climate change on water availability for existing as well as proposed irrigation and hydropower projects in the Blue Nile Basin. The main objective of this study is to quantitatively assess the impact of climate change on the hydrological regime of the upper Blue Nile basin, western Ethiopia. Three models were combined. A dynamic Coordinated Regional Climate Downscaling Experiment (CORDEX) regional climate model (RCM) was used to determine climate projections for the Upper Blue Nile basin under the Representative Concentration Pathways (RCP) 4.5 and 8.5 greenhouse gas emission scenarios for the period 2021-2050. The outputs generated from a multimodel ensemble of four CORDEX RCMs (i.e., rainfall and temperature) were used as input to a Soil and Water Assessment Tool (SWAT) hydrological model, which was set up, calibrated, and validated with observed climate and hydrological data. The outputs from the SWAT model (i.e., projected river flows) were used as input to a Water Evaluation and Planning (WEAP) water resources model, which was used to determine the water resources implications of the changes in climate. The WEAP model was set up to simulate three development scenarios: the Current Development scenario, representing the existing water resource development situation; the Medium-term Development scenario, covering planned water resource development expected to be commissioned before 2025; and the Long-term Full Development scenario, covering all planned water resource development likely to be commissioned before 2050. 
The projected mean annual temperature for the period 2021-2050 over most of the basin is 1 to 1.4°C warmer than the baseline (1982-2005) average, implying an increase in evapotranspiration loss. Subbasins already distressed by drought may face even greater challenges in the future. Projected mean annual precipitation varies from subbasin to subbasin: in the eastern, north-eastern, and south-western highlands of the basin, mean annual precipitation is likely to increase by up to 7%, whereas in the western lowland part of the basin it is projected to decrease by 3%. The water use simulation indicates that the current irrigation demand in the basin is 1.29 Bm3y-1 for 122,765 ha of irrigated area. By 2025, with new schemes being developed, irrigation demand is estimated to increase to 2.5 Bm3y-1 for 277,779 ha; by 2050, to 3.4 Bm3y-1 for 372,779 ha. The hydropower generation simulation indicates that 98% of the hydroelectric potential could be produced if all planned dams are constructed.
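The per-hectare demand implied by these figures is a quick consistency check on the three scenarios; the scenario labels below paraphrase the abstract's development stages, and Bm3 is read as billion cubic metres.

```python
# Irrigation water demand per hectare implied by the basin figures above
scenarios = {
    "current": (1.29e9, 122_765),        # demand (m3/yr), irrigated area (ha)
    "medium-term (2025)": (2.5e9, 277_779),
    "long-term (2050)": (3.4e9, 372_779),
}
for name, (demand_m3, area_ha) in scenarios.items():
    print(f"{name}: {demand_m3 / area_ha:,.0f} m3/ha/yr")
```

All three scenarios imply roughly 9,000-10,500 m3/ha/yr, so the planned expansion scales demand broadly in proportion to area.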

Keywords: Blue Nile River, climate change, hydropower, SWAT, WEAP

Procedia PDF Downloads 355
338 The Influence of Minority Stress on Depression among Thai Lesbian, Gay, Bisexual, and Transgender Adults

Authors: Priyoth Kittiteerasack, Alana Steffen, Alicia K. Matthews

Abstract:

Depression is a leading cause of disability and a major contributor to the worldwide disease burden. Notably, lesbian, gay, bisexual, and transgender (LGBT) populations are at higher risk of depression than their heterosexual and cisgender counterparts. To date, little is known about the rates and predictors of depression among Thai LGBT populations. The purpose of this study was therefore to: 1) measure the prevalence of depression among a diverse sample of Thai LGBT adults and 2) determine the influence of minority stress variables (discrimination, victimization, internalized homophobia, and identity concealment), general stress (stress and loneliness), and coping strategies (problem-focused, avoidance, and seeking social support) on depression outcomes. This study was guided by the Minority Stress Model (MSM), which posits that elevated rates of mental health problems among LGBT populations stem from increased exposure to social stigma due to membership in a stigmatized minority group. Social stigma, including discrimination and violence, represents a unique source of stress for LGBT individuals and has a direct impact on mental health. This study was conducted as part of a larger descriptive study of the mental health of Thai LGBT adults. Standardized measures consistent with the MSM were selected and translated into Thai by a panel of LGBT experts using the forward-backward translation technique. The psychometric properties of the translated instruments were tested and found acceptable (Cronbach's alpha > .8 and Content Validity Index = 1). Study participants were recruited using convenience and snowball sampling. Self-administered survey data were collected via an online survey and in person at a leading Thai LGBT organization. Descriptive statistics and multivariate analyses using multiple linear regression models were conducted to analyze the study data. 
The mean age of the participants (n = 411) was 29.5 years (SD = 7.4). Participants were primarily male (90.5%), homosexual (79.3%), and cisgender (76.6%). The mean depression score of the participants was 9.46 (SD = 8.43), and forty-three percent reported clinically significant levels of depression as measured by the Beck Depression Inventory. In multivariate models, the combined influence of demographic, stress, coping, and minority stress variables explained 47.2% of the variance in depression scores (F(16,367) = 20.48, p < .001). Minority stressors independently associated with depression included discrimination (β = .43, p < .01), victimization (β = 1.53, p < .05), and identity concealment (β = -.54, p < .05). In addition, stress (β = .81, p < .001), a history of chronic disease (β = 1.20, p < .05), and coping strategies (problem-focused coping β = -1.88, p < .01; seeking social support β = -1.12, p < .05; avoidance coping β = 2.85, p < .001) predicted depression scores. The study outcomes emphasize that minority stressors uniquely contributed to depression levels among Thai LGBT participants over and above typical non-minority stressors. The findings have important implications for nursing practice and the development of intervention research.
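A minimal sketch of the multiple-linear-regression step, using ordinary least squares on simulated data: the predictor set is reduced to four variables, and the coefficients used to simulate the response are those reported above, so this illustrates the method rather than reanalyzing the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 411                                   # sample size from the study
# Hypothetical standardized predictors (the survey data are not public):
# discrimination, victimization, stress, avoidance coping
X = rng.normal(size=(n, 4))
beta_true = np.array([0.43, 1.53, 0.81, 2.85])   # coefficients reported above
y = 9.46 + X @ beta_true + rng.normal(scale=6.0, size=n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()
```

With n = 411 and only a few predictors, OLS recovers the simulated coefficients closely; the study's reported R² of .472 reflects its larger 16-predictor model.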

Keywords: depression, LGBT, minority stress, sexual and gender minority, Thailand

Procedia PDF Downloads 127
337 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing

Authors: Yohann R. J. Thomas, Sébastien Solan

Abstract:

Conventional battery technology involves assembling electrode/separator/electrode stacks by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are used only for the electrode process. These processes are suited to large-scale battery production and are well adapted to many application requirements. Nevertheless, demand is rising for easier, cost-efficient production modes and for flexible, custom-shaped, efficient small-sized batteries. Thin-film, printable batteries are one of the key areas of printed electronics. Within the framework of the European BASMATI project, we are investigating the feasibility of a new lithium-ion battery design: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Fully direct-printed batteries make it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting function), printed sensors (autonomous sensors), or RFID (communication function), on a common substrate to produce fully integrated, thin, and flexible new devices. To meet those specifications, a high-resolution printing process has been selected: aerosol jet printing. To fit the process parameters, we worked on nanomaterial formulations for current collectors and electrodes. In addition, an advanced printable polymer electrolyte is being developed for direct use in the printing process, in order to avoid the liquid-electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, and flash sintering was then applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment. 
Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were successfully printed; for anodes, LTO and graphite proved to be good candidates for the fully printed battery. The electrochemical performance of these materials was evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those of a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte has been developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks: first, synthesizing and formulating new specific metal-oxide-based nanomaterials; then producing a fully printed device and evaluating its electrochemical performance.

Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes

Procedia PDF Downloads 251
336 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics

Authors: Michael Wendland

Abstract:

This paper presents a framework for teaching a methodical design approach through a mixture of game-based learning, design challenges, and competitions as forms of direct assessment. In today's world, developing products is more complex than ever: conflicting goals of product cost and quality under limited time, as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by supporting users with a methodical framework. Due to the inherent complexity of these products, the number of resources involved, and the comprehensiveness of the design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. Yet for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. It is therefore necessary to teach these design approaches in a real-world setting, keep motivation high, and teach students to manage problems as they arise. This is achieved by using a game-based approach and a set of design challenges given to the students. To mimic industrial collaboration, they work in teams of up to six participants and are given the main development target of designing a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, within a constrained time frame and a limited maximum cost, the students are subjected to boundary conditions similar to those of the real world. They must follow the steps of the methodical approach by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing, and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams. 
The complete design process is accompanied by theoretical input via lectures, which the students immediately transfer to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen; for instance, designing the first concepts is supported by using Lego construction kits. After each challenge, mandatory online quizzes help to deepen the students' acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in person and involves all teams with their functional prototypes, which now compete against each other. Prizes are awarded for the best mechanical design, the most innovative approach, and the winner of the robotic contest. Each robot design is evaluated against the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the realism of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students, and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.

Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment

Procedia PDF Downloads 67
335 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at the community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and thereby further complicated the methods used to forecast short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Paired with a proper HEMS, accurate forecasting leads to higher energy savings and overall energy efficiency for the household. To study how COVID-19 has affected the accuracy of forecasting methods, the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) is evaluated on day-ahead electric load forecasting during the transition between the pre-pandemic and lockdown periods. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. Although LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called “resilient LGBM”, is also tested, incorporating a concept drift detection technique for time series analysis in order to evaluate its ability to improve the model’s accuracy during extreme events such as the COVID-19 lockdowns.
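The abstract does not specify which concept drift detection technique the resilient LGBM uses. One common choice for streams of forecast errors is the Page-Hinkley test, sketched below purely as an illustration (the `delta` and `lam` thresholds are tuning parameters, not values from the study):

```python
# A minimal Page-Hinkley drift detector over a stream of forecast errors.
# This is one common technique, shown for illustration only; the paper does
# not state which detector the resilient LGBM actually uses.
class PageHinkley:
    def __init__(self, delta=0.005, lam=2.0):
        self.delta, self.lam = delta, lam
        self.n, self.mean = 0, 0.0
        self.cum, self.cum_min = 0.0, 0.0

    def update(self, error: float) -> bool:
        """Feed one absolute forecast error; return True if drift is flagged."""
        self.n += 1
        self.mean += (error - self.mean) / self.n   # running mean of errors
        self.cum += error - self.mean - self.delta  # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lam   # drift when gap exceeds lam

# Stable errors followed by a sudden jump, as at a lockdown transition.
det = PageHinkley()
flags = [det.update(e) for e in [0.1] * 50 + [1.5] * 10]
```

When a flag is raised, a resilient model would typically retrain or re-weight recent data so that the new consumption regime dominates the forecast.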
The results for the LGBM and resilient LGBM are compared using standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance is evaluated on real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d’Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
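The RMSE metric used for the comparison is straightforward to compute; a minimal sketch on hypothetical hourly loads (the values are invented, not data from the study):

```python
import math

def rmse(actual, predicted):
    """Root Mean Squared Error between two equal-length series."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Hypothetical hourly loads (kWh) for one household: actual vs. day-ahead forecast.
baseline = rmse([0.8, 1.2, 0.5, 2.0], [0.9, 1.0, 0.6, 1.7])
```

Because RMSE squares each deviation, it penalizes the occasional large miss more heavily than a uniform small bias, which matters for volatile individual-household curves.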

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)
