Search results for: twin therapeutic approach
3117 Cooperative Robot Application in a Never Explored or an Abandoned Sub-Surface Mine
Authors: Michael K. O. Ayomoh, Oyindamola A. Omotuyi
Abstract:
Autonomous mobile robots deployed to explore or operate in a never-explored or abandoned sub-surface mine require extreme effectiveness in coordination and communication. In a bid to transmit information from the depth of the mine to the external surface in real time and amidst diverse physical, chemical and virtual impediments, the concept of unified cooperative robots is seen to be a proficient approach. This paper presents an effective [human → robot → task] coordination framework for effective exploration of an abandoned underground mine. The problem addressed in this research is the development of a globalized optimization model premised on time series differentiation and geometrical configurations for effective positioning of the two classes of robots in the cooperation, namely the outermost stationary master (OSM) robots and the innermost dynamic task (IDT) robots, for effective bi-directional signal transmission. In addition, the synchronization of a vision system and wireless communication system for both categories of robots, a fiber optic system for the OSM robots in cases of steeply sloped or vertical mine channels, and an autonomous battery recharging capability for the IDT robots further enhance the proposed concept. The OSM robots are the master robots, which are positioned at strategic locations starting from the mine's open surface down to its base, using a fiber-optic cable or a wireless communication medium, all subject to the identified mine geometrical configuration. The OSM robots are usually stationary and function by coordinating the transmission of signals from the IDT robots at the base of the mine to the surface, and in reverse order based on human decisions at the surface control station. The proposed scheme also presents an optimized number of robots required to form the cooperation in a bid to reduce overall operational cost and system complexity.
Keywords: sub-surface mine, wireless communication, outermost stationary master robots, inner-most dynamic robots, fiber optic
Procedia PDF Downloads 213
3116 Prevalence and Risk Factors of Metabolic Syndrome in Adults of Terai Region of Nepal
Authors: Birendra Kumar Jha, Mingma L. Sherpa, Binod Kumar Dahal
Abstract:
Background: Metabolic syndrome is emerging as a major public health concern in the world. Urbanization, surplus energy uptake, compounded by decreased physical activity, and increasing obesity are the major factors contributing to the epidemic of metabolic syndrome worldwide. However, the prevalence of metabolic syndrome and its risk factors are little studied in the Terai region of Nepal. The objectives of this research were to estimate the prevalence and to identify the risk factors of metabolic syndrome among adults in the Terai region of Nepal. Method: We used a community-based cross-sectional study design. A total of 225 adults (age: 18 to 80 years) were selected from three districts of the Terai region of Nepal using cluster sampling with a camp approach. IDF criteria (central obesity with any two of the following four factors: triglycerides ≥ 150 mg/dl or specific treatment for lipid abnormality, reduced HDL, raised blood pressure, and raised fasting plasma glucose or previously diagnosed type 2 diabetes) were used to assess metabolic syndrome. Interviews, physical and clinical examinations, and measurement of fasting blood glucose and lipid profile were conducted for all participants. A chi-square test and multivariable logistic regression were employed to explore the risk factors of metabolic syndrome. Result: The overall prevalence of metabolic syndrome was 70.7%. Hypertension, increased fasting blood sugar, increased triglycerides and decreased HDL were observed in 50.7%, 32.4%, 41.8% and 79.1% of the subjects, respectively. Socio-economic and behavioral risk factors significantly associated with metabolic syndrome were male gender (OR=2.56, 95% CI: 1.42-4.63; p=0.002), being in service or retired from service (OR=3.72, 95% CI: 1.72-8.03; p=0.001) and smoking (OR=4.10, 95% CI: 1.19-14.07; p=0.016). Conclusion: The higher prevalence of metabolic syndrome, along with the presence of behavioral risk factors, in the Terai region of Nepal likely suggests a lack of awareness and health promotion activities for metabolic syndrome and indicates the need to promote public health programs in this region to maintain quality of life.
Keywords: metabolic syndrome, Nepal, prevalence, risk factors, Terai
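The IDF decision rule summarized in the abstract is mechanical enough to express directly in code. The sketch below is a minimal illustration, not the study's analysis script; the ethnicity-specific waist-circumference cut-offs (≥ 90 cm for men, ≥ 80 cm for women, as published for South Asian populations) and the sex-specific HDL thresholds are assumptions drawn from the IDF definition rather than stated in the abstract.

```python
def has_metabolic_syndrome(sex, waist_cm, tg_mg_dl, hdl_mg_dl,
                           sbp, dbp, fpg_mg_dl,
                           on_lipid_treatment=False, on_bp_treatment=False,
                           known_type2_diabetes=False):
    """Apply the IDF rule: central obesity plus any two of four factors.

    Thresholds follow the published IDF definition (South Asian waist
    cut-offs assumed); illustrative only, not the study's code.
    """
    # Mandatory component: central obesity, sex-specific waist cut-off
    central_obesity = waist_cm >= (90 if sex == "male" else 80)
    if not central_obesity:
        return False

    # Any two of the following four factors
    raised_tg = tg_mg_dl >= 150 or on_lipid_treatment
    reduced_hdl = hdl_mg_dl < (40 if sex == "male" else 50)
    raised_bp = sbp >= 130 or dbp >= 85 or on_bp_treatment
    raised_fpg = fpg_mg_dl >= 100 or known_type2_diabetes

    return sum([raised_tg, reduced_hdl, raised_bp, raised_fpg]) >= 2


# Example: a male participant with central obesity, raised BP and low HDL
print(has_metabolic_syndrome("male", waist_cm=95, tg_mg_dl=120, hdl_mg_dl=35,
                             sbp=140, dbp=80, fpg_mg_dl=92))  # True
```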
Procedia PDF Downloads 149
3115 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming
Authors: Rui Li, Min Wen, Kim Bang Salling
Abstract:
For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize the predictive railway tamping activities for ballast track over a planning horizon of three to four years. The objective function minimizes the actual tamping machine costs. The research uses a simple dynamic model of the condition-based tamping process and a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiating the open track from the station sections. A Danish railway track between Odense and Fredericia, 42.6 km in length, is applied to the proposed maintenance model for time periods of three and four years. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, the total costs can be reduced significantly (by 50%) compared with the previous model, which is based on optimizing the number of tampings. The different maintenance strategies are discussed in the paper. The analysis of the results obtained from the model also shows that a longer period of predictive tamping planning gives more optimal scheduling of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance
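A compact way to see what a 0-1 formulation of condition-based tamping can look like is sketched below with PuLP. It is not the authors' model: the linear degradation rates, reset values, thresholds and costs are invented, PuLP is only an assumed modeling tool, and the recovery dependency, budgets and open-track/station distinction are omitted. The sketch keeps only the core idea that each section must be tamped before its longitudinal-level standard deviation would cross its speed-dependent threshold, at minimum machine cost.

```python
# pip install pulp  -- illustrative 0-1 tamping model, not the paper's formulation
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

sections = ["S1", "S2", "S3"]                 # hypothetical track sections
periods = range(8)                            # planning periods (e.g. quarters)
rate = {"S1": 0.30, "S2": 0.45, "S3": 0.25}   # SD growth per period (mm), assumed
sd0 = {"S1": 1.00, "S2": 1.20, "S3": 0.75}    # SD right after tamping (mm), assumed
limit = {"S1": 2.20, "S2": 2.30, "S3": 2.00}  # speed-dependent threshold (mm), assumed
cost = {"S1": 1.0, "S2": 1.5, "S3": 1.0}      # relative tamping cost per visit

# Max number of periods a section may run after tamping before its SD would
# exceed the threshold (assumes each section starts freshly tamped).
K = {s: int((limit[s] - sd0[s]) / rate[s] + 1e-9) for s in sections}

prob = LpProblem("tamping_schedule", LpMinimize)
x = LpVariable.dicts("tamp", [(s, t) for s in sections for t in periods], cat=LpBinary)

# Objective: minimise total tamping cost over the horizon
prob += lpSum(cost[s] * x[(s, t)] for s in sections for t in periods)

# Every rolling window of K[s]+1 periods must contain at least one tamping
for s in sections:
    for t0 in range(len(periods) - K[s]):
        prob += lpSum(x[(s, t)] for t in range(t0, t0 + K[s] + 1)) >= 1

prob.solve()
schedule = [(s, t) for s in sections for t in periods if value(x[(s, t)]) > 0.5]
print("tamping visits:", schedule, "| total cost:", value(prob.objective))
```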
Procedia PDF Downloads 443
3114 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images
Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor
Abstract:
Background: Plantar pressure distribution measurement has been used for a long time to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in identifying patients with foot disorders. Significance of the study: This study proposes a neural-network-based flat foot classification methodology using static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. Then all subjects (with and without pes planus) were evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized and the network was trained on the plantar pressure mapping images. Findings: From a total of 895 image data, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was assessed, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14%, and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and doctors to predict the classification of pes planus and prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared for higher accuracy.
Keywords: foot disorder, machine learning, neural network, pes planus
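For readers unfamiliar with how such a classifier is assembled, the sketch below shows a minimal binary CNN for pressure-map images in Keras. The input resolution, layer sizes and training settings are assumptions for illustration; the study's actual architecture and its 79.14%/72.09% accuracy figures come from its own model, not from this code.

```python
# Minimal illustrative CNN for binary pes-planus classification (not the paper's model)
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pes_planus_cnn(input_shape=(128, 64, 1)):
    """Small CNN for single-channel plantar-pressure maps (input shape assumed)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),   # pes planus vs. normal arch
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_pes_planus_cnn()
model.summary()
# model.fit(x_train, y_train, validation_split=0.2, epochs=30, batch_size=32)
```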
Procedia PDF Downloads 360
3113 Violent Conflict and the Protection of Women from Sex and Gender-Based Violence: A Third World Feminist Critique of the United Nations Women, Peace, and Security Agenda
Authors: Seember Susan Aondoakura
Abstract:
This paper examines the international legal framework established to address the challenges women and girls experience in situations of violent conflict. The United Nations (UN) women, peace, and security agenda (hereafter WPS agenda, the Agenda) aspire to make wars safer for women. It recognizes women's agency in armed conflict and their victimization and formulates measures for their protection. The Agenda also acknowledges women's participation in conflict transformation and post-conflict reconstruction. It also calls for the involvement of women in conflict transformation, encourages the protection of women from sex and gender-based violence (SGBV), and provides relief and recovery from conflict-related SGBV. Using Third World Critical Feminist Theory, this paper argues that the WPS agenda overly focus on the protection of women from SGBV occurring in the less developed and conflict-ridden states in the global south, obscures the complicity of western states and economies to the problem, and silences the privileges that such states derive from war economies that continue to fuel conflict. This protectionist approach of the UN also obliterates other equally pressing problems in need of attention, like the high rates of economic degradation in conflict-ravaged societies of the global south. Prioritising protection also 'others' the problem, obliterating any sense of interconnections across geographical locations and situating women in the less developed economies of the global south as the victims and their men as the perpetrators. Prioritising protection ultimately situates western societies as saviours of Third World women with no recourse to their role in engendering and sustaining war. The paper demonstrates that this saviour mentality obliterates chances of any meaningful coalition between the local and the international in framing and addressing the issue, as solutions are formulated from a specific lens—the white hegemonic lens.Keywords: conflict, protection, security, SGBV
Procedia PDF Downloads 96
3112 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL
Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson
Abstract:
The invention and development of the Polymerase Chain Reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics. There is an urgent need to optimise their performance of those devices while reducing the total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous flow (CF) PCR devices with serpentine-channel structure, which enables the trade-offs between competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach accounting for the geometrical features of a CF μPCR device by performing a series of simulations at a relatively small number of Design of Experiments (DoE) points, with the use of COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces created using the polyharmonic splines and neural networks. After creating the respective response surfaces, genetic algorithm, and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate that there is the possibility of improving the DNA efficiency by ∼2% in one PCR cycle when doubling the width of the microchannel to 400 μm while maintaining the height at the value of the original design (50μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto Front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken throughout thermal cycling in such devices.Keywords: PCR, optimisation, microfluidics, COMSOL
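The surrogate step described above (fit a response surface to a handful of CFD runs, then search it with a global optimiser) can be illustrated compactly. The sketch below uses SciPy's polyharmonic thin-plate-spline RBF interpolator as the response surface and differential evolution as a stand-in for the genetic algorithm; the DoE points, objective values and design-variable bounds are placeholders, not the COMSOL results.

```python
# Surrogate-based optimisation sketch (requires SciPy >= 1.7 for RBFInterpolator)
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

# Hypothetical DoE: channel width (um) and height (um) against a single scalar
# objective (e.g. a weighted DNA-efficiency / pressure-drop trade-off).
doe_points = np.array([[200, 50], [300, 50], [400, 50],
                       [200, 75], [300, 75], [400, 75]], dtype=float)
objective_at_doe = np.array([0.42, 0.38, 0.35, 0.44, 0.40, 0.37])  # placeholder values

# Thin-plate spline is one of the polyharmonic kernels mentioned in the abstract
surrogate = RBFInterpolator(doe_points, objective_at_doe, kernel="thin_plate_spline")

def surrogate_objective(x):
    # RBFInterpolator expects a 2-D array of query points
    return float(surrogate(np.atleast_2d(x))[0])

bounds = [(200, 400), (50, 75)]   # design-variable ranges (um), assumed
result = differential_evolution(surrogate_objective, bounds, seed=0)
print("surrogate optimum (width, height):", result.x, "value:", result.fun)
```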
Procedia PDF Downloads 161
3111 Exploring Attachment Mechanisms of Sulfate-Reducing Bacteria Biofilm to X52 Carbon Steel and Effective Mitigation Through Moringa Oleifera Extract
Authors: Hadjer Didouh, Mohammed Hadj Melliani, Izzeddine Sameut Bouhaik
Abstract:
Corrosion is a serious problem in industrial installations and metallic transport pipes. Corrosion is an interfacial process controlled by several parameters, and the presence of microorganisms affects its kinetics. This type of corrosion is often referred to as bio-corrosion or corrosion influenced by microorganisms (MIC). The action of a microorganism or a bacterium is carried out through the formation of a biofilm following its attachment to the metal surface. The formation of the biofilm isolates the metal surface from its environment and allows the bacteria to control the parameters of the metal/bacteria interface. Biofilm formation by sulfate-reducing bacteria (SRB) on X52 steel poses substantial challenges in the Algerian oil and gas industry (SONATRACH). This research delves into the complex attachment mechanisms employed by SRB biofilms on X52 carbon steel and investigates strategies for effective mitigation using biocides. The exploration commences by elucidating the underlying mechanisms facilitating SRB biofilm adhesion to X52 carbon steel, considering factors such as surface morphology, electrostatic interactions, and microbial extracellular substances. Advanced microscopy and spectroscopic techniques provide support for characterizing the attachment processes, laying the foundation for targeted mitigation strategies. The use of 100 ppm of Moringa oleifera extract is examined as a promising biocide to control and prevent SRB biofilm formation on X52 carbon steel surfaces. The green extract undergoes evaluation for its effectiveness in disrupting biofilm development while ensuring the integrity of the steel substrate. Systematic analysis is conducted on the biocide's impact on the biofilm's structural integrity, microbial viability, and overall attachment strength. This two-pronged investigation aims to deepen our comprehension of SRB biofilm dynamics and contribute to the development of effective strategies for mitigating its impact on X52 carbon steel.
Keywords: bio-corrosion, biofilm, attachment, metal/bacteria interface
Procedia PDF Downloads 23
3110 The Triad Experience: Benefits and Drawbacks of the Paired Placement of Student Teachers in Physical Education
Authors: Todd Pennington, Carol Wilkinson, Keven Prusak
Abstract:
Traditional models of student teaching practices typically involve the placement of a student teacher with an experienced mentor teacher. However, due to the ever-decreasing number of quality placements, an alternative triad approach is the paired placement of student teachers with one mentor teacher in a community of practice. This study examined the paired-placement of student teachers in physical education to determine the benefits and drawbacks after a 14-week student teaching experience. PETE students (N = 22) at a university in the United States were assigned to work in a triad with a student teaching partner and a mentor teacher, making up eleven triads for the semester. The one exception was a pair that worked for seven weeks at an elementary school and then for seven weeks at a junior high school, thus having two mentor teachers and participating in two triads. A total of 12 mentor teachers participated in the study. All student teachers and mentor teachers volunteered and agreed to participate. The student teaching experience was structured so that students engaged in: (a) individual teaching (one teaching the lesson with the other observing), (b) co-planning, and (c) peer coaching. All students and mentor teachers were interviewed at the conclusion of the experience. Using interview data, field notes, and email response data, the qualitative data was analyzed using the constant comparative method. The benefits of the paired placement experience emerged into three categories (a) quality feedback, (b) support, and (c) collaboration. The drawbacks emerged into four categories (a) unrealistic experience, (b) laziness in preparation, (c) lack of quality feedback, and (d) personality mismatch. Recommendations include: providing in-service training prior to student teaching to optimize the triad experience, ongoing seminars throughout the experience specifically designed for triads, and a hybrid model of paired placement for the first half of student teaching followed by solo student teaching for the second half of the experience.Keywords: community of practice, paired placement, physical education, student teaching
Procedia PDF Downloads 402
3109 Assessing the Benefits of Super Depo Sutorejo as a Model of integration of Waste Pickers in a Sustainable City Waste Management
Authors: Yohanes Kambaru Windi, Loetfia Dwi Rahariyani, Dyah Wijayanti, Eko Rustamaji
Abstract:
Surabaya, the second largest city in Indonesia, has been struggling for years with waste production and its management. Nearly 11,000 tons of waste are generated daily by domestic, commercial and industrial areas. It was estimated that approximately 1,300 tons of waste overflowed the Benowo Landfill daily in 2013, and landfill operation was projected to become critical in 2015. The Super Depo Sutorejo (SDS) is a pilot project on waste management launched by the government of Surabaya in March 2013. The project aims to reduce the amount of waste dumped in the landfill by sorting the recyclable and organic waste for composting, employing waste pickers to sort the waste before it is transported to the landfill. This study is intended to assess the capacity of SDS to process and reduce waste and its complementary benefits. It also reviews the benefits of the project to the waste pickers in terms of their satisfaction with the job. Waste processing data sheets were used to assess the difference between input and output waste. A survey was distributed to 30 waste pickers, and interviews were conducted for further insight on particular issues. The analysis showed that SDS is able to reduce waste by up to 50% before it is dumped in the final disposal area. The cost-benefit analysis, using a cost differential calculation, revealed that the economic benefit is considerably low, but composting may provide tangible benefits for maintaining the city's parks. Waste pickers are mostly satisfied with their job (i.e., salary, health coverage, job security) and with the services and facilities available in SDS, and they enjoy a rewarding social life within the project. It is concluded that SDS is an effective and efficient model for sustainable waste management that can reliably be developed in developing countries. It is a strategic approach to empower and open up working opportunities for the poor urban community and to prolong the operation of landfills.
Keywords: cost-benefits, integration, satisfaction, waste management
Procedia PDF Downloads 476
3108 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network
Authors: Abdulaziz Alsadhan, Naveed Khan
Abstract:
In recent years, intrusions on computer networks have been the major security threat. Hence, it is important to impede such intrusions. The hindrance of such intrusions entirely relies on their detection, which is the primary concern of any security tool like an Intrusion Detection System (IDS). Therefore, it is imperative to accurately detect network attacks. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing false positives. The existing intrusion detection techniques have the limitation of using the raw data set for classification. The classifier may get jumbled due to redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform raw features into a principal feature space and select the features based on their sensitivity. Eigenvalues can be used to determine the sensitivity. To further refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is used to perform classification. For classification purposes, Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms the existing approaches and has the capability to minimize the number of features and maximize the detection rates.
Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP)
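As a concrete illustration of the feature-transformation-plus-classification chain described above, the sketch below wires PCA into an SVM with scikit-learn and keeps the components carrying most of the variance. It is a generic sketch on synthetic data standing in for KDD'99, and it does not reproduce the paper's LDA/LBP steps or its PSO-driven feature selection.

```python
# Generic PCA -> SVM intrusion-classification sketch (synthetic stand-in for KDD'99)
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "network traffic" records: 41 features, binary normal/attack label
X, y = make_classification(n_samples=2000, n_features=41, n_informative=12,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),          # put raw features on a common scale
    ("pca", PCA(n_components=0.95)),      # keep components explaining 95% of variance
    ("svm", SVC(kernel="rbf", C=10.0)),   # classify in the reduced feature space
])
model.fit(X_train, y_train)
print("kept components:", model.named_steps["pca"].n_components_)
print("detection accuracy:", model.score(X_test, y_test))
```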
Procedia PDF Downloads 367
3107 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kumar Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can be from different modalities like visible camera & IR thermal imager. While visible images are captured by reflected radiations in the visible spectrum, the thermal images are formed from thermal radiation (infrared) that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, some image fusion algorithms based upon multi-scale transform (MST) and region-based selection rule with consistency verification have been proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB along with a comparative analysis to decide the optimum number of levels for MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion algorithm approaches, we observe several challenges from the popular image fusion methods. While high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also make it hard to become deployed in systems and applications that require a real-time operation, high flexibility, and low computation ability. So, the methods presented in this paper offer good results with minimum time complexity.Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform
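One common MST instance of the fusion scheme described above is a discrete wavelet transform with an averaging rule for the approximation band and a maximum-absolute-value rule for the detail bands. The sketch below, using PyWavelets, illustrates that idea only; the wavelet, decomposition level and random test images are assumptions, and the paper's region-based selection with consistency verification is not reproduced here.

```python
# Simplified wavelet-domain IR/visible fusion (average approximations, max-abs details)
import numpy as np
import pywt

def fuse_wavelet(visible, infrared, wavelet="db2", level=3):
    """Fuse two registered, same-size grayscale images given as float arrays."""
    c_vis = pywt.wavedec2(visible, wavelet, level=level)
    c_ir = pywt.wavedec2(infrared, wavelet, level=level)

    fused = [(c_vis[0] + c_ir[0]) / 2.0]          # approximation band: average
    for (hv, vv, dv), (hi, vi, di) in zip(c_vis[1:], c_ir[1:]):
        # detail bands: keep the coefficient with the larger magnitude
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in ((hv, hi), (vv, vi), (dv, di))))
    return pywt.waverec2(fused, wavelet)

# Toy example with random "images"; real use would load registered VI/IR frames
rng = np.random.default_rng(0)
vis = rng.random((128, 128))
ir = rng.random((128, 128))
print("fused image shape:", fuse_wavelet(vis, ir).shape)
```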
Procedia PDF Downloads 115
3106 Existential Suffering in the Daily Lives of Those Living with Palliative Care Needs Arising from Chronic Obstructive Pulmonary Disease
Authors: Louise Elizabeth Bolton
Abstract:
Statement of the problem: There are an estimated 328 million cases of COPD worldwide. It is likely to become the third biggest cause of death by 2030. The impact of living with palliative care needs arising from COPD disrupts an individual’s existential situation. Understandings of individuals' existential situations within COPD are limited within the research literature and are rarely addressed within clinical practice, yet existential suffering has been linked to poor health-related quality of life for those living with other chronic conditions. The purpose of this integrative review is to provide a synthesis of existing evidence on existential suffering for those living with palliative care needs arising from COPD. Methods: This is an integrative review undertaken in accordance with PRISMA guidelines. Nine electronic databases were searched from April 2019 to January 2021. Thirty-five empirical research papers of both qualitative and quantitative methodologies, alongside systematic literature reviews, were included. Data analysis was undertaken using an integrative thematic analysis approach. Findings: Identified themes of existential suffering when living with palliative care needs arising from COPD are as follows: Liminality, Lamented Life, Loss of Personal Liberty, Life Meaning and Existential isolation. The absence of life meaning and purpose was of most importance to patients. Conclusion and Significance: This integrative review provides a synthesis of international evidence upon the presence of existential suffering. It is present and of significant impact within the daily lives of those living with palliative care needs arising from COPD. The absence of life meaning has the most significant impact, requiring further exploration of both its physical and psychological impact. Rediscovery of life meaning diminishes feelings of worthlessness and hopelessness in daily life and facilitates feelings of inner peace. For those with COPD living with such a relentless symptom burden, a positive existential situation is desirable.Keywords: palliative care, COPD, existential suffering, end of life care
Procedia PDF Downloads 135
3105 Passively Q-Switched 914 nm Microchip Laser for LIDAR Systems
Authors: Marco Naegele, Klaus Stoppel, Thomas Dekorsy
Abstract:
Passively Q-switched microchip lasers enable the great potential for sophisticated LiDAR systems due to their compact overall system design, excellent beam quality, and scalable pulse energies. However, many near-infrared solid-state lasers show emitting wavelengths > 1000 nm, which are not compatible with state-of-the-art silicon detectors. Here we demonstrate a passively Q-switched microchip laser operating at 914 nm. The microchip laser consists of a 3 mm long Nd:YVO₄ crystal as a gain medium, while Cr⁴⁺:YAG with an initial transmission of 98% is used as a saturable absorber. Quasi-continuous pumping enables single pulse operation, and low duty cycles ensure low overall heat generation and power consumption. Thus, thermally induced instabilities are minimized, and operation without active cooling is possible while ambient temperature changes are compensated by adjustment of the pump laser current only. Single-emitter diode pumping at 808 nm leads to a compact overall system design and robust setup. Utilization of a microchip cavity approach ensures single-longitudinal mode operation with spectral bandwidths in the picometer regime and results in short laser pulses with pulse durations below 10 ns. Beam quality measurements reveal an almost diffraction-limited beam and enable conclusions concerning the thermal lens, which is essential to stabilize the plane-plane resonator. A 7% output coupler transmissivity is used to generate pulses with energies in the microjoule regime and peak powers of more than 600 W. Long-term pulse duration, pulse energy, central wavelength, and spectral bandwidth measurements emphasize the excellent system stability and facilitate the utilization of this laser in the context of a LiDAR system.Keywords: diode-pumping, LiDAR system, microchip laser, Nd:YVO4 laser, passively Q-switched
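The quoted pulse parameters can be cross-checked with the usual peak-power estimate; taking a pulse energy of 6 µJ (an assumed value within the stated microjoule regime) and the roughly 10 ns upper bound on pulse duration reproduces the order of magnitude reported:

```latex
P_{\text{peak}} \approx \frac{E_{\text{pulse}}}{\tau_{\text{pulse}}}
  = \frac{6\,\mu\text{J}}{10\,\text{ns}}
  = \frac{6\times10^{-6}\,\text{J}}{10\times10^{-9}\,\text{s}}
  = 600\,\text{W}
```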
Procedia PDF Downloads 129
3104 Ministers of Parliament and Their Official Web Sites; New Media Tool of Political Communication
Authors: Wijayanada Rupasinghe, A. H. Dinithi Jayasekara
Abstract:
In a modern democracy, new media can be used by governments to involve citizens in decision-making, and by civil society to engage people in specific issues. However, new media can also be used to broaden political participation by helping citizens to communicate with their representatives and with each other. Arguably, this political communication is most important during election campaigns, when political parties and candidates seek to mobilize citizens and persuade them to vote for a given party or candidate. New media must be used by parliaments, parliamentarians, governments and political parties, as they are highly effective tools to involve and inform citizens in public policymaking and in the formation of governments. But all these groups must develop strategies to deal with a wide array of both positive and negative effects of these rapidly growing media. New media have begun to take precedence over other communication outlets, in part because of their heightened accessibility and usability. Using a personal website can empower the public in a way that is far faster, cheaper and more pervasive than other forms of communication. Such websites encourage pluralism, reach young people more than other media, and encourage greater participation, accountability and transparency. This research discusses the impact politicians' personal websites have on their overall electability and likability and explores whether website integration is an essential campaign tactic on both the local and national level. The research examined the impact that having a personal website has on the way constituents view politicians, and how politicians can use their websites in the most effective fashion and incorporate these new media outlets as essential campaign tools and tactics. A mixed-method approach using content analysis was adopted, with thirty websites of Sri Lankan politicians selected for content analysis. The research revealed that politicians' new media usage significantly influenced and enriched the experience an individual has with the public figure.
Keywords: election campaign ministers, new media, parliament, politicians websites
Procedia PDF Downloads 368
3103 Impact of Combined Heat and Power (CHP) Generation Technology on Distribution Network Development
Authors: Sreto Boljevic
Abstract:
In the absence of considerable investment in electricity generation, transmission and distribution network (DN) capacity, the demand for electrical energy will quickly strain the capacity of the existing electrical power network. The anticipated growth and proliferation of electric vehicles (EVs) and heat pumps (HPs) make it likely that the additional load from EV charging and HP operation will require capital investment in the DN. While an area-wide implementation of EVs and HPs will contribute to the decarbonization of the energy system, they represent new challenges for the existing low-voltage (LV) network. Distributed energy resources (DER), operating both as part of the DN and in off-network mode, have been offered as a means to meet growing electricity demand while maintaining and ever-improving DN reliability, resiliency and power quality. DN planning has traditionally been done by forecasting future growth in demand and estimating the peak load that the network should meet. However, new problems are arising. These problems are associated with a high degree of proliferation of EVs and HPs as loads imposed on the DN, in addition to the promotion of electricity generation from renewable energy sources (RES). High distributed generation (DG) penetration and a large increase in load proliferation at low-voltage DNs may have numerous impacts on DNs that create issues including energy losses, voltage control, fault levels, reliability, resiliency and power quality. To mitigate the negative impacts and at the same time enhance the positive impacts of the new operational state of the DN, CHP system integration can be seen as the best action to postpone or reduce the capital investment needed to facilitate the promotion and maximize the benefits of EV, HP and RES integration in the low-voltage DN. The aim of this paper is to generate an algorithm using an analytical approach. Implementation of the algorithm will provide a way for optimal placement of the CHP system in the DN in order to maximize the integration of RES and the increased proliferation of EVs and HPs.
Keywords: combined heat & power (CHP), distribution networks, EVs, HPs, RES
Procedia PDF Downloads 202
3102 Climate Change and Urban Flooding: The Need to Rethinking Urban Flood Management through Resilience
Authors: Suresh Hettiarachchi, Conrad Wasko, Ashish Sharma
Abstract:
The ever changing and expanding urban landscape increases the stress on urban systems to support and maintain safe and functional living spaces. Flooding presents one of the more serious threats to this safety, putting a larger number of people in harm’s way in congested urban settings. Climate change is adding to this stress by creating a dichotomy in the urban flood response. On the one hand, climate change is causing storms to intensify, resulting in more destructive, rarer floods, while on the other hand, longer dry periods are decreasing the severity of more frequent, less intense floods. This variability is creating a need to be more agile and innovative in how we design for and manage urban flooding. Here, we argue that to cope with this challenge climate change brings, we need to move towards urban flood management through resilience rather than flood prevention. We also argue that dealing with the larger variation in flood response to climate change means that we need to look at flooding from all aspects rather than the single-dimensional focus of flood depths and extents. In essence, we need to rethink how we manage flooding in the urban space. This change in our thought process and approach to flood management requires a practical way to assess and quantify resilience that is built into the urban landscape so that informed decision-making can support the required changes in planning and infrastructure design. Towards that end, we propose a Simple Urban Flood Resilience Index (SUFRI) based on a robust definition of resilience as a tool to assess flood resilience. The application of a simple resilience index such as the SUFRI can provide a practical tool that considers urban flood management in a multi-dimensional way and can present solutions that were not previously considered. When such an index is grounded on a clear and relevant definition of resilience, it can be a reliable and defensible way to assess and assist the process of adapting to the increasing challenges in urban flood management with climate change.Keywords: urban flood resilience, climate change, flood management, flood modelling
Procedia PDF Downloads 49
3101 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning
Authors: Jean Berger, Mohamed Barkaoui
Abstract:
Discrete search path planning in time-constrained uncertain environment relying upon imperfect sensors is known to be hard, and current problem-solving techniques proposed so far to compute near real-time efficient path plans are mainly bounded to provide a few move solutions. A new information-theoretic –based open-loop decision model explicitly incorporating false alarm sensor readings, to solve a single agent military logistics search-and-delivery path planning problem with anticipated feedback is presented. The decision model consists in minimizing expected entropy considering anticipated possible observation outcomes over a given time horizon. The model captures uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target location. Feedback information resulting from possible sensor observations outcomes along the projected path plan is exploited to update anticipated unit target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical realistic problem instances. Given the run-time performance of the algorithm, natural extension to a closed-loop environment to progressively integrate real visit outcomes on a rolling time horizon can be easily envisioned. Computational results show the value of the approach in comparison to alternate heuristics.Keywords: search path planning, false alarm, search-and-delivery, entropy, genetic algorithm
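The belief-update step described above (revising the target-occupancy distribution after an imperfect observation that may be a false alarm) has a standard Bayesian form, sketched below for a grid of cells. The detection and false-alarm probabilities are illustrative values; this is a generic formulation, not the paper's compact update or its genetic algorithm.

```python
# Bayesian occupancy-belief update with imperfect sensing (illustrative values)
import numpy as np

def update_belief(belief, cell, detected, p_d=0.8, p_fa=0.1):
    """Posterior over the target's location after observing one cell.

    p_d : P(sensor reports detection | target is in the observed cell)
    p_fa: P(sensor reports detection | target is not in the observed cell)
    """
    likelihood = np.full(belief.shape, p_fa if detected else 1.0 - p_fa)
    likelihood[cell] = p_d if detected else 1.0 - p_d
    posterior = likelihood * belief
    return posterior / posterior.sum()

def entropy(belief):
    """Shannon entropy: a measure of uncertainty about the target's location."""
    p = belief[belief > 0]
    return float(-(p * np.log2(p)).sum())

belief = np.full(9, 1.0 / 9)                              # uniform prior over 9 cells
belief = update_belief(belief, cell=4, detected=False)    # search cell 4, no detection
print("entropy after observation:", round(entropy(belief), 3))
```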
Procedia PDF Downloads 360
3100 Rationale of Eye Pupillary Diameter for the UV Protection for Sunglasses
Authors: Liliane Ventura, Mauro Masili
Abstract:
Ultraviolet (UV) protection is critical for sunglasses, and mydriasis, as well as miosis, are relevant parameters to consider. The literature reports that UV protection is critical for sunglasses because lenses without adequate UV protection can cause the opposite of the intended effect, owing to the greater dilation of the pupil when wearing sunglasses. However, the scientific literature does not properly quantify this rationale. The reasoning may be misleading by ignoring not only the inherent absorption of UV by the sunglass lens materials but also the absorption by the anterior structures of the eye, i.e., the cornea and aqueous humor. Therefore, we estimate the pupil diameter and calculate the solar ultraviolet influx through the pupil of the human eye for two situations: an individual wearing sunglasses and the eyes free of shade. We quantify the dilation of the pupil as a function of the luminance of the surroundings. A typical boundary condition for the calculation is an individual in an upright position wearing sunglasses, staring at the horizon as if the sun were at the zenith. The calculation was done for the latitude of the geographic center of the state of São Paulo (-22°04'11.8'' S) from sunrise to sunset. A model from the literature is used for determining the sky luminance. The initial approach is to obtain pupil diameter as a function of luminance. Therefore, as a preliminary result, we calculate the pupil diameter as a function of the time of day, as the sun moves, for a particular day of the year. The working range for luminance is daylight (10⁻⁴ – 10⁵ cd/m²). We are able to show how the pupil adjusts to brightness changes (~2 – ~7.8 mm). At noon, with the sun higher, the direct incidence of light on the pupil is lower compared with mid-morning or mid-afternoon, when the sun strikes more directly into the eye. Thus, the pupil is larger at midday. As expected, the two situations have opposite behaviors, since higher luminance implies a smaller pupil. With these results, we can progress in the short term to obtain the transmittance spectra of sunglasses samples and quantify how the light attenuation provided by the spectacles affects pupil diameter.
Keywords: sunglasses, UV protection, pupil diameter, solar irradiance, luminance
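The luminance-to-pupil-diameter step can be made concrete with one classical model from the literature, the Moon and Spencer (1944) formula. The abstract does not name the model it actually uses, so treating this formula as the one in play is an assumption, although it reproduces the quoted range of roughly 2 mm to 7.8 mm over the stated daylight luminance span.

```python
# Pupil diameter from adapting luminance, Moon & Spencer (1944) formula
import numpy as np

def pupil_diameter_mm(luminance_cd_m2):
    """d = 4.9 - 3*tanh(0.4*log10(L)), with L in cd/m^2."""
    return 4.9 - 3.0 * np.tanh(0.4 * np.log10(luminance_cd_m2))

for L in (1e-4, 1e0, 1e3, 1e5):   # working range quoted in the abstract
    print(f"L = {L:8.0e} cd/m^2  ->  pupil diameter = {pupil_diameter_mm(L):.2f} mm")
```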
Procedia PDF Downloads 81
3099 Non-Cognitive Skills Associated with Learning in a Serious Gaming Environment: A Pretest-Posttest Experimental Design
Authors: Tanja Kreitenweis
Abstract:
Lifelong learning is increasingly seen as essential for coping with the rapidly changing work environment. To this end, serious games can provide convenient and straightforward access to complex knowledge for all age groups. However, learning achievements depend largely on a learner’s non-cognitive skill disposition (e.g., motivation, self-belief, playfulness, and openness). With the aim of combining the fields of serious games and non-cognitive skills, this research focuses in particular on the use of a business simulation, which conveys change management insights. Business simulations are a subset of serious games and are perceived as a non-traditional learning method. The presented objectives of this work are versatile: (1) developing a scale, which measures learners’ knowledge and skills level before and after a business simulation was played, (2) investigating the influence of non-cognitive skills on learning in this business simulation environment and (3) exploring the moderating role of team preference in this type of learning setting. First, expert interviews have been conducted to develop an appropriate measure for learners’ skills and knowledge assessment. A pretest-posttest experimental design with German management students was implemented to approach the remaining objectives. By using the newly developed, reliable measure, it was found that students’ skills and knowledge state were higher after the simulation had been played, compared to before. A hierarchical regression analysis revealed two positive predictors for this outcome: motivation and self-esteem. Unexpectedly, playfulness had a negative impact. Team preference strengthened the link between grit and playfulness, respectively, and learners’ skills and knowledge state after completing the business simulation. Overall, the data underlined the potential of business simulations to improve learners’ skills and knowledge state. In addition, motivational factors were found as predictors for benefitting most from the applied business simulation. Recommendations are provided for how pedagogues can use these findings.Keywords: business simulations, change management, (experiential) learning, non-cognitive skills, serious games
Procedia PDF Downloads 108
3098 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction
Authors: Khaled Barkaoui
Abstract:
Research on second-language (L2) learning tends to focus on comparing students with different levels of proficiency at one point in time. However, to understand L2 development, we need more longitudinal research. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners when writing on different tasks before and after a period L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N= 276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency and some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrase complexity) and lexical density, sophistication, and bundles, but not accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not syntactic complexity. Finally, fluency, frequency of errors, but not accuracy ratings, syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrase complexity, structural variety) and lexical complexity (lexical density, variation, and sophistication) exhibited significant changes after instruction, particularly for the independent task. We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.Keywords: second language writing, Fluency, accuracy, complexity, longitudinal
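Several of the fluency and complexity indices named above are simple enough to compute directly from a tokenized essay; the sketch below illustrates a few of them (words written, errors per 100 words, mean length of sentence, and lexical variation as a type-token ratio). It is a toy illustration only, not the coding scheme or software used in the study, and the error count is assumed to come from human rating.

```python
# Toy computation of a few CAF indices from an essay (not the study's instruments)
import re

def caf_indices(text, error_count=0):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "fluency_words_written": len(words),
        "accuracy_errors_per_100_words": 100 * error_count / max(len(words), 1),
        "global_complexity_MLS": len(words) / max(len(sentences), 1),   # mean length of sentence
        "lexical_variation_TTR": len(set(words)) / max(len(words), 1),  # type-token ratio
    }

essay = ("Learning a second language takes time. "
         "Writers gain fluency, accuracy and complexity at different rates.")
print(caf_indices(essay, error_count=1))
```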
Procedia PDF Downloads 153
3097 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning
Authors: Abdullah Bal
Abstract:
This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), in this paper, a set of non-class samples, lying outside the boundary of the positive (target) class formed with the limited training data, has been constructed synthetically. The tunnel-like decision boundary around the upper and lower borders of the target class samples has been designed using statistical properties of the feature vectors belonging to the training data. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines, referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a favorable family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with this type of kernel leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed, inspired by ensemble learning (EL), to weight and then select the sub-classifiers using the discriminability and diversity measured by eigenvalue ratios. The eigenvalue ratios have been assessed based on their regions on the FKT subspaces. The comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of our techniques, especially in the case of small sample size (SSS) conditions.
Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification
Procedia PDF Downloads 21
3096 HPSEC Application as a New Indicator of Nitrification Occurrence in Water Distribution Systems
Authors: Sina Moradi, Sanly Liu, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Soha Habibi, Rose Amal
Abstract:
In recent years, chloramine has been widely used for both primary and secondary disinfection. However, a major concern with the use of chloramine as a secondary disinfectant is the decay of chloramine and nitrification occurrence. The management of chloramine decay and the prevention of nitrification are critical for water utilities managing chloraminated drinking water distribution systems. The detection and monitoring of nitrification episodes is usually carried out through measuring certain water quality parameters, which are commonly referred to as indicators of nitrification. The approach taken in this study was to collect water samples from different sites throughout a drinking water distribution systems, Tailem Bend – Keith (TBK) in South Australia, and analyse the samples by high performance size exclusion chromatography (HPSEC). We investigated potential association between the water qualities from HPSEC analysis with chloramine decay and/or nitrification occurrence. MATLAB 8.4 was used for data processing of HPSEC data and chloramine decay. An increase in the absorbance signal of HPSEC profiles at λ=230 nm between apparent molecular weights of 200 to 1000 Da was observed at sampling sites that experienced rapid chloramine decay and nitrification while its absorbance signal of HPSEC profiles at λ=254 nm decreased. An increase in absorbance at λ=230 nm and AMW < 500 Da was detected for Raukkan CT (R.C.T), a location that experienced nitrification and had significantly lower chloramine residual (<0.1 mg/L). This increase in absorbance was not detected in other sites that did not experience nitrification. Moreover, the UV absorbance at 254 nm of the HPSEC spectra was lower at R.C.T. than other sites. In this study, a chloramine residual index (C.R.I) was introduced as a new indicator of chloramine decay and nitrification occurrence, and is defined based on the ratio of area underneath the HPSEC spectra at two different wavelengths of 230 and 254 nm. The C.R.I index is able to indicate DS sites that experienced nitrification and rapid chloramine loss. This index could be useful for water treatment and distribution system managers to know if nitrification is occurring at a specific location in water distribution systems.Keywords: nitrification, HPSEC, chloramine decay, chloramine residual index
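Because the proposed index is defined as a ratio of areas under the HPSEC spectra at the two wavelengths, it can be computed from chromatogram data with a single numerical integration. The sketch below assumes the 230 nm area in the numerator and the 254 nm area in the denominator; the exact form and the toy chromatograms are assumptions, since the abstract does not give the formula or the data.

```python
# Chloramine residual index (C.R.I) from HPSEC chromatograms: ratio of the
# areas under the 230 nm and 254 nm absorbance signals (form assumed).
import numpy as np

def chloramine_residual_index(amw, a230, a254):
    """amw: apparent molecular weight axis; a230/a254: absorbance signals."""
    area_230 = np.trapz(a230, amw)
    area_254 = np.trapz(a254, amw)
    return area_230 / area_254

# Toy chromatograms over 100-1500 Da; a nitrifying site would show extra
# low-AMW absorbance at 230 nm relative to 254 nm.
amw = np.linspace(100, 1500, 200)
a230 = np.exp(-((amw - 400) / 180) ** 2)
a254 = 0.6 * np.exp(-((amw - 700) / 250) ** 2)
print("C.R.I =", round(chloramine_residual_index(amw, a230, a254), 2))
```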
Procedia PDF Downloads 298
3095 The Structural Behavior of Fiber Reinforced Lightweight Concrete Beams: An Analytical Approach
Authors: Jubee Varghese, Pouria Hafiz
Abstract:
Increased use of lightweight concrete in the construction industry is mainly due to its reduction in the weight of the structural elements, which in turn reduces the cost of production, transportation, and the overall project cost. However, the structural application of these lightweight concrete structures is limited due to its reduced density. Hence, further investigations are in progress to study the effect of fiber inclusion in improving the mechanical properties of lightweight concrete. Incorporating structural steel fibers, in general, enhances the performance of concrete and increases its durability by minimizing its potential to cracking and providing crack arresting mechanism. In this research, Geometric and Materially Non-linear Analysis (GMNA) was conducted for Finite Element Modelling using a software known as ABAQUS, to investigate the structural behavior of lightweight concrete with and without the addition of steel fibers and shear reinforcement. 21 finite element models of beams were created to study the effect of steel fibers based on three main parameters; fiber volume fraction (Vf = 0, 0.5 and 0.75%), shear span to depth ratio (a/d of 2, 3 and 4) and ratio of area of shear stirrups to spacing (As/s of 0.7, 1 and 1.6). The models created were validated with the previous experiment conducted by H.K. Kang et al. in 2011. It was seen that the lightweight fiber reinforcement can replace the use of fiber reinforced normal weight concrete as structural elements. The effect of an increase in steel fiber volume fraction is dominant for beams with higher shear span to depth ratio than for lower ratios. The effect of stirrups in the presence of fibers was very negligible; however; it provided extra confinement to the cracks by reducing the crack propagation and extra shear resistance than when compared to beams with no stirrups.Keywords: ABAQUS, beams, fiber-reinforced concrete, finite element, light weight, shear span-depth ratio, steel fibers, steel-fiber volume fraction
Procedia PDF Downloads 107
3094 Optimization of Traffic Agent Allocation for Minimizing Bus Rapid Transit Cost on Simplified Jakarta Network
Authors: Gloria Patricia Manurung
Abstract:
The Jakarta Bus Rapid Transit (BRT) system, which was established in 2009 to reduce private vehicle usage and ease the rush-hour gridlock throughout the Greater Jakarta area, has failed to achieve its purpose. With the gradual increase in private vehicle ownership and the road space reduced by BRT lane construction, private vehicle users intuitively invade the exclusive BRT lanes, creating local traffic along the BRT network. The cost on invaded BRT lanes becomes the same as on the general road network, making the BRT, which is supposed to be the main public transportation in the city, unreliable. Efforts have been expended to guard critical lanes and prevent the invasion by allocating traffic agents at several intersections, leading to improved congestion levels along the lanes. Given a set number of traffic agents, this study uses an analytical approach to find the best deployment strategy of traffic agents on a simplified Jakarta road network that minimizes the BRT link cost, which is expected to lead to improved BRT system time reliability. A user-equilibrium traffic assignment model is used to reproduce the origin-destination demand flow on the network, and the optimum solution can conventionally be obtained with a brute-force algorithm. This method's main constraint is that the traffic assignment simulation time escalates exponentially with the number of agents and the network size. Our proposed metaheuristic and heuristic algorithms show a linear increase in simulation time and result in a minimized BRT cost approaching the brute-force optimum. Further analysis of the overall network link cost should be performed to see the impact of traffic agent deployment on the network system.
Keywords: traffic assignment, user equilibrium, greedy algorithm, optimization
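The greedy strategy referenced in the keywords can be written down compactly: given a fixed budget of agents and a function that returns the total BRT link cost for a candidate set of guarded intersections (in the study this would come from the user-equilibrium assignment, here it is a placeholder), repeatedly add the intersection that yields the largest cost reduction. The sketch below is illustrative only; the toy cost model is an assumption.

```python
# Greedy deployment of a fixed number of traffic agents to intersections.
# `evaluate_brt_cost` stands in for the user-equilibrium traffic assignment run.
def greedy_agent_allocation(intersections, n_agents, evaluate_brt_cost):
    guarded = set()
    current_cost = evaluate_brt_cost(guarded)
    for _ in range(n_agents):
        best_site, best_cost = None, current_cost
        for site in intersections - guarded:
            cost = evaluate_brt_cost(guarded | {site})   # one assignment run per candidate
            if cost < best_cost:
                best_site, best_cost = site, cost
        if best_site is None:          # no further improvement possible
            break
        guarded.add(best_site)
        current_cost = best_cost
    return guarded, current_cost

# Toy cost model: each guarded intersection removes a fixed share of invasion delay
base_cost = 100.0
savings = {"A": 30.0, "B": 22.0, "C": 5.0, "D": 18.0}
toy_cost = lambda guarded: base_cost - sum(savings[s] for s in guarded)

print(greedy_agent_allocation(set(savings), n_agents=2, evaluate_brt_cost=toy_cost))
```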
Procedia PDF Downloads 229
3093 Circle Work as a Relational Praxis to Facilitate Collaborative Learning within Higher Education: A Decolonial Pedagogical Framework for Teaching and Learning in the Virtual Classroom
Authors: Jennifer Nutton, Gayle Ployer, Ky Scott, Jenny Morgan
Abstract:
Working in a circle within higher education creates a decolonial space of mutual respect, responsibility, and reciprocity that facilitates collaborative learning and deep connections among learners and instructors. This approach is beyond simply facilitating a group in a circle but opens the door to creating a sacred space connecting each member to the land, to the Indigenous peoples who have taken care of the lands since time immemorial, to one another, and to one’s own positionality. These deep connections not only center human knowledges and relationships but also acknowledges responsibilities to land. Working in a circle as a relational pedagogical praxis also disrupts institutional power dynamics by creating a space of collaborative learning and deep connections in the classroom. Inherent within circle work is to facilitate connections not just academically but emotionally, physically, culturally, and spiritually. Recent literature supports the use of online talking circles, finding that it can offer a more relational and experiential learning environment, which is often absent in the virtual world and has been made more evident and necessary since the pandemic. These deeper experiences of learning and connection, rooted in both knowledge and the land, can then be shared with openness and vulnerability with one another, facilitating growth and change. This process of beginning with the land is critical to ensure we have the grounding to obstruct the ongoing realities of colonialism. The authors, who identify as both Indigenous and non-Indigenous, as both educators and learners, reflect on their teaching and learning experiences in circle. They share a relational pedagogical praxis framework that has been successful in educating future social workers, environmental activists, and leaders in social and human services, health, legal and political fields.Keywords: circle work, relational pedagogies, decolonization, distance education
Procedia PDF Downloads 76
3092 Radio Regulation Development and Radio Spectrum Analysis of Earth Station in Motion Service
Authors: Fei Peng, Jun Yuan, Chen Fan, Fan Jiang, Qian Sun, Yudi Liu
Abstract:
Although Earth Station in Motion (ESIM) services are widely used and there is huge market demand around the world, the International Telecommunication Union (ITU) has not yet reached a unified conclusion on the use of ESIM. ESIM belongs to the Mobile Satellite Service (MSS) by virtue of its mobile attributes, while multiple administrations want to operate ESIM under the Fixed Satellite Service (FSS). However, the Radio Regulations (RR) draw a strict distinction between MSS and FSS. This has made the issue highly controversial within the ITU, because such an application would violate the relevant RR Articles, and the conflict brings risks to global deployment. This paper therefore illustrates the development of the rules, regulations, and standards concerning ESIM and the radio spectrum usage of ESIM in different regions of the world. First, the basic rules, standards, and definitions of the ITU Radiocommunication Sector (ITU-R) are introduced. Second, the World Radiocommunication Conference (WRC) agenda items on radio spectrum allocation for ESIM, e.g., in the C, Ku, and Ka bands, are introduced, and the differing views on spectrum allocation are elaborated, especially for 19.7-20.2 GHz and 29.5-30.0 GHz. Then, several ITU-R Recommendations and Reports are analyzed with respect to the specific techniques that enable ESIM to communicate with Geostationary Earth Orbit (GSO) space stations in the FSS without causing interference at levels in excess of that caused by conventional FSS earth stations. The opposing view, namely that ESIM should not be allocated in FSS frequency bands, is also elaborated. Finally, based on future ESIM applications, the trend of ITU-R standards development is forecast. In conclusion, using radio spectrum resources in an equitable, rational, and efficient manner is the basic guideline of the ITU. Although obstructing the revision of the RR is not a good approach when there is large demand for spectrum resources in the satellite industry, the momentum and global demand of the whole industry may still face difficulties because of the unclear treatment of such applications in the revised RR.Keywords: earth station in motion, ITU standards, radio regulations, radio spectrum, satellite communication
Procedia PDF Downloads 288
3091 Integrating Human Rights into Countering Violent Extremism: A Comparative Analysis of Women Without Borders and Hedayah Initiatives
Authors: Portia Muehlbauer
Abstract:
This paper examines the evolving landscape of preventing and countering violent extremism (PCVE) by delving into the growing importance of integrating human rights principles into violence prevention strategies at the local, community level. The study sheds light on the underlying theoretical frameworks of violent extremism and the influence of gender while investigating the intersection between human rights preservation and violent extremism prevention. To gain practical insight, the research focuses on two prominent international non-governmental organizations, Women without Borders (WwB) and Hedayah, and their distinct PCVE initiatives. WwB adopts a gender-sensitive approach, implementing parental education programs that empower mothers in at-risk communities to prevent the spread of violent extremism. In contrast, Hedayah takes an indirect route, employing capacity building programs that enhance the capabilities of educators, social workers, and psychologists in early intervention, rehabilitation, and reintegration efforts. Qualitative data for this comparative analysis were collected through an extensive four-month internship at WwB during the fall of 2020, a three-month internship at Hedayah in the spring of 2021, a semi-structured interview with the executive director of WwB, personal field notes, and a comprehensive discourse analysis of the prevailing literature on human rights considerations in PCVE practices. The study examines the merits and challenges of integrating human rights into PCVE programming through the lens of both organizations. The findings will inform policymakers, practitioners, and researchers on the intricate relationship between human rights protection and effective PCVE strategies.Keywords: preventing and countering violent extremism, human rights, counterterrorism, peacebuilding, capacity building programs, gender studies
Procedia PDF Downloads 62
3090 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow depends on an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern while simultaneously applying Murray's law at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers are assigned according to the diameter-defined Strahler system. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared with clinical data.Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system
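As a concrete illustration of the kind of relations such a reconstruction relies on, the minimal sketch below applies Murray's law at a single bifurcation and estimates a segment pressure drop with the Poiseuille formula. The viscosity, radii, length, and flow rate are assumed values, and the Poiseuille estimate is an assumption introduced here for illustration; the study's iterative flow-pressure correction over the full tree is not reproduced.

```python
# Hypothetical sketch: Murray's law at one bifurcation (r_p^3 = r_d1^3 + r_d2^3)
# and a Poiseuille estimate of the pressure drop along a vessel segment.
# All numbers are illustrative assumptions, not data from the study.
import math

MU_BLOOD = 3.5e-3  # assumed dynamic viscosity of blood [Pa*s]

def daughter_radius(parent_r, other_daughter_r):
    """Solve Murray's law r_p^3 = r_d1^3 + r_d2^3 for the unknown daughter radius."""
    return (parent_r**3 - other_daughter_r**3) ** (1.0 / 3.0)

def poiseuille_dp(flow_rate, length, radius, mu=MU_BLOOD):
    """Pressure drop over a cylindrical segment: dP = 8*mu*L*Q / (pi*r^4)."""
    return 8.0 * mu * length * flow_rate / (math.pi * radius**4)

if __name__ == "__main__":
    r_parent = 1.0e-3   # 1 mm parent vessel radius [m]
    r_d1 = 0.8e-3       # one known daughter radius [m]
    r_d2 = daughter_radius(r_parent, r_d1)
    q = 1.0e-7          # assumed segment flow rate [m^3/s]
    print(f"second daughter radius: {r_d2 * 1e3:.3f} mm")
    print(f"pressure drop over 10 mm of parent: {poiseuille_dp(q, 0.01, r_parent):.1f} Pa")
```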
Procedia PDF Downloads 272
3089 Discovering Social Entrepreneurship: A Qualitative Study on Stimulants and Obstacles for Social Entrepreneurs in the Hague
Authors: Loes Nijskens
Abstract:
The city of The Hague is coping with several social issues: high unemployment rates, segregation, and environmental pollution. The number of social enterprises in The Hague that want to tackle these issues is increasing, but no clear picture exists of the stimulants and obstacles social entrepreneurs encounter. In this qualitative study, 20 starting and established social entrepreneurs, investors, and stimulators of social entrepreneurship were interviewed. The findings indicate that the majority of entrepreneurs situated in The Hague focus on creating jobs (the so-called social nurturers) and diminishing food waste. Moreover, the study found smaller groups of social connectors (who focus on stimulating social cohesion in the city) and social traders (who create a market for products from developing countries). For the social nurturers, working together with the local government to find people with a distance to the labour market is a challenge. The entrepreneurs are missing a governance approach within the local government, in which space is provided to develop suitable legislation and projects in cooperation with several stakeholders in order to diminish social problems. All entrepreneurs in the sample face(d) the challenge of defining a clear purpose for their business at the start. Starting social entrepreneurs tend to be idealistic without having defined a business model, and without a defined business model it is difficult to find proper funding. The more advanced enterprises cope with the challenge of measuring social impact: the larger they grow, the more they have to ‘defend’ themselves towards the local government and their customers as being mainly social. Hence, even the more experienced social nurturers still find it difficult to work together with the local government, and they tend to settle their business in other municipalities, where they find more effective public-private partnerships. All this said, the ecosystem for social enterprises in The Hague is on the rise. To stimulate the number and growth of social enterprises, the cooperation between entrepreneurs and local government, the development of social business models, and the measurement of impact need more attention.Keywords: obstacles, social enterprises, stimulants, the Hague
Procedia PDF Downloads 218
3088 Dairy Value Chain: Assessing the Inter Linkage of Dairy Farm and Small-Scale Dairy Processing in Tigray: Case Study of Mekelle City
Authors: Weldeabrha Kiros Kidanemaryam, DepaTesfay Kelali Gidey, Yikaalo Welu Kidanemariam
Abstract:
Dairy services are considered sources of income, employment, nutrition, and health for smallholder rural and urban farmers. The main objective of this study is to assess the interlinkage between dairy farms and small-scale dairy processing in Mekelle, Tigray. To achieve this objective, a descriptive research approach was employed in which data were collected from 45 dairy farmers and 40 small-scale processors and analyzed by calculating mean values and percentages. Findings show that the dairy business in the study area is characterized by a shortage of feed and water for the farms. The dairy farms are dominated by hybrid breeds, followed by the so-called ‘begait’. Although the farms have access to medication and vaccination for the cattle, they fall short on hygiene practices, reliable shade for the cattle, and separate space for the calves. The value chain at the milk production stage is characterized by a low production rate, the selling of raw milk without adding value, and very meager traditional processing practices. Furthermore, small-scale milk processors collect milk from farmers and produce cheese, butter, ghee, and sour milk; they do not engage in modern milk processing such as pasteurized milk, yogurt, and table butter, and most rely on traditional production systems. Additionally, the milk consumption and marketing part of the chain is dominated by the informal market (channel), where market problems, a lack of skill and technology, a shortage of loans, and weak policy support are the main challenges faced. Based on the findings, recommendations and future research areas are put forward.Keywords: value-chain, dairy, milk production, milk processing
Procedia PDF Downloads 32