Search results for: Jean Hugé
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1532

242 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets

Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar

Abstract:

Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but heterogeneous heat and mass transfer makes the problem complex and has prevented a thorough understanding. On record, severe incidents have occurred, amounting to irreplaceable loss of human life, instruments and facilities, while huge amounts of money are invested every year. The coupled effect of an external heat source and an external heat sink is an aspect yet to be articulated in combustion. A better understanding of this coupled phenomenon will support higher safety standards, more efficient missions and reduced hazard risks through better design, validation and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, both of which are essential for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets and, compounded by poor combustion efficiency, it motivates research efforts to evolve superior rockets. The problem has real engineering, scientific and practical significance for systems and applications. One potential application is Solid Rocket Motors (S.R.M.). The study may help in: (i) understanding the effect on the efficiency of core engines if the primary boosters are considered as a source, (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) indicating how preheating of a successive stage, with the previous stage acting as a source, may affect the mission. The present work tracks the resultant temperature and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer. The study will deepen the understanding of controlled inter-energy conversions and of the coupled effect of external source(s) and sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objective is to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of the key controlling parameters. Results so far indicate that a singularity exists in the coupled effect. The relative dominance of the external heat sink and heat source determines the relative rocket flight in Solid Rocket Motors (S.R.M.).

Keywords: coupled effect, heat transfer, sink, solid rocket motors, source

Procedia PDF Downloads 197
241 Sovereign Debt Restructuring: A Study of the Inadequacies of the Contractual Approach

Authors: Salamah Ansari

Abstract:

In the absence of a comprehensive international legal regime for sovereign debt restructuring, the majority of complications arising from sovereign debt restructuring are frequently left to uncertain market forces. The resort to market forces for sovereign debt restructuring has led to a phenomenal increase in litigation targeting the assets of defaulting sovereign nations across jurisdictions, with the first major wave of lawsuits against sovereigns occurring in the 1980s during the Latin American crisis. Recent experience substantiates that the majority of obstacles faced during the sovereign debt restructuring process are caused by inefficient creditor coordination and collective action problems. Collective action problems manifest as the grab race, the rush to exit, holdouts, the free rider problem and the rush to the courthouse. For a defaulting nation to successfully restructure its debt, all the creditors involved must accept some reduction in the value of their claims. As a single holdout creditor has the potential to undermine the restructuring process, holdout creditors are snowballing given the increasing probability of earning high returns through litigation. This necessitates a mechanism to avoid holdout litigation and reinforce collective action on the part of creditors. This can be done either through statutory reform or through a market-based contractual approach. In the absence of an international sovereign bankruptcy regime, the impetus is mostly on the inclusion of collective action clauses in debt contracts. The preference for contractual mechanisms vis-à-vis a statutory approach can be explained by numerous reasons, but that is only part of the puzzle in trying to understand the economics of the underlying system. The contractual approach proposals advocate the inclusion of certain clauses in the debt contract for an orderly debt restructuring. These include majority voting clauses, sharing clauses, non-acceleration clauses, initiation clauses, aggregation clauses, temporary stays on litigation, priority financing clauses, and complete revelation of relevant information. However, a voluntary market-based contractual approach to debt workouts has its own complexities. It is a herculean task to enshrine clauses in debt contracts that are detailed enough to create an orderly debt restructuring mechanism while remaining attractive enough for creditors. Introduction of collective action clauses into debt contracts can reduce the barriers to efficient debt restructuring and also has the potential to improve the terms on which sovereigns are able to borrow. However, it should be borne in mind that such clauses are not a panacea for the huge institutional inadequacy that persists and may lead to worse restructuring outcomes.

Keywords: sovereign debt restructuring, collective action clauses, hold out creditors, litigations

Procedia PDF Downloads 132
240 The Neoliberal Social-Economic Development and Values in the Baltic States

Authors: Daiva Skuciene

Abstract:

The Baltic States turned to the free market and capitalism after independence. A new socioeconomic system took shape, along with democracy and priorities concerning the welfare of citizens. Research shows that the Baltic States chose a neoliberal path of development. Related to this path, a few questions arise: how do people evaluate the results of such policy and socioeconomic development? What are their priorities? And what are the values of the Baltic societies that support neoliberal policy? The purpose of this research is to analyze the socioeconomic context and the priorities and values of the Baltic societies related to the neoliberal regime. The main objectives are: firstly, to analyze the neoliberal socioeconomic features and results; secondly, to analyze people's opinions and priorities regarding the results of neoliberal development; thirdly, to analyze the values of the Baltic societies related to neoliberal policy. To meet this purpose and these objectives, comparative analyses among European countries are used. The neoliberal regime was defined through two indicators: taxes on capital income and expenditure on social protection. The socioeconomic outcomes of the neoliberal welfare regime are defined through Gini inequality and the at-risk-of-poverty rate. For this analysis, Eurostat data for 2002-2013 were used. For the analysis of opinions about inequality, preferences regarding the society people want to live in, and preferences for the distribution between capital and wages in enterprises, Eurobarometer data for 2010-2014 and data from a representative survey conducted in the Baltic States in 2016 were used. The justice variable was selected as a variable reflecting the evaluation of the socioeconomic context and was analyzed using Eurobarometer data for 2006-2015. For the analysis of values, solidarity, equality and individual responsibility were selected. Solidarity and equality were analyzed using Eurobarometer data for 2006-2015. The value of individual responsibility was examined through opinions about the reasons for inequality and poverty; the 2016 survey of the population in the Baltic States and Eurobarometer data were used for this aim. The data are ranked in descending order to show the position of opinion in the Baltic States among European countries. The dynamics of the indicators are also provided to examine the stability of values. The main findings of the research are that people in the Baltics are dissatisfied with the results of the neoliberal socioeconomic development and have priorities for equality and justice, but they have internalized the main neoliberal narrative of individual responsibility. The impact of the socioeconomic context on values is huge, resulting in a change in otherwise quite stable opinions and values during the period of the financial crisis.

Keywords: neoliberal, inequality and poverty, solidarity, individual responsibility

Procedia PDF Downloads 231
239 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, providing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and better economic returns for future wells in unconventional oil reserves.
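
As a rough sketch of the clustering and dimensionality-reduction steps described above, the snippet below applies K-means and principal component analysis to a synthetic per-well table; the column names and values are illustrative assumptions, not the study's actual dataset or workflow.

```python
# Minimal sketch, not the authors' code: cluster wells and inspect dominant
# variation with PCA on a synthetic stand-in for the per-well database.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
wells = pd.DataFrame({                      # illustrative columns only
    "lateral_length_m": rng.normal(1800, 300, 200),
    "stage_count":      rng.integers(15, 45, 200),
    "proppant_tonnes":  rng.normal(2500, 600, 200),
    "porosity":         rng.normal(0.08, 0.02, 200),
    "cum_oil_12m_bbl":  rng.normal(60000, 15000, 200),
})

X = StandardScaler().fit_transform(wells)   # put attributes on a common scale

# Partition wells into groups with similar completion/geology profiles
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Project onto two principal components to expose the strongest patterns
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("cluster sizes:", np.bincount(labels))
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
```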

Keywords: big data, artificial intelligence, enhanced oil recovery, unconventional oil reserves

Procedia PDF Downloads 262
238 Suspended Sediment Concentration and Water Quality Monitoring Along Aswan High Dam Reservoir Using Remote Sensing

Authors: M. Aboalazayem, Essam A. Gouda, Ahmed M. Moussa, Amr E. Flifl

Abstract:

Field data collection is considered one of the most difficult tasks because of the difficulty of accessing large zones such as large lakes, and it is well known that obtaining field data is very expensive. Remote monitoring of lake water quality (WQ) therefore provides an economically feasible approach compared to field data collection. Researchers have shown that lake WQ can be properly monitored via remote sensing (RS) analyses. Using satellite images as a method of WQ detection provides a realistic technique to measure quality parameters across huge areas. Landsat (LS) data provide free access to frequently acquired, repeated satellite imagery. This enables researchers to undertake large-scale temporal comparisons of parameters related to lake WQ. Satellite measurements have been extensively utilized to develop algorithms for predicting critical water quality parameters (WQPs). The goal of this paper is to use RS to derive WQ indicators in the Aswan High Dam Reservoir (AHDR), which is considered Egypt's primary and strategic reservoir of freshwater. This study focuses on using Landsat 8 (L-8) band surface reflectance (SR) observations to predict water-quality characteristics, limited here to turbidity (TUR), total suspended solids (TSS) and chlorophyll-a (Chl-a). ArcGIS Pro is used to retrieve L-8 SR data for the study region. Multiple linear regression analysis was used to derive new correlations between optical water-quality indicators observed in April and atmospherically corrected L-8 SR values of various bands, band ratios and/or their combinations. Field measurements taken in May were used to validate the WQPs obtained from SR data of the L-8 Operational Land Imager (OLI) satellite. The findings demonstrate a strong correlation between the WQ indicators and L-8 SR. For TUR, the best validation correlation was obtained with the OLI blue, green and red SR bands, with a coefficient of determination (R²) of 0.96 and a root mean square error (RMSE) of 3.1 NTU. For TSS, two equations were strongly correlated and verified with band ratios and combinations; the logarithm of the ratio of blue to green SR was determined to be the best performing model, with R² and RMSE equal to 0.9861 and 1.84 mg/l, respectively. For Chl-a, eight methods were presented for calculating its value within the study area; a combination of blue, red, shortwave infrared 1 (SWIR1) and panchromatic SR yielded the best validation results, with R² and RMSE equal to 0.98 and 1.4 mg/l, respectively.
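
To make the regression step above concrete, the sketch below fits a simple model of TSS against the log of the blue/green reflectance ratio, the functional form reported as best performing; the reflectance and TSS values are synthetic placeholders, not data from this study.

```python
# Illustrative sketch (not the authors' workflow): regress field-measured TSS
# on the log of the blue/green surface-reflectance ratio.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

blue  = np.array([0.031, 0.042, 0.055, 0.047, 0.038, 0.060])  # OLI band 2 SR (synthetic)
green = np.array([0.045, 0.051, 0.060, 0.058, 0.049, 0.064])  # OLI band 3 SR (synthetic)
tss   = np.array([6.1, 9.8, 14.2, 10.5, 7.9, 16.3])           # field TSS, mg/l (synthetic)

X = np.log(blue / green).reshape(-1, 1)     # predictor: log of the blue/green ratio
model = LinearRegression().fit(X, tss)

pred = model.predict(X)
print("R2  :", round(r2_score(tss, pred), 3))
print("RMSE:", round(float(np.sqrt(mean_squared_error(tss, pred))), 2), "mg/l")
```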

Keywords: remote sensing, Landsat 8, Lake Nasser, water quality

Procedia PDF Downloads 75
237 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is evidently a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology and construction as they carried a product from concept through to production specification. Questionnaires, focus groups and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools, before using it for their own product development efforts. A complementary website enhances the physical toolkit; it provides more examples of the tools being used, as well as deeper discussions of each of the topics, allowing teams to adapt the process to their skills, preferences and product type. Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for the engineering principles, like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants.

Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer

Procedia PDF Downloads 93
235 Overcoming Obstacles in UHT High-Protein Whey Beverages by Microparticulation Process: Scientific and Technological Aspects

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh, Seyed Jalal Razavi Zahedkolaei

Abstract:

Herein, a shelf-stable (no refrigeration required), UHT-processed, aseptically packaged whey protein drink was formulated using a new strategy in the microparticulation process. By applying thermal and two-dimensional mechanical treatments simultaneously, a modified protein (MWPC-80) was produced. The physical, thermal and thermodynamic properties of MWPC-80 were then assessed using particle size analysis, dynamic temperature sweep (DTS) and differential scanning calorimetry (DSC) tests. Finally, a new RTD beverage was formulated using MWPC-80, and its shelf stability was assessed for three months at ambient temperature (25 °C). A non-isothermal dynamic temperature sweep was performed, and the results were analyzed by a combination of the classic rate equation, the Arrhenius equation and the time-temperature relationship. Generally, the results showed that the temperature dependency of the modified sample was significantly (p < 0.05) lower than that of the control containing WPC-80. The changes in elastic modulus of the MWPC did not show any critical point at any of the processing stages, whereas the control sample showed two critical points, during the heating (82.5 °C) and cooling (71.10 °C) stages. The thermal properties of the samples (WPC-80 and MWPC-80) were assessed using DSC at a heating rate of 4 °C/min over a 20-90 °C range. The results did not show any thermal peak in the MWPC DSC curve, which suggests high thermal resistance. On the other hand, the WPC-80 sample showed a significant thermal peak with thermodynamic properties of ∆G: 942.52 kJ/mol, ∆H: 857.04 kJ/mol and ∆S: -1.22 kJ/(mol·K). Dynamic light scattering was performed, and the results showed average particle sizes of 0.7 µm and 15 nm for the MWPC-80 and WPC-80 samples, respectively. Moreover, the particle size distributions of MWPC-80 and WPC-80 were Gaussian-Lorentzian and normal, respectively. After verification of the microparticulation process by the DTS, PSD and DSC analyses, a 10% whey protein beverage (10% w/w MWPC-80, 0.6% w/w vanilla flavoring agent, 0.1% masking flavor, 0.05% stevia natural sweetener and 0.25% citrate buffer) was formulated, and UHT treatment was performed at 137 °C for 4 s. The shelf-life study did not show any gelation or precipitation of the MWPC-80-containing beverage during three months of storage at ambient temperature, whereas the WPC-80-containing beverage showed significant precipitation and gelation after thermal processing, even at a 3% w/w concentration. Consumer awareness of the nutritional advantages of whey protein has increased the demand for using this protein in different food systems, especially RTD beverages. These results could make a huge difference in this industry.

Keywords: high protein whey beverage, microparticulation, two-dimensional mechanical treatments, thermodynamic properties

Procedia PDF Downloads 45
235 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best known classical algorithm of the time for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundation for exploring QAOA variants. This, alongside other famous algorithms like Grover's or Shor's, highlights to the world the potential that quantum computing holds. It also presents the reality of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using the COBYLA optimizer, which is a linear approximation based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or the quality of the solution, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
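
As an illustration of the idea, and assuming the expectation value of the p-layer circuit is supplied by some external simulator or hardware call, a simple (mu + lambda)-style evolutionary loop over the 2p QAOA angles could look like the sketch below; qaoa_expectation is a placeholder surrogate so the snippet runs, not the paper's cost function.

```python
# Minimal sketch (not the paper's implementation): an evolutionary strategy,
# instead of a gradient-based optimizer such as COBYLA, searches the QAOA angle space.
import numpy as np

rng = np.random.default_rng(0)
p = 2                                       # QAOA depth -> 2p angles to optimize

def qaoa_expectation(angles):
    """Placeholder surrogate; in practice this would execute the p-layer QAOA
    circuit and return the expected cut value for the given angles."""
    gammas, betas = angles[:p], angles[p:]
    return float(np.sum(np.sin(2 * betas) * np.sin(gammas)))

def evolve(pop_size=20, generations=50, sigma=0.3):
    """(mu + lambda) evolution: keep the best half, refill with mutated copies."""
    pop = rng.uniform(0.0, np.pi, size=(pop_size, 2 * p))
    for _ in range(generations):
        fitness = np.array([qaoa_expectation(ind) for ind in pop])  # parallelizable
        parents = pop[np.argsort(fitness)[::-1][: pop_size // 2]]   # maximize the cut
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        pop = np.vstack([parents, children])
    best = max(pop, key=qaoa_expectation)
    return best, qaoa_expectation(best)

angles, value = evolve()
print("best angles:", np.round(angles, 3), "surrogate cut value:", round(value, 3))
```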

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 33
234 Thermal Comfort and Outdoor Urban Spaces in the Hot Dry City of Damascus, Syria

Authors: Lujain Khraiba

Abstract:

Recently, there has been broad recognition that micro-climate conditions contribute to the quality of life in outdoor urban spaces, from both economic and social viewpoints. The consideration of urban micro-climate and outdoor thermal comfort in urban design and planning processes has become one of the important aspects of current studies in this field. However, these aspects are so far not considered in urban planning regulations in practice, and these regulations are often poorly adapted to the local climate and culture. Therefore, there is a huge need to adapt existing planning regulations to the local climate, especially in cities with extremely hot weather conditions. The overall aim of this study is to point out the complexity of the relationship between urban planning regulations, urban design, micro-climate and outdoor thermal comfort in the hot, dry city of Damascus, Syria. The main aim is to investigate the temporal and spatial effects of micro-climate on urban surface temperatures and outdoor thermal comfort in the different urban design patterns that result from urban planning regulations during extreme summer conditions. In addition, studying different alternatives for mitigating surface temperature and thermal stress is also part of the aim. The novelty of this study is to highlight the combined effect of urban surface materials and vegetation in improving the thermal environment. The study is based on micro-climate simulations using ENVI-met 3.1. The input data are calibrated using micro-climate fieldwork conducted in different urban zones of Damascus. Different urban forms and geometries, including the old and the modern parts of Damascus, are thermally evaluated. The Physiological Equivalent Temperature (PET) index is used as an indicator for the outdoor thermal comfort analysis. The study highlights the shortcomings of existing planning regulations in terms of solar protection, especially at street level. The results show that surface temperatures in Old Damascus are lower than in the modern part. This is basically due to the difference in urban geometries, which in Old Damascus prevents solar radiation from reaching the ground and heating up the surface, whereas in modern Damascus the streets are prescribed as wide spaces with high values of the Sky View Factor (SVF of about 0.7). Moreover, the canyons in the old part are paved with cobblestones, whereas asphalt is the main material used in the streets of modern Damascus. Furthermore, Old Damascus is less thermally stressful than the modern part (the difference in the PET index is about 10 °C). The thermal situation is enhanced when different vegetation scenarios are considered (an improvement of 13 °C in surface temperature is recorded in modern Damascus). The study recommends integrating a detailed landscape code at street level into the urban regulations of Damascus in order to achieve urban development in harmony with micro-climate and comfort. Such a strategy will be very useful for decreasing urban warming in the city.

Keywords: micro-climate, outdoor thermal comfort, urban planning regulations, urban spaces

Procedia PDF Downloads 459
233 Loss of Green Space in Urban Metropolitan and Its Alarming Impacts on Teenagers' Life: A Case Study on Dhaka

Authors: Nuzhat Sharmin

Abstract:

Human beings are an integral part of nature and are responsible for maintaining ecological balance in both rural and urban areas. Unfortunately, we are not doing this job with a holistic approach. The rapid growth of urbanization is making human life more isolated from greenery, and modern urban living nowadays involves sensory deprivation and overloaded stress. Many cities and towns of the world are expanding unabated in the name of urbanization and industrialization and are in fact becoming concrete jungles. Dhaka is one example of such cities, where open and green spaces are decreasing to accommodate the overflow of population. This review paper has been prepared based on interviews with 30 teenagers, both male and female, in Dhaka city; the questionnaire contained 12 open-ended questions. For the literature review, information was gathered from scholarly papers published in various peer-reviewed journals, from newspapers, and from colleagues working around the world. Ideally, about 25% of an urban area should be kept open or covered by parks, fields and/or plants and vegetation. Currently, however, Dhaka has only about 10-12% open space, and even this is being filled up rapidly; Old Dhaka has only about 5% open space, while new Dhaka has about 12%. Dhaka is now one of the most populated cities in the world, and in accommodating this huge influx of people it is continuously losing its open space. As a result, children and teenagers are losing their interest in playing games and making friends; instead, they are mostly occupied by television, gadgets and social media. The interviews revealed that only 28% of the teenagers play regularly, and the majority of them have to play on the street or rooftop for lack of open space. On average, they spend 8.3 hours per day with electronic devices. 64% of them have chronic diseases and often visit doctors, and, most shockingly, 35% of them reported having no friends. Green space offers relief from stress: areas of natural environment in towns and cities are theoretically seen as providing a setting for recovery and recuperation from the anxiety and strains of the urban environment. Good-quality green spaces encourage people to walk, run, cycle and play. Green spaces improve air quality and reduce noise, while trees and shrubbery help to filter out dust and pollutants. Relaxation, contemplation and passive recreation are essential to stress management. All city governments that are losing their open spaces should immediately pay attention to this issue for the benefit of urban people. All kinds of development must be sustainable for both human beings and nature.

Keywords: greenery, health, human, urban

Procedia PDF Downloads 141
232 Introducing Information and Communication Technologies in Prison: A Proposal in Favor of Social Reintegration

Authors: Carmen Rocio Fernandez Diaz

Abstract:

This paper focuses on the relevance of information and communication technologies (hereinafter referred to as 'ICTs') as an essential part of the day-to-day life of all societies nowadays, as they offer the setting in which an immense number of behaviors that previously took place in the physical world are now performed. In this context, areas of reality that remain outside the so-called 'information society' are hardly imaginable. Nevertheless, it is possible to identify one sphere that continues to lag behind this reality: the penitentiary field, as far as inmates' rights are concerned, since security aspects in prison have already been improved by new technologies. Introducing ICTs in prisons is still a matter that meets with great resistance. The study of comparative penitentiary systems worldwide shows that most of them use ICTs only for educational aspects of life in prison and that communications with the outside world are generally based on traditional means. These are only two examples of the huge range of activities in which ICTs can produce positive results within the prison. Those positive results have to do with the social reintegration of persons serving a prison sentence. Deprivation of liberty entails contact with the prison subculture and its harmful effects, causing, in cases of long-term sentences, the so-called phenomenon of 'prisonization'. This negative effect of imprisonment could be reduced if ICTs were used inside prisons in the different areas where they can have an impact and which are addressed in this research: (1) access to information and culture, (2) basic and advanced training, (3) employment, (4) communication with the outside world, (5) treatment, and (6) leisure and entertainment. The content of all of these areas could be improved if ICTs were introduced in prison, as shown by the experience of some prisons in Belgium, the United Kingdom and the United States. However, resistance to introducing ICTs in prisons stems from the fact that they could also carry risks concerning security and the commission of new offences. Considering these risks, the scope of this paper is to offer a realistic proposal for introducing ICTs in prison while avoiding those risks. This would be done to take advantage of the possibilities that ICTs offer to all inmates in order to start building a life outside that is far from delinquency, and mainly to those inmates who are close to release. Reforming prisons in this sense is considered by the author an opportunity to offer inmates a progressive resettlement into life in freedom, with a higher likelihood of obeying the law and escaping recidivism. The value that new technologies would add to the education, employment, communications or treatment of a person deprived of liberty constitutes a way of humanizing prisons in the 21st century.

Keywords: deprivation of freedom, information and communication technologies, imprisonment, social reintegration

Procedia PDF Downloads 131
231 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon

Authors: Jeffrey A. Amelse

Abstract:

Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ sequestration and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting renewable energy, to highlight their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will just cost money, scale-up is a huge challenge, and it will not be a complete solution; CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from the continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂, but trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering the biomass carbon. Sequestering tree leaves is proposed as a solution. Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves could get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, so that they discourage rather than encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂; the latter can take hundreds to thousands of years. The first step in anaerobic decomposition is hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping landfills dry and exploiting known inhibitors of anaerobic bacteria.

Keywords: carbon dioxide, net zero, sequestration, biomass, leaves

Procedia PDF Downloads 95
230 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition

Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed

Abstract:

The rapid development experienced by India requires a huge amount of energy, and actual supply capacity additions have been consistently lower than the targets set by the government. According to the World Bank, 40% of residences are without electricity. In the 12th Five Year Plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW solar and 2.1 GW small hydro, with the rest made up by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy-security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. According to IEC 61400-1, India belongs to Class IV wind conditions; it is therefore not possible to set up large-scale wind turbines everywhere. The best choice is a small-scale wind turbine at a lower hub height that still achieves good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine is designed. Various airfoil data are reviewed for the selection of the airfoil in the blade profile; an airfoil suited to low wind conditions, i.e., low Reynolds number, is selected based on the lift and drag coefficients and the angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory is implemented. The performance of the blade is estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique. Rotor performance is estimated for the designed blade specifically for low wind conditions, and the power production of the rotor is determined at different wind speeds for a particular blade pitch angle. At a pitch of 15° the rotor gives a good cut-in speed of 2 m/s, and at a wind velocity of 5 m/s the power produced is around 350 W. A blade tip speed ratio of 6.5 is considered, for which the coefficient of performance of the rotor is calculated as 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 are calculated and checked against the partial safety factor of the wind turbine blade.
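
To show the iterative step at the core of BEM, the sketch below converges the axial and angular induction factors for a single blade element; the geometry and the fixed lift and drag coefficients are placeholder values, not the rotor designed in this work, and tip-loss and high-induction corrections are omitted.

```python
# Illustrative BEM iteration (not the authors' code) for one blade element.
import math

def bem_element(r, R, B, chord, tsr, cl=1.0, cd=0.02, tol=1e-6):
    """Converge the axial (a) and angular (a') induction factors at radius r."""
    lam_r = tsr * r / R                        # local speed ratio
    sigma = B * chord / (2 * math.pi * r)      # local solidity
    a, a_p = 0.3, 0.0
    for _ in range(200):
        phi = math.atan2(1 - a, (1 + a_p) * lam_r)       # inflow angle
        cn = cl * math.cos(phi) + cd * math.sin(phi)     # normal force coefficient
        ct = cl * math.sin(phi) - cd * math.cos(phi)     # tangential force coefficient
        a_new = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * cn) + 1)
        ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1)
        if abs(a_new - a) < tol and abs(ap_new - a_p) < tol:
            break
        a, a_p = a_new, ap_new
    return a_new, ap_new

a, a_p = bem_element(r=0.8, R=1.5, B=3, chord=0.12, tsr=6.5)  # placeholder geometry
print(f"a = {a:.3f}, a' = {a_p:.3f}")
```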

Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil

Procedia PDF Downloads 316
229 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems

Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue

Abstract:

Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation; therefore, new approaches are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone are sufficient to accomplish precoding, MIMO at mmWave is different because of digital precoding limitations: fully digital precoding requires a large number of radio frequency (RF) chains, together with the associated signal mixers and analog-to-digital converters. As RF chains are costly and power-hungry, another alternative is needed. Although the hybrid precoding architecture, based on a combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders is still an open problem. According to the mapping strategies from RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. As a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using a smaller number of phase shifters, at the cost of some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used to compare the partially-connected and fully-connected hybrid precoding structures in simulation.
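
For readers unfamiliar with the thresholding step, the sketch below runs plain iterative hard thresholding on a synthetic sparse least-squares problem; it illustrates the principle only and is not the proposed precoder design algorithm, which embeds the thresholding inside an alternating minimization over the analog and digital precoders.

```python
# Generic iterative hard thresholding (IHT) sketch on a synthetic problem:
# minimize ||y - A x||_2 subject to x having at most s non-zero entries.
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 64, 128, 6                        # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

step = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative gradient step size
x = np.zeros(n)
for _ in range(500):
    x = hard_threshold(x + step * A.T @ (y - A @ x), s)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```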

Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure

Procedia PDF Downloads 295
228 Strategies for Improving and Sustaining Quality in Higher Education

Authors: Anshu Radha Aggarwal

Abstract:

Higher Education (HE) in India has experienced a series of remarkable changes over the last fifteen years, as successive governments have sought to make the sector more efficient and more accountable for the investment of public funds. Rapid expansion in student numbers and pressure to widen participation amongst non-traditional students are key challenges facing HE. Learning outcomes can act as a benchmark for assuring quality and efficiency in HE, and they also enable universities to describe courses in an unambiguous way so as to demystify (and open up) education to a wider audience. This paper examines how learning outcomes are used in HE and evaluates the implications for curriculum design and student learning. There has been huge expansion in the field of higher education, both technical and non-technical, in India during the last two decades, and this trend is continuing; it is expected that about another 400 colleges and 300 universities will be created by the end of the 13th Plan period. This has led to many concerns about the quality of education and training of our students, and many studies have brought out the issues ailing our curricula, delivery, monitoring and assessment. The Government of India (via MHRD, UGC, NBA and others) has initiated several steps to improve the quality of higher education and training, such as the National Skills Qualification Framework and making accreditation of institutions mandatory in order to receive government grants. Moreover, Outcome-Based Education and Training (OBET) has been mandated and encouraged in teaching and learning institutions; MHRD, UGC and NBA have made accreditation of schools, colleges and universities mandatory with effect from January 2014. The OBET approach is learner-centric, whereas the traditional approach has been teacher-centric. OBET is a process that involves re-orienting and restructuring the curriculum, its implementation, the assessment and measurement of educational goals, and the achievement of higher-order learning, rather than merely clearing or passing university examinations. OBET aims to bring about the desired changes within students by increasing knowledge, developing skills, influencing attitudes and creating a socially connected mindset. This approach has been adopted by several leading universities and institutions in advanced countries around the world. The objectives of this paper are to highlight the issues concerning quality in higher education and quality frameworks, to deliberate on the various education and training models, to explain outcome-based education and assessment processes, to provide an understanding of the NAAC and outcome-based accreditation criteria and processes, and to share best-practice outcome-based accreditation systems and processes.

Keywords: learning outcomes, curriculum development, pedagogy, outcome based education

Procedia PDF Downloads 492
227 A Method and System for Container Inventory Management

Authors: Lalith Edirisinghe

Abstract:

Due to the variability in global trading patterns, some ports in the world experience a shortage of shipping containers while other ports hold excess container stocks. According to this study, carriers operate and manage their container inventories independently, leading to enormous container repositioning costs. In contrast, the researcher suggests that costs can be minimized if carriers exchange containers among themselves. In other words, rather than repositioning excess containers, a carrier could offer them to another carrier in the same port that has a shortage, and vice versa. However, this is easier said than done, because global container management involves huge complexity, with many operational parameters such as multiple container types and sizes, the varying transit times of different carriers, etc., and the exchange may take place in various ports globally. Therefore, the exchange should be facilitated through a comprehensive automated computer system that can consider all the parameters affecting the possibility of exchanging containers. Accordingly, the research used mixed methods, combining qualitative and quantitative approaches. Data analysis was conducted using SPSS tools, and a prototype was developed as the output of the research. The proposed mathematical solution proactively scans the container size, type and volume of every member carrier in each port and maps how deficit and excess quantities could be shared among them, setting off the imbalance of empty container repositioning at the ports of their interest. The approach includes obtaining and processing container inventory information from multiple parties in real time and assessing the container data associated with each party for each port at a given time. Using the container data, container inventories for each party at each port over a defined time are forecasted. A first party having a surplus (offeror) of empty containers at a first port and a deficit (offeree) at a second port at a given time is identified; a second party having the opposite positions at the same time, a deficit at the first port and a surplus at the second, is also identified. The first and second parties are then offered a container exchange opportunity, enabling the first party to supply surplus empty containers to the second party at the first port based on the first container characteristics, and the second party to supply surplus empty containers to the first party at the second port based on the second container characteristics. After the offeree obtains the containers, they will be shipped to a port determined by the exporters. To ensure the sustainability of this method, the system should provide equal benefits to both the offeror and the offeree; accordingly, it considers not only the number of containers exchanged but also the duration for which the offeree may hold them in its custody. The method reduces container repositioning costs by utilizing mathematical modeling, algorithms, big data, machine learning and artificial intelligence, and may reduce the container repositioning cost by twenty percent.
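
A highly simplified sketch of the set-off step is shown below: for each port and container type, carriers with a forecast surplus of empties are matched greedily against carriers with a forecast deficit. The carrier names, ports and quantities are invented for illustration; the actual system would also weigh transit times, holding durations and commercial terms.

```python
# Simplified sketch (not the proposed system) of matching surplus empties
# (offerors) to deficits (offerees) per port and container type.

# forecast net position: positive = surplus empties, negative = deficit
forecast = {
    ("Colombo", "40HC"):   {"CarrierA": +120, "CarrierB": -80, "CarrierC": -30},
    ("Singapore", "20GP"): {"CarrierA": -60,  "CarrierB": +90},
}

def propose_exchanges(forecast):
    """Greedily pair offerors with offerees for each (port, container type)."""
    proposals = []
    for (port, ctype), positions in forecast.items():
        offerors = sorted((c, q) for c, q in positions.items() if q > 0)
        offerees = sorted((c, -q) for c, q in positions.items() if q < 0)
        i = j = 0
        while i < len(offerors) and j < len(offerees):
            giver, avail = offerors[i]
            taker, need = offerees[j]
            qty = min(avail, need)
            proposals.append((port, ctype, giver, taker, qty))   # giver -> taker
            offerors[i] = (giver, avail - qty)
            offerees[j] = (taker, need - qty)
            if offerors[i][1] == 0:
                i += 1
            if offerees[j][1] == 0:
                j += 1
    return proposals

for proposal in propose_exchanges(forecast):
    print(proposal)
```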

Keywords: container inventory, benefit of exchange, reposition, imbalance, shipping, carriers, offeree, offeror

Procedia PDF Downloads 39
226 MANIFEST-2, a Global, Phase 3, Randomized, Double-Blind, Active-Control Study of Pelabresib (CPI-0610) and Ruxolitinib vs. Placebo and Ruxolitinib in JAK Inhibitor-Naïve Myelofibrosis Patients

Authors: Claire Harrison, Raajit K. Rampal, Vikas Gupta, Srdan Verstovsek, Moshe Talpaz, Jean-Jacques Kiladjian, Ruben Mesa, Andrew Kuykendall, Alessandro Vannucchi, Francesca Palandri, Sebastian Grosicki, Timothy Devos, Eric Jourdan, Marielle J. Wondergem, Haifa Kathrin Al-Ali, Veronika Buxhofer-Ausch, Alberto Alvarez-Larrán, Sanjay Akhani, Rafael Muñoz-Carerras, Yury Sheykin, Gozde Colak, Morgan Harris, John Mascarenhas

Abstract:

Myelofibrosis (MF) is characterized by bone marrow fibrosis, anemia, splenomegaly and constitutional symptoms. Progressive bone marrow fibrosis results from aberrant megakaryopoiesis and expression of proinflammatory cytokines, both of which are heavily influenced by bromodomain and extraterminal domain (BET)-mediated gene regulation and lead to myeloproliferation and cytopenias. Pelabresib (CPI-0610) is an oral small-molecule investigational inhibitor of BET protein bromodomains currently being developed for the treatment of patients with MF. It is designed to downregulate BET target genes and modify nuclear factor kappa B (NF-κB) signaling. MANIFEST-2 was initiated based on data from Arm 3 of the ongoing Phase 2 MANIFEST study (NCT02158858), which is evaluating the combination of pelabresib and ruxolitinib in Janus kinase inhibitor (JAKi) treatment-naïve patients with MF. Primary endpoint analyses showed splenic and symptom responses in 68% and 56% of 84 enrolled patients, respectively. MANIFEST-2 (NCT04603495) is a global, Phase 3, randomized, double-blind, active-control study of pelabresib and ruxolitinib versus placebo and ruxolitinib in JAKi treatment-naïve patients with primary MF, post-polycythemia vera MF or post-essential thrombocythemia MF. The aim of this study is to evaluate the efficacy and safety of pelabresib in combination with ruxolitinib. Here we report updates from a recent protocol amendment. The MANIFEST-2 study schema is shown in Figure 1. Key eligibility criteria include a Dynamic International Prognostic Scoring System (DIPSS) score of Intermediate-1 or higher, platelet count ≥100 × 10^9/L, spleen volume ≥450 cc by computed tomography or magnetic resonance imaging, ≥2 symptoms with an average score ≥3 or a Total Symptom Score (TSS) of ≥10 using the Myelofibrosis Symptom Assessment Form v4.0, peripheral blast count <5% and Eastern Cooperative Oncology Group performance status ≤2. Patient randomization will be stratified by DIPSS risk category (Intermediate-1 vs Intermediate-2 vs High), platelet count (>200 × 10^9/L vs 100–200 × 10^9/L) and spleen volume (≥1800 cm^3 vs <1800 cm^3). Double-blind treatment (pelabresib or matching placebo) will be administered once daily for 14 consecutive days, followed by a 7-day break, which is considered one cycle of treatment. Ruxolitinib will be administered twice daily for all 21 days of the cycle. The primary endpoint is SVR35 response (≥35% reduction in spleen volume from baseline) at Week 24, and the key secondary endpoint is TSS50 response (≥50% reduction in TSS from baseline) at Week 24. Other secondary endpoints include safety, pharmacokinetics, changes in bone marrow fibrosis, duration of SVR35 response, duration of TSS50 response, progression-free survival, overall survival, conversion from transfusion dependence to independence and rate of red blood cell transfusion for the first 24 weeks. Study recruitment is ongoing; 400 patients (200 per arm) from North America, Europe, Asia and Australia will be enrolled. The study opened for enrollment in November 2020. MANIFEST-2 was initiated based on data from the ongoing Phase 2 MANIFEST study with the aim of assessing the efficacy and safety of pelabresib and ruxolitinib in JAKi treatment-naïve patients with MF. MANIFEST-2 is currently open for enrollment.

Keywords: CPI-0610, JAKi treatment-naïve, MANIFEST-2, myelofibrosis, pelabresib

Procedia PDF Downloads 161
225 Revival and Protection of Traditional Jewellery Motifs of Assam (India) over Eri Silk by Innovative Techniques

Authors: Ratna Sharma, Kaveri Dutta

Abstract:

Assam (India), the gateway to Northeast India, is mainly known for its exquisite silks and its art and craft. The state has a rich collection of traditional jewellery that is unique and exclusive to it, and this jewellery holds a special place in the hearts of Assamese women. Similarly, the handloom industry of Assam is basically silk oriented. Among the wild silks, Eri silk fabric has remained 'the poor man's silk', yet it is closely attached to Assamese society and dress for its warm quality. In view of changing market trends, fashion and consumer demands, silk is emerging as a fashion fabric both in India and abroad, but Eri silk fabric still has limited use in clothing and accessories. Hence, traditional jewellery motifs of Assam (India), restructured and redesigned over Eri silk products, have great potential to revive a declining art, generate revenue and self-employment for craftsmen, and bring recognition to the art. The information incorporated in the paper is primary, and the data were collected by a purposive sampling method. This work of art was expressed on Eri silk fabric in the form of traditional hand embroidery, as embroidery has been closely connected with individuals throughout the history of mankind and reflects the personal expression of its maker. For this study, selected traditional motifs of Assamese ornaments were used. Popular traditional Assamese jewellery includes earrings such as the exquisite Lokaparo, Keru, Thuriya, Jangphai, etc.; an array of necklaces including Golpata, Satsori, Jon biri, Bena, Gejera, Dhol biri, Doog doogi, Biri Moni, Mukuta Moni, Poalmoni, Silikha Moni and Magardana; and diversified rings including Senpata, Horinsakua, Jethinejia, Bakharpata and others. Two motifs each were selected from the necklace, earring and finger ring designs. The selected motifs were further developed into three categories - the border, the main motif and the all-over butta - followed by placement of the developed patterns on products. The products developed were stoles, scarves, purses, brooch pins and skirts for women, and ties, handkerchiefs and jackets for men. The developed products were then assessed through a survey of selected respondents. From the present study it can be observed that the embellished traditional jewellery motifs resulted in fresh and colourful patterns on the developed Eri silk products. Moreover, the motifs, which were gradually fading even within the community itself, received very good recognition as art. The embroidered Eri silk fabric also brought about a very positive change among the craftsmen.

Keywords: art and craft of Assam, Eri silk, hand embroidery, traditional Assamese jewellery motifs

Procedia PDF Downloads 616
224 A Conceptual Framework of the Individual and Organizational Antecedents to Knowledge Sharing

Authors: Muhammad Abdul Basit Memon

Abstract:

The importance of organizational knowledge sharing and knowledge management has been documented in numerous research studies in the available literature, since knowledge sharing is recognized as a founding pillar of superior organizational performance and a source of competitive advantage. Building on this, most successful organizations perceive knowledge management and knowledge sharing as a concern of high strategic importance and spend huge amounts on the effective management and sharing of organizational knowledge. However, despite some very serious endeavors, many firms fail to capitalize on the benefits of knowledge sharing because they are unaware of the individual characteristics and the interpersonal, organizational and contextual factors that influence knowledge sharing; simply, the antecedents to knowledge sharing. The extant literature offers a range of antecedents mentioned in a number of research articles and studies. Some previous studies examined antecedents to knowledge sharing in the context of inter-organizational knowledge transfer; others focused on inter- and intra-organizational knowledge sharing, and still others investigated organizational factors. Some of the organizational antecedents to knowledge sharing relate to the characteristics and underlying aspects of the knowledge being shared, e.g., the specificity and complexity of the underlying knowledge to be transferred; others relate to specific organizational characteristics, e.g., the age and size of the organization, decentralization and the absorptive capacity of the firm; and still others relate to the social relations and networks of organizations, such as social ties, trusting relationships and value systems. In the same way, some researchers have highlighted only one aspect, such as organizational commitment, transformational leadership, a knowledge-centred culture, learning and performance orientation, or social network-based relationships in organizations. The bulk of the existing research articles on antecedents to knowledge sharing has mainly discussed organizational or environmental factors. However, the focus later shifted towards the analysis of individual or personal determinants as antecedents of engagement in knowledge sharing activities, such as personality traits, attitude and self-efficacy. For example, employees' goal orientations (i.e., learning orientation or performance orientation) are an important individual antecedent of knowledge sharing behaviour. Consistent with the existing literature, therefore, the antecedents to knowledge sharing can be classified as individual and organizational. This paper is an endeavor to discuss a conceptual framework of the individual and organizational antecedents to knowledge sharing in the light of the available literature and empirical evidence. This model not only helps readers become familiar with the subject matter by presenting a holistic view of the antecedents to knowledge sharing as discussed in the literature, but can also help business managers, and especially human resource managers, find insights into the salient features of organizational knowledge sharing. Moreover, this paper can provide a ground for research students and academicians to conduct both qualitative and quantitative research and to design an instrument for conducting a survey on the topic of individual and organizational antecedents to knowledge sharing.

Keywords: antecedents to knowledge sharing, knowledge management, individual and organizational, organizational knowledge sharing

Procedia PDF Downloads 291
223 Development of a Bus Information Web System

Authors: Chiyoung Kim, Jaegeol Yim

Abstract:

Bus service is often either the main or the only form of public transportation available in cities. In metropolitan areas both subways and buses are available, whereas in medium-sized cities buses are usually the only public transportation. Bus Information Systems (BIS) provide users with the current locations of running buses, efficient routes from one place to another, points of interest around a given bus stop, the series of bus stops that make up a given route, and so on. Thanks to BIS, people do not have to waste time waiting at a bus stop, because BIS provides accurate information on bus arrival times. BIS therefore does a lot to promote the use of buses, contributing to pollution reduction and the saving of natural resources. Implementing a BIS, however, requires a huge budget, because it relies on special equipment such as roadside equipment, automatic vehicle identification and location systems, trunked radio systems, and so on. Consequently, medium and small-sized cities with low budgets cannot afford to install a BIS, even though people in these cities need the service more desperately than people in metropolitan areas. It is possible to provide BIS service at virtually no cost under the assumption that everybody carries a smartphone and that in every running bus there is at least one person with a smartphone who is willing to reveal his or her location while sitting in the bus. This assumption usually holds in the real world: the smartphone penetration rate exceeds 100% in developed countries, and there is no reason for a bus driver to refuse to reveal his or her location while driving. We have developed a mobile app that periodically reads sensor values, including GPS, and sends GPS data to the server when the bus stops or when the elapsed time since the last send attempt exceeds a threshold; the app detects the bus-stop state by examining the sensor values. We have also developed the server that receives GPS data from this app. Under the assumption that the current locations of all running buses collected by the mobile app are recorded in a database, we have further developed a web site that provides, through the Internet, all kinds of information that most BISs provide. The development environment is: OS: Windows 7 64-bit; IDE: Eclipse Luna 4.4.1, Spring IDE 3.7.0; Database: MySQL 5.1.7; Web Server: Apache Tomcat 7.0; Programming Language: Java 1.7.0_79. Given a start and a destination bus stop, the system finds the shortest path between them using the Dijkstra algorithm and then finds a convenient route by taking the number of transfers into account. For the user interface, we use Google Maps. Template classes used by the Controller, DAO, Service, and Utils classes include BUS, BusStop, BusListInfo, BusStopOrder, RouteResult, WalkingDist, Location, and so on. We are now integrating the mobile app system and the web app system.
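
As an illustration of the routing step described above, the following is a minimal Python sketch of Dijkstra's algorithm over a bus-stop graph. The authors' actual implementation is in Java with Spring; the stop names and travel times below are purely hypothetical.

# Minimal sketch (not the authors' Java/Spring code): Dijkstra over a
# hypothetical bus-stop graph. Stop names and travel times are illustrative.
import heapq

def dijkstra(graph, start, goal):
    # graph: {stop: [(neighbor, travel_minutes), ...]}
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    visited = set()
    while queue:
        d, stop = heapq.heappop(queue)
        if stop in visited:
            continue
        visited.add(stop)
        if stop == goal:
            break
        for nxt, w in graph.get(stop, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = stop
                heapq.heappush(queue, (nd, nxt))
    # Reconstruct the path from goal back to start.
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return list(reversed(path)), dist.get(goal)

# Hypothetical example: find a route from stop "A" to stop "D".
bus_graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(bus_graph, "A", "D"))  # (['A', 'C', 'B', 'D'], 8)

A production route planner would additionally weight transfers between routes, as the abstract notes.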

Keywords: bus information system, GPS, mobile app, web site

Procedia PDF Downloads 190
222 Consultation Time and Its Impact on Length of Stay in the Emergency Department

Authors: Esam Roshdy, Saleh AlRashdi, Turki Alharbi, Rawan Eskandarani, Zurina Cabilo

Abstract:

Introduction/background: Consultations in the Emergency Department (ED) constitute a major part of the daily workflow. Any delay in the consultation process has a major impact on length of stay and patient disposition and thus affects the total waiting time of patients in the ED. King Fahad Medical City in Riyadh, Saudi Arabia, is a major tertiary hospital with a high flow of patients of different categories visiting the ED. The importance of decreasing consultation time and the time to final disposition was recognized in this project as a way to improve patient flow in the department and thus overall patient disposition and outcome. Aim/Objectives: 1. To monitor consultation times for patients in the ED and their impact on length of stay. 2. To detect and assess the problems that lead to long consultation times in the ED, and to reach a target of 2 hours for final disposition of patients, in line with recognized international standards and our institutional consultation policy, with the final goal of decreasing total length of stay and thus improving patient flow in the ED. Methods: Data were collected retrospectively from 92 charts of consultations done in the ED over a two-month period and analyzed to obtain the median total consultation time. A survey was conducted among all ED staff to determine the level of knowledge about the total consultation time and compliance with the institutional policy target of 2 hours. A second data sample of 168 charts was collected after an awareness campaign and education of all ED staff about the importance of reaching the target consultation time and complying with the institutional policy. Results: We found room for improvement in our overall consultation time, more frequently with certain specialties. Our surveys showed that many ED staff were not familiar or not compliant with our consultation policy, which was not clear to everyone. Post-intervention data showed that awareness of the importance of decreasing the total consultation time, together with compliance with the targeted goal, had a huge impact on improving the time to final decision and disposition and the overall patient length of stay in the ED. Conclusion: Working on improving consultation time in the Emergency Department is a major factor in improving overall length of stay and patient flow, which in turn helps overall patient disposition and satisfaction. Plan: As a continuation of our project, we plan to focus on admission conflict cases, where more than one specialty is involved in the care of a patient. We plan to collect data on the time it takes to resolve these cases and reach final disposition, and on their impact on length of stay, department flow, and overall patient outcome and satisfaction.
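
As a simple illustration of the measurement step, the sketch below computes the median total consultation time before and after the intervention from chart timestamps. This is not the study's code; the column names and values are hypothetical.

# Illustrative sketch only (not the study's actual data or code): computing the
# median total consultation time before and after the awareness campaign from
# chart-review timestamps. Column names and values are hypothetical.
import pandas as pd

charts = pd.DataFrame({
    "phase": ["pre", "pre", "pre", "post", "post", "post"],
    "consult_requested": pd.to_datetime(
        ["2016-01-05 10:00", "2016-01-06 14:30", "2016-01-07 09:15",
         "2016-03-02 11:00", "2016-03-03 16:45", "2016-03-04 08:20"]),
    "final_disposition": pd.to_datetime(
        ["2016-01-05 13:40", "2016-01-06 17:10", "2016-01-07 12:30",
         "2016-03-02 12:35", "2016-03-03 18:05", "2016-03-04 10:00"]),
})

charts["consult_minutes"] = (
    charts["final_disposition"] - charts["consult_requested"]
).dt.total_seconds() / 60

# Median total consultation time per phase, compared against the 2-hour target.
print(charts.groupby("phase")["consult_minutes"].median())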

Keywords: consultation time, impact, length of stay, emergency department

Procedia PDF Downloads 265
221 Political Party Mobilization Strategies in Ghana: A Comparative Analysis of Three Constituencies

Authors: F. Agbele

Abstract:

Elections are core democratic institutions. Consequently, voter participation during elections is paramount to democratic governance, as it serves as a medium to legitimize authority and make the privileges of electoral democracy meaningful to citizens. To this effect, the topics of voter mobilization and turnout levels have been widely studied in advanced democracies. In young and consolidating democracies, however, the debate has revolved around the heavy reliance on ethnic and regional appeals. To the author's knowledge, studies on electoral mobilization within the African context have mostly argued that political parties use ethnic linkages to mobilize voters during elections. The literature has, however, not differentiated between countries at different stages of democratic development in their use of ethnic linkages. The question is whether the state of a country's democracy determines the strategies political parties employ to induce voter participation. In other words, do parties simply play ethno-regional cards, as strongly suggested by the literature, or do they consider an array of strategies to mobilize voters? Additionally, studies have not differentiated the impact of mobilization strategies within a country, i.e., between high- and low-turnout areas, nor distinguished between the strategies employed by incumbent and opposition parties. This paper, therefore, is a comparative analysis of voter mobilization in Ghana. It uses original survey and interview data from three constituencies: Nanton, Assin North, and Ellembelle, which are typical cases of high, average, and low turnout areas, respectively. The data were collected during fieldwork conducted from November 2016 to February 2017 and again from July to August 2017. The study found that political parties within a consolidating democracy employ a blend of strategies to ensure turnout by both party faithful and swing voters. The dominant strategies depend on whether the party is the incumbent or in opposition. While an incumbent may depend more on personalistic and clientelistic strategies, opposition parties largely use programmatic strategies, which entail making many campaign promises. Opposition parties also use clientelistic tactics, but not to the same extent as the incumbent. Similarly, within the context of this study, the use of ethnic linkages by political parties to mobilize voters was not found to be as strong as suggested in the literature. Further, location was key in determining the strategy used. In all, the consolidation of a democracy like Ghana's entails a change in the mobilization strategies used by political parties: a gradual shift from ethnic linkages to programmatic and other forms of non-programmatic strategies.

Keywords: comparative analysis, elections, mobilization strategies, voter turnout

Procedia PDF Downloads 144
220 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health

Authors: Minna Pikkarainen, Yueqiang Xu

Abstract:

The integration of digital technology with existing healthcare processes has been painfully slow: a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum in which 'correct information, in the correct hands, at the correct time' allows individuals and professionals to make better decisions, what we call the connected health approach. Currently, issues related to security, privacy, consumer consent, and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. A MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for a healthcare and preventive health data ecosystem; the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors: 1) the individual (citizen or professional controlling/using the services), i.e., the data subject; 2) services providing personal data (e.g., startups providing data collection apps or devices); 3) health and wellness services utilizing this data; and 4) services authorizing access to this data under the individual's explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models and proposes a fifth type, the MyData Blockchain Platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain, hence the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
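
As a rough illustration of the consent-gated access model described above, the following minimal Python sketch encodes the four ecosystem roles and an explicit-consent check. It is not the authors' platform; all class names and fields are assumptions made for illustration.

# Minimal illustrative sketch (not the authors' MyData blockchain platform):
# a consent record and an access check for the four ecosystem roles described
# in the abstract. All names and fields are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Consent:
    data_subject: str        # 1) individual controlling the data
    data_provider: str       # 2) service providing the personal data
    data_consumer: str       # 3) health/wellness service using the data
    authorizer: str          # 4) service authorizing the access
    scope: str               # e.g. "heart_rate", "sleep_data"
    expires: datetime

def may_access(consent: Consent, consumer: str, scope: str, now: datetime) -> bool:
    """Access is allowed only under a matching, unexpired, explicit consent."""
    return (consent.data_consumer == consumer
            and consent.scope == scope
            and now < consent.expires)

consent = Consent("citizen-001", "wearable-app", "wellness-service",
                  "mydata-operator", "heart_rate", datetime(2025, 1, 1))
print(may_access(consent, "wellness-service", "heart_rate", datetime(2024, 6, 1)))  # True

In the platform proposed in the paper, such consent records would be anchored in a decentralized ledger rather than held by a single orchestrator.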

Keywords: blockchain, health data, platform, action design

Procedia PDF Downloads 74
219 Counter-Terrorism and De-Radicalization as Soft Strategies in Combating Terrorism in Indonesia: A Critical Review

Authors: Tjipta Lesmana

Abstract:

Terrorist attacks quickly penetrated Indonesia following the downfall of the Soeharto regime in May 1998, when the reform era was officially proclaimed. Indonesia turned from an 'authoritarian state' into a 'heaven state': for the first time since 1966, the country experienced full-scale freedom of expression, including freedom of the press, and strong acknowledgement of human rights practice. Some religious extremists who had previously fled to neighboring countries to escape the security apparatus secretly returned home. They quickly consolidated power to pursue their long-held aspiration of establishing a 'Shariah Indonesia', an Indonesia based on Khilafah ideology. The first Bali bombings, which shocked the world community, occurred on 12 October 2002 in the famous tourist district of Kuta on the Indonesian island of Bali, killing 202 people (including 88 Australians, 38 Indonesians, and people from more than 20 other nationalities). In the capital, Jakarta, successive bombs were detonated at the Marriott Hotel, the Australian Embassy, the residence of the Philippine Ambassador, and the stock exchange. A 'drunken' Indonesia was far from ready to combat sudden and massive nationwide terrorist attacks. Police Detachment 88 (Densus 88), the Indonesian counter-terrorism squad, was quickly formed following the 2002 Bali bombing, and a provisional anti-terrorism act was immediately enacted to meet the urgent need to fight terrorism. Some of the Bali bombing perpetrators were executed after being sentenced by the court. However, a series of terrorist suicide attacks and a second Bali bombing again shocked the world community. The terrorism network is undoubtedly spreading nationwide, and suspicion is high that it has close connections with Al Qaeda groups. 'Afghanistan alumni' and 'Syria alumni' have even returned to Indonesia to back up local mujahidin in their fight to topple Indonesia's constitutional government and set up an Islamic state (Khilafah). Supported by massive aid from friendly nations, especially Australia and the United States, Indonesia launched large-scale operations to crush terrorism, targeting various radical groups such as JAD, JAS, and JAADI. Huge amounts of energy, money, and lives have been dedicated to this effort. Terrorism, however, remains persistently entrenched. High-ranking officials of the Detachment 88 squad and military intelligence believe that terrorism is still one of the deadliest enemies of Indonesia.

Keywords: counter-radicalization, de-radicalization, Khilafah, Union State, Al Qaeda, ISIS

Procedia PDF Downloads 152
218 Comparison between Two Techniques (Extended Source to Surface Distance and Field Alignment) of Craniospinal Irradiation (CSI) in the Eclipse Treatment Planning System

Authors: Naima Jannat, Ariful Islam, Sharafat Hossain

Abstract:

Because of the large target volume involved, craniospinal irradiation makes it challenging to achieve a uniform dose and requires different isocenters. The isocentric junction needs to be shifted after every five fractions to overcome the possibility of hot and cold spots. This study aims to evaluate Planning Target Volume coverage and organ-at-risk sparing between two techniques and shows that the Field Alignment technique does not need replanning and resetting. Planning methods for craniospinal irradiation were developed in the Eclipse treatment planning system for both the Field Alignment and the Extended Source to Surface Distance techniques, with 36 Gy in 20 fractions of 1.8 Gy prescribed. The patient was immobilized in the prone position. In the Field Alignment technique, the plan consists of half-beam-blocked parallel opposed cranial fields and a single posterior cervicospine field sharing the same isocenter, which obviates divergence matching. A further single field was created to treat the remaining lumbosacral spine. To match the inferior diverging edge of the cervicospine field with the superior diverging edge of the lumbosacral field, the field alignment option was used, which automatically matches the field edge divergence as per the field alignment rule in the Eclipse treatment planning system, with the couch set to 270°. In the Extended Source to Surface Distance technique, two parallel opposed fields were created for the cranium and a single posterior cervicospine field was created with a source-to-surface distance of 120-140 cm. Dose-volume histograms were obtained for each contoured organ and for each technique. Overall, the maximum dose to the Planning Target Volume was higher for the Extended Source to Surface Distance technique than for the Field Alignment technique. The dose to all surrounding structures was increased with the use of a single extended source-to-surface-distance field when compared to the Field Alignment technique. The average mean doses to the eye, brain stem, kidney, oesophagus, heart, liver, lung, and ovaries were, respectively, (58% & 60%), (103% & 98%), (13% & 15%), (10% & 63%), (12% & 16%), (33% & 30%), (14% & 18%), and (69% & 61%) for the Field Alignment and Extended Source to Surface Distance techniques. However, the clinical target volume at the spine junction received a less homogeneous dose with the Field Alignment technique than with the Extended Source to Surface Distance technique. We conclude that although the single-field Extended Source to Surface Distance technique delivered a more homogeneous dose, its maximum dose was higher than that of the Field Alignment technique. A further major advantage of the Field Alignment technique for craniospinal irradiation is that it does not need replanning and resetting of patients after every five fractions, and more than 95% of the Planning Target Volume received 95% of the prescribed dose in all plans, with acceptable hot spots.

Keywords: craniospinal irradiation, cranium, cervicospine, immobilize, lumbosacral spine

Procedia PDF Downloads 76
217 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach

Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis

Abstract:

The conventional methods used in the construction industry often result in significant rework, since most decisions are taken on site under the pressure of project deadlines and because of improper information flow, which leads to ineffective coordination. However, today's architecture, engineering, and construction (AEC) stakeholders demand faster and more accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. The building information modelling (BIM) concept was developed as a solution to fulfill these necessities. Even though BIM has been successfully implemented in much of the world, it is still in its early stages in Germany, since stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, small and medium-sized construction companies are still reluctant to implement a BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles of implementing BIM for prefabrication. Among the many advantages of BIM, prefabrication is chosen for this paper because it has a strong impact on both the time and cost of a construction project. The positive impact of prefabrication can be explicitly observed by project stakeholders and participants, which helps break down the scepticism of small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach executed in two case studies. The first case study covered on-site prefabrication, and the second off-site prefabrication. The first case study was planned to give the workers at the site first-hand experience with the BIM model, so that they could make full use of it as a better representation than the traditional 2D plan. Its main aim was to build confidence in the implementation of BIM models, which was followed by the execution of off-site prefabrication in the second case study. Based on the case studies, a cost and time analysis was made, and it is inferred that implementing BIM for prefabrication can reduce construction time, ensure minimal or no waste, improve accuracy, and reduce problem-solving at the construction site. It is also observed that this process requires more planning time and better communication and coordination between different disciplines such as mechanical, electrical, plumbing, and architecture, which was the major obstacle to successful implementation. This study was carried out from the perspective of small and medium-sized mechanical contracting companies for the private building sector in Germany.

Keywords: building information modelling, construction waste, pre-fabrication, small and medium-sized companies

Procedia PDF Downloads 89
216 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID

Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis

Abstract:

Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest X-rays and unpredictable staff shortages placed a huge strain on the department's workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. There could therefore be demand for a program that detects acute imaging changes compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a de-identified set of 77 normal and 77 abnormal chest X-rays from patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate, and then test itself. DICOM and PNG image formats were selected because they are lossless. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training involved teaching the network what constituted a normal study and what X-ray changes are compatible with coronavirus infection; the weightings were then modified, and the model was executed again. Training used batch sizes of 8 over 25 epochs. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest X-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. Following a discussion with our programmer, there are areas where the weighting of the algorithm can be modified in order to improve the detection rates. Given the program's high detection rate and the potential ease of implementation, it would be effective in assisting staff not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the application of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detect multiple pathologies, such as lung masses, will greatly assist both the radiology department and our colleagues by increasing workflow and detection rates.
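
As a rough sketch of the training setup reported above (binary classification, batch size 8, 25 epochs, an AUC metric), the following Python/Keras code is illustrative only; the authors' architecture is not described, so the layer choices and image size here are assumptions.

# Illustrative sketch only, assuming a Keras/TensorFlow setup; layer choices
# and image size are assumptions. It mirrors the reported training setup:
# binary classification, batch size 8, 25 epochs, AUC as a metric.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 1)),        # grayscale chest X-ray
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # COVID-compatible vs normal
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"])

# x_train/y_train and x_val/y_val would be NumPy arrays of images and labels
# built from the de-identified studies (100 training, 28 validation images).
# model.fit(x_train, y_train, validation_data=(x_val, y_val), batch_size=8, epochs=25)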

Keywords: artificial intelligence, COVID, neural network, machine learning

Procedia PDF Downloads 66
215 Social Enterprises in India: Conceptualization and Challenges

Authors: Prajakta Khare

Abstract:

A huge number of social enterprises operate in India, across all enterprise sizes and forms, addressing diverse social issues. Some cases, such as Aravind Eye Care, Narayana Hridalaya, and SEWA, have been studied extensively in the management literature and are well-known cases in social entrepreneurship. But there are many smaller social enterprises in India that are not called social enterprises per se, due to the lack of understanding of the concept. There is a lack of academic research on social entrepreneurship in India, and the term 'social entrepreneurship' is not yet widely known in the country, even among people working in this field, as this study found. The present study aims to identify the most prominent forms of social enterprise in India, the profiles of the entrepreneurs, the challenges faced, the lessons (theory and practice) emerging from their functioning, and finally the factors contributing to the enterprises' success. This is a preliminary exploratory study using primary data from 30 social enterprises in India, based on snowball sampling and qualitative analysis. Data were collected from founders of social enterprises through written structured questionnaires, open-ended interviews, and field visits to the enterprises. The sample covered enterprises across sectors such as the environment, affordable education, children's rights, rainwater harvesting, and women's empowerment. The interview questions focused, among other things, on the founders' backgrounds and motivations, qualifications, funding, challenges, understanding and perspectives on social entrepreneurship, government support, and linkages with other organizations. The interviews were conducted in three languages (Hindi, Marathi, and English) and were then translated and transcribed. 50% of the founders were women, and 65% of all founders were highly qualified, with an MBA, PhD, or MBBS. The most important challenge faced by these entrepreneurs is recruiting skilled people. When asked about their understanding of the terms social enterprise and social entrepreneur, founders gave extremely varied answers: some identified the terms with doing something good for society, while some thought that every business can be called a social enterprise. 35% of the founders were not aware of the terms social entrepreneur or social entrepreneurship; they said they could identify themselves as social entrepreneurs only after discussions with the researcher. The general perception in India is that 'NGOs are corrupt', and fighting against this perception to secure funds is another problem pointed out by some founders. Social entrepreneurs in India face unique challenges, as the political, social, and economic environment around them is rapidly changing, and getting adequate support from the government is a problem. In its subsequent stages, the research aims to clarify existing, missing, and new definitions of the term to provide deeper insights into the terminology and issues relating to social entrepreneurship in India.

Keywords: challenges, India, social entrepreneurship, social entrepreneurs

Procedia PDF Downloads 442
214 Working Towards More Sustainable Food Waste: A Circularity Perspective

Authors: Rocío González-Sánchez, Sara Alonso-Muñoz

Abstract:

Food waste implies inefficient management of the final stages of the food supply chain. Among the United Nations Sustainable Development Goals (SDGs), SDG 12.3 proposes to halve per capita food waste at the retail and consumer level and to reduce food losses. In the linear system, food waste is disposed of and, to a lesser extent, recovered or reused after consumption. With its negative effect on stocks, the current food consumption system is based on 'produce, take and dispose', which puts huge pressure on raw materials and energy resources. Greater focus on the circular management of food waste would therefore mitigate the environmental, economic, and social impact, following a Triple Bottom Line (TBL) approach, and consequently support the fulfilment of the SDGs. A mixed methodology is used. A total of 311 publications were retrieved from the Web of Science database. First, a bibliometric analysis was performed with the SciMat and VOSviewer software to visualise scientific maps based on keyword co-occurrence analysis and journal co-citation analysis; this reveals the knowledge structure of the field and helps detect research issues. Second, a systematic literature review was conducted on the most influential articles of 2020 and 2021, the most representative period under study. Third, to support the development of this field, a research agenda is proposed based on the gaps identified in circular economy and food waste management. Results reveal that the main topics relate to waste valorisation, the application of the waste-to-energy circular model, and anaerobic digestion as a route to replacing fossil fuels. The use of food waste as a source of clean energy is receiving increasing attention in the literature. There is a lack of studies on stakeholder awareness and training, and more available data would facilitate the implementation of circular principles for food waste recovery, management, and valorisation. The research agenda suggests that circularity networks with suppliers and customers need to be deepened; technological tools for the implementation of sustainable business models and greater emphasis on social aspects through educational campaigns are also required. This paper contributes to the application of circularity to food waste management by abandoning inefficient linear models, shedding light on trending topics in the field and guiding scholars towards future research opportunities.
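
To illustrate the keyword co-occurrence counting that underlies such maps (the study itself used SciMat and VOSviewer), a minimal Python sketch over hypothetical bibliographic records might look as follows.

# Illustrative sketch (the study used SciMat and VOSviewer): counting keyword
# co-occurrences across bibliographic records, the basic computation behind a
# co-occurrence map. The records below are hypothetical.
from itertools import combinations
from collections import Counter

records = [
    ["food waste", "circular economy", "valorisation"],
    ["food waste", "anaerobic digestion", "waste-to-energy"],
    ["circular economy", "waste-to-energy", "food waste"],
]

co_occurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        co_occurrence[pair] += 1

# Most frequent keyword pairs, i.e. the strongest links in the co-occurrence map.
for pair, count in co_occurrence.most_common(5):
    print(pair, count)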

Keywords: bibliometric analysis, circular economy, food waste management, future research lines

Procedia PDF Downloads 78
213 Unequal Traveling: How School District System and School District Housing Characteristics Shape the Duration of Families Commuting

Authors: Geyang Xia

Abstract:

In many countries, governments have responded to the growing demand for educational resources through school district systems, and there is substantial evidence that such systems have been effective in promoting inter-district and inter-school equity in educational resources. However, the scarcity of quality educational resources has produced varying levels of education among school districts, making it common for parents to buy a house in the district where a quality school is located, even at huge commuting costs. This is evidenced by the fact that parents in school districts with quality educational resources have longer average commute times and distances than parents in average school districts. This 'unequal traveling' under the influence of the school district system is most common in school districts at the primary level of education. It further reinforces the hierarchy of educational resources and raises issues of inequitable educational public services, education-led residential segregation, and gentrification of school district housing. Against this background, this paper takes Nanjing, a famous educational city in China, as a case study and selects the school districts where the top 10 public elementary schools are located. The study first builds a spatio-temporal behavioral trajectory dataset of households in these high-quality school districts using spatial vector data, decrypted cell phone signaling data, and census data. Then, by constructing 'house-school-work (HSW)' commuting patterns for the population of the districts with high-quality educational resources, and by classifying these HSW patterns, school districts with long commuting durations are identified. Finally, the mechanisms and patterns behind this unequal commuting are analyzed in terms of six aspects, including the centrality of school district location, functional diversity, and accessibility. The results reveal that the 'unequal commuting' of Nanjing's high-quality school districts under the influence of the school district system occurs mainly in the peripheral areas of the city, and that the schools serving these high-quality districts are mostly branches of prestigious schools located in the city's core built-up area. The centrality of school district location and the diversity of functions are the most important factors influencing unequal commuting in high-quality school districts. Based on these results, the paper proposes strategies to optimize the spatial layout of high-quality educational resources, along with corresponding transportation policy measures.
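
As a purely illustrative sketch of how the length of a house-school-work chain could be estimated from anchor-point coordinates (this is not the study's method; the coordinates and the 10 km threshold are assumptions), consider the following.

# Illustrative sketch only (not the study's method): estimating the length of a
# house-school-work (HSW) chain from anchor-point coordinates and flagging long
# commutes. The coordinates and the 10 km threshold are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def hsw_chain_km(home, school, work):
    """Length of the house -> school -> work commuting chain."""
    return haversine_km(*home, *school) + haversine_km(*school, *work)

# Hypothetical anchor points for one household in Nanjing.
home, school, work = (32.06, 118.78), (32.04, 118.80), (32.10, 118.72)
chain = hsw_chain_km(home, school, work)
print(f"HSW chain: {chain:.1f} km, long commute: {chain > 10}")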

Keywords: school-district system, high quality school district, commuting pattern, unequal traveling

Procedia PDF Downloads 66