Search results for: reduced order models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22390


190 Wind Resource Classification and Feasibility of Distributed Generation for Rural Community Utilization in North Central Nigeria

Authors: O. D. Ohijeagbon, Oluseyi O. Ajayi, M. Ogbonnaya, Ahmeh Attabo

Abstract:

This study analyzed the electricity generation potential from wind at seven sites spread across seven states of the North-Central region of Nigeria. Twenty-one years (1987 to 2007) of wind speed data at a height of 10 m were obtained from the Nigeria Meteorological Department, Oshodi. The data were subjected to different statistical tests and also compared with the two-parameter Weibull probability density function. The outcome shows that the monthly average wind speeds ranged between 2.2 m/s in November for Bida and 10.1 m/s in December for Jos. The yearly average ranged between 2.1 m/s in 1987 for Bida and 11.8 m/s in 2002 for Jos. Also, the power density for each site was determined to range between 29.66 W/m² for Bida and 864.96 W/m² for Jos. The two parameters (k and c) of the Weibull distribution were found to range between 2.3 in Lokoja and 6.5 in Jos for k, while c ranged between 2.9 m/s in Bida and 9.9 m/s in Jos. These outcomes indicate that wind speeds at Jos, Minna, Ilorin, Makurdi and Abuja are compatible with the cut-in speeds of modern wind turbines and hence may be economically feasible for wind-to-electricity generation at and above the height of 10 m. The study further assessed the potential and economic viability of standalone wind generation systems for off-grid rural communities located at each of the studied sites. A specific electric load profile was developed to suit hypothetical communities, each consisting of 200 homes, a school and a community health center. The design that would optimally meet the daily load demand with a loss of load probability (LOLP) of 0.01 was assessed, considering two stand-alone applications: wind and diesel. The diesel standalone system (DSS) was taken as the basis of comparison since the experimental locations have no connection to a distribution network. The HOMER® software optimizing tool was utilized to determine the optimal combination of system components that would yield the lowest life cycle cost.
Following the analysis for rural community utilization, a Distributed Generation (DG) analysis was carried out for each site, considering the possibility of generating wind power in the MW range in order to take advantage of Nigeria’s tariff regime for embedded generation. The DG design incorporated each community of 200 homes, supplied at no cost and offset against the excess electrical energy generated above the minimum requirement, which was available for sale to a nearby distribution grid. Wind DG systems were found suitable and viable for producing environmentally friendly energy, in terms of life cycle cost and levelised value of producing energy, at Jos ($0.14/kWh), Minna ($0.12/kWh), Ilorin ($0.09/kWh), Makurdi ($0.09/kWh), and Abuja ($0.04/kWh) at a particular turbine hub height. These outputs reveal the value retrievable from the project after the breakeven point as a function of energy consumed. Based on the results, the study demonstrated that including renewable energy in the rural development plan will accelerate the upgrade of rural communities.
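The Weibull-based power density figures above follow from the two-parameter fit: for shape k and scale c, the mean wind power density is P/A = ½·ρ·c³·Γ(1 + 3/k). A minimal sketch of that calculation, assuming a standard air density of 1.225 kg/m³ (the abstract does not state the site density):

```python
import math

AIR_DENSITY = 1.225  # kg/m^3, standard sea-level value (assumption)

def weibull_pdf(v, k, c):
    """Two-parameter Weibull probability density for wind speed v (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def mean_power_density(k, c, rho=AIR_DENSITY):
    """Mean wind power density (W/m^2): P/A = 0.5 * rho * c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# Illustrative values in the ranges reported for the studied sites
print(round(mean_power_density(k=6.5, c=9.9), 1))  # Jos-like parameters
print(round(mean_power_density(k=2.3, c=2.9), 1))  # Bida/Lokoja-like parameters
```

Note that densities computed from fitted Weibull parameters need not match those computed directly from the measured speed records, since the latter use the raw mean of v³.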

Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, North-Central Nigeria

Procedia PDF Downloads 512
189 InAs/GaSb Superlattice Photodiode Array ns-Response

Authors: Utpal Das, Sona Das

Abstract:

InAs/GaSb type-II superlattice (T2SL) mid-wave infrared (MWIR) focal plane arrays (FPAs) have recently seen rapid development. However, in small-pixel-size, large-format FPAs, the occurrence of high mesa sidewall surface leakage current is a major constraint necessitating proper surface passivation. A simple pixel isolation technique in InAs/GaSb T2SL detector arrays without the conventional mesa etching has been proposed, isolating the pixels by forming a more resistive, higher band gap material from the SL in the inter-pixel region. Here, a single-step femtosecond (fs) laser anneal of the inter-pixel T2SL regions has been used to increase the band gap between the pixels by QW intermixing and hence increase isolation between the pixels. The p-i-n photodiode structure used here consists of a 506 nm, (10 monolayer {ML}) InAs:Si (1x10¹⁸cm⁻³)/(10ML) GaSb SL as the bottom n-contact layer grown on an n-type GaSb substrate. The undoped absorber layer consists of a 1.3 µm, (10ML)InAs/(10ML)GaSb SL. The top p-contact layer is a 63 nm, (10ML)InAs:Be(1x10¹⁸cm⁻³)/(10ML)GaSb T2SL. In order to improve the carrier transport, a 126 nm graded-doped (10ML)InAs/(10ML)GaSb SL layer was added between the absorber and each contact layer. A 775 nm, 150 fs laser at a fluence of ~6 mJ/cm² is used to expose the array, with the pixel regions masked by a Ti(200nm)-Au(300nm) cap. In the inter-pixel regions, the p⁺ layer has been reactive-ion etched (RIE) using CH₄+H₂ chemistry and removed before fs-laser exposure. The fs-laser-anneal isolation improvement in 200-400 µm pixels, due to spatially selective quantum well intermixing giving a blue shift of ~70 meV in the inter-pixel regions, is confirmed by FTIR measurements. Dark currents are measured between two adjacent pixels with the Ti(200nm)-Au(300nm) caps used as contacts. The T2SL quality in the active photodiode regions masked by the Ti-Au cap is hardly affected and retains the original quality of the detector.
Although fs-laser anneal of p⁺-only-etched p-i-n T2SL diodes shows a reduction in the reverse dark current, no significant improvement is noticeable in fully RIE-etched mesa structures. Hence, for fabrication of a 128x128 array with 8 µm square pixels on a 10 µm pitch, SU8 polymer isolation after RIE pixel delineation has been used. X-n⁺ row contacts and Y-p⁺ column contacts have been used to measure the optical response of the individual pixels. The photo-response of these 8 µm pixels and of other 200 µm pixels under a 2 ns optical pulse excitation from an Optical Parametric Oscillator (OPO) shows a peak responsivity of ~0.03 A/W and 0.2 mA/W, respectively, at λ~3.7 µm. The temporal response of this detector array shows a fast response of ~10 ns followed by a typical slow decay with ringing, attributed to impedance mismatch of the connecting co-axial cables. In conclusion, response times of a few ns have been measured in 8 µm pixels of a 128x128 array. Although fs-laser anneal has been found useful in increasing the inter-pixel isolation in InAs/GaSb T2SL arrays by QW intermixing, it has not been found suitable for passivation of fully RIE-etched mesa structures with vertical walls on InAs/GaSb T2SL.
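The ~70 meV blue shift measured by FTIR can be related to a shift in cut-off wavelength through the usual photon-energy relation λ (µm) ≈ 1.24 / Eg (eV). A small sketch of that conversion; taking the ~3.7 µm response peak quoted above as the starting wavelength is a simplifying assumption:

```python
def cutoff_wavelength_um(e_gap_ev):
    """Cut-off wavelength from an effective band gap: lambda (um) = 1.23984 / Eg (eV)."""
    return 1.23984 / e_gap_ev

def band_gap_ev(cutoff_um):
    """Inverse relation: effective band gap (eV) from a cut-off wavelength (um)."""
    return 1.23984 / cutoff_um

eg = band_gap_ev(3.7)       # effective gap for a ~3.7 um response, ~0.34 eV
eg_shifted = eg + 0.070     # ~70 meV blue shift from QW intermixing
print(round(cutoff_wavelength_um(eg_shifted), 2))  # shifted cut-off, in um
```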

Keywords: band-gap blue-shift, fs-laser-anneal, InAs/GaSb T2SL, Inter-pixel isolation, ns-Response, photodiode array

Procedia PDF Downloads 152
188 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis

Authors: Kimberly Samaha

Abstract:

In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a ‘whole-tree’ approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios via the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass-to-electricity only and is built up in stages to include co-hosts of a greenhouse and a land-based shrimp farm. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location de-risks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture or aquaculture.
The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub, to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is critical to expediting new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
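The Monte Carlo treatment of investment risk described above can be illustrated with a toy NPV simulation. All figures below (capex, revenue and cost distributions, discount rate) are hypothetical placeholders, not values from the study's proprietary model:

```python
import random
import statistics

def npv(cashflows, rate):
    """Discount a list of annual cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate_npv(n_trials=10_000, years=20, capex=-5e6, rate=0.08, seed=42):
    """Monte Carlo NPV: revenue and opex drawn from assumed distributions each year."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        flows = [capex]
        for _ in range(years):
            revenue = rng.gauss(1.2e6, 0.15e6)  # hypothetical annual revenue
            opex = rng.gauss(0.6e6, 0.08e6)     # hypothetical operating cost
            flows.append(revenue - opex)
        results.append(npv(flows, rate))
    return results

results = simulate_npv()
print(f"mean NPV: {statistics.mean(results):,.0f}")
print(f"P(NPV < 0): {sum(r < 0 for r in results) / len(results):.3f}")
```

Co-location effects of the kind the study models would enter as correlated reductions in the opex draws of the co-hosted activities, narrowing the NPV distribution and shrinking the downside probability.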

Keywords: bio-economy, investment risk, circular design, economic modelling

Procedia PDF Downloads 101
187 Iran’s Sexual and Reproductive Rights Roll-Back: An Overview of Iran’s New Population Policies

Authors: Raha Bahreini

Abstract:

This paper discusses the roll-back of women’s sexual and reproductive rights in the Islamic Republic of Iran, which has come in the wake of a striking shift in the country’s official population policies. Since the late 1980s, Iran has won worldwide praise for its sexual and reproductive health services, which have contributed to a steady decline in the country’s fertility rate: from 7.0 births per woman in 1980 to 5.5 in 1988, 2.8 in 1996 and 1.85 in 2014. This is owed to a significant increase in the voluntary use of modern contraception in both rural and urban areas. In 1976, only 37 per cent of women were using at least one method of contraception; by 2014 this figure had reportedly risen to a high of nearly 79 per cent for married girls and women living in urban areas and 73.78 per cent for those living in rural areas. Such progress may soon be halted. In July 2012, Iran’s Supreme Leader Ayatollah Sayed Ali Khamenei denounced Iran’s family planning policies as an imitation of Western lifestyle. He exhorted the authorities to increase Iran’s population to 150 to 200 million (from around 78.5 million), including by cutting subsidies for contraceptive methods and dismantling the state’s Family and Population Planning Programme. Shortly thereafter, Iran’s Minister of Health and Medical Education announced the scrapping of the budget for the state-funded Family and Population Planning Programme. Iran’s Parliament subsequently introduced two bills: the Comprehensive Population and Exaltation of Family Bill (Bill 315), and the Bill to Increase Fertility Rates and Prevent Population Decline (Bill 446). Bill 446 outlaws voluntary tubectomies, which are believed to be the second most common method of modern contraception in Iran, and blocks access to information about contraception, denying women the opportunity to make informed decisions about the number and spacing of their children.
Coupled with the elimination of state funding for Iran’s Family and Population Planning Programme, the move would undoubtedly result in greater numbers of unwanted pregnancies, forcing more women to seek illegal and unsafe abortions. Bill 315 proposes various discriminatory measures in the areas of employment, divorce, and protection from domestic violence in order to promote a culture wherein wifedom and child-bearing are seen as women’s primary duties. The Bill, for example, instructs private and public entities to prioritize, in sequence, men with children, married men without children and married women with children when hiring for certain jobs. It also bans the recruitment of single individuals as family law lawyers, public and private school teachers and members of the academic boards of universities and higher education institutes. The paper discusses the consequences of these initiatives, which would, if continued, set the human rights of women and girls in Iran back by decades, leaving them with a future shaped by increased inequality, discrimination, poor health, limited choices and restricted freedoms, in breach of Iran’s international human rights obligations.

Keywords: family planning and reproductive health, gender equality and empowerment of women, human rights, population growth

Procedia PDF Downloads 307
186 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity

Authors: Julia Baker

Abstract:

Development plays a major role in stopping biodiversity loss. But the ‘silo species’ protection of legislation (where certain species are protected while many are not) means that development can be ‘legally compliant’ and still result in biodiversity loss. ‘Net Gain’ (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial because people suspect ‘offsetting’ to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on GPP. It concerns major upgrades of the UK’s transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats were retained, trees and other habitats disrupting the running or safety of transport networks could not be. Consequently, achieving NG within the transport corridor was not possible and offsetting was required. The first lessons learnt were on obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and on the institutional change necessary to embed GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges include: biodiversity cannot be measured as easily as other sustainability factors such as carbon and water, which have metrics for target-setting and measuring progress; and the mindset that biodiversity costs money and does not generate cash in return, the opposite of carbon or waste, for example, where people can see how ‘sustainability’ actions save money.
The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boosts industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting NG progress whilst ensuring that NG decision-making was based on rich ecological data. Institutional change was best achieved by supporting, mentoring and training sustainability/environmental managers, so that these ‘frontline’ staff could embed GPP within the business. The second learning was from implementing the GPP where business partnered with local governments, wildlife groups and landowners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. With this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage trade-offs between: delivering ecologically equivalent offsets or compensating for losses of one type of biodiversity by providing another; achieving NG locally to the development whilst contributing towards national conservation priorities through landscape-level planning; and not just protecting the extent and condition of existing biodiversity but ‘doing more’. The multi-sector collaborations identified practical, workable solutions to the ‘in perpetuity’ challenge. But key was strengthening linkages between biodiversity measures implemented for development and conservation work undertaken by local organizations, so that developers support NG initiatives that really count.
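Government-issued NG metrics of the kind mentioned above typically score a habitat as its area multiplied by quality factors, so that losses and gains can be compared in a common unit. A simplified, hypothetical illustration; the scores and multipliers below are invented, not the official metric:

```python
# Simplified habitat-unit calculation in the style of government-issued
# biodiversity metrics (all scores are hypothetical, not official values).
def habitat_units(area_ha, distinctiveness, condition):
    """Biodiversity units = area (ha) x distinctiveness score x condition score."""
    return area_ha * distinctiveness * condition

baseline = habitat_units(area_ha=4.0, distinctiveness=4, condition=2)    # habitat lost to development
offset = habitat_units(area_ha=10.0, distinctiveness=4, condition=3)     # habitat created as offset
net_change = offset - baseline
print(f"baseline {baseline}, offset {offset}, net gain achieved: {net_change > 0}")
```

In this framing, an offset delivers NG only when the units it creates exceed the units lost, which is why like-for-like distinctiveness and realistic condition targets matter.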

Keywords: biodiversity offsetting, development, nature conservation planning, net gain

Procedia PDF Downloads 195
185 Government and Non-Government Policy Responses to Anti-Trafficking Initiatives: A Discursive Analysis of the Construction of the Problem of Human Trafficking in Australia and Thailand

Authors: Jessica J. Gillies

Abstract:

Human trafficking is a gross violation of human rights and thus invokes a strong response, particularly throughout the global academic community. A longstanding tension in academic debate remains the question of a relationship between anti-trafficking policy and sex industry policy. In Australia, over the previous decade, many human trafficking investigations have related to the sexual exploitation of female victims, and convictions in Australia to date have often been for trafficking women from Thailand. Sex industry policy in Australia varies between states, providing a rich contextual landscape in which to explore this relationship. The purpose of this study was to deconstruct how meaning is constructed surrounding human trafficking throughout these supposedly related political discourses in Australia. To examine the discursive construction of the problem of human trafficking in relation to sex industry policy, a discursive analysis was conducted. The methodology of the study was informed by a feminist theoretical framework and included academic sources and grey literature such as organisational reports and policy statements regarding anti-trafficking initiatives. The scope of grey literature was restricted to Australian and Thai government and non-government organisation texts. The chosen methodology facilitated a qualitative exploration of the influence of feminist discourses over political discourse in this arena. The discursive analysis exposed clusters of active feminist debates interacting with sex industry policy within individual states throughout Australia. Additionally, strongly opposed sex industry perspectives were uncovered within these competing feminist frameworks. While the influence these groups may exert over policy differs, the debate constructs a discursive relationship between human trafficking and sex industry policy.
This is problematic because anti-trafficking policy is drawn to some extent from this discursive construction, thereby affecting support services for survivors of human trafficking. The discursive analysis further revealed misalignment between government and non-government priorities: Australian government anti-trafficking policy appears to favour criminal justice priorities, whereas non-government settings prioritise human rights protections. Criminal justice priorities invoke questions of legitimacy, leading to strict eligibility policy for survivors seeking support following exploitation in the Australian sex industry, undermining women’s agency and human rights. In practice, these two main findings demonstrate a construction of policy that has serious outcomes for typical survivors in Australia following a lived experience of human trafficking for the purpose of sexual exploitation. The discourses constructed by conflicting feminist arguments influence political discourses throughout Australia. The application of a feminist theoretical framework to the discursive analysis of the problem of human trafficking is unique to this study. The study has exposed a longstanding and unresolved feminist debate that has filtered throughout anti-trafficking political discourse. This study illuminates the problematic construction of anti-trafficking policy and its implications in practice for survivor support services. Australia has received international criticism for the focus on criminal justice rather than human rights throughout anti-trafficking policy discourse. The outcome of this study has the potential to inform future language and constructive conversations, contributing to knowledge around how policy affects survivors in the post-trafficking experience.

Keywords: Australia, discursive analysis, government, human trafficking, non-government, Thailand

Procedia PDF Downloads 119
184 Microsimulation of Potential Crashes as a Road Safety Indicator

Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale

Abstract:

Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and other commonly measured performance indicators; at the same time, traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced at intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a basis for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes involve roadside objects or obstacles; only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure based on the simulation of potential crash events for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of vehicle trajectories, a set of potential crashes which can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators, which can be considered “simulation-based”.
The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results that are expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be mapped for the test network after applying a threshold to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values have been used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that results are not dependent on the different energy thresholds and different parameters of the procedure. This paper introduces this new procedure and its implementation as a software package able to assess road safety while also considering potential conflicts with roadside objects. Some of the principles at the base of this model are discussed. The procedure can be applied with common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameter values that give an accurate evaluation of the risk of any traffic scenario.
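A minimal sketch of the core idea, randomly perturbing a trajectory and scoring the resulting potential crashes by DeltaV and dissipated energy, can be written as a 1-D car-following toy. All parameters are illustrative, and the paper's actual procedure is richer (2-D trajectories, roadside objects):

```python
import random

def delta_v(m1, m2, v1, v2):
    """Speed change of vehicle 1 in a perfectly plastic 1-D collision."""
    return m2 / (m1 + m2) * abs(v1 - v2)

def impact_energy(m1, m2, v1, v2):
    """Kinetic energy dissipated in a plastic collision: 0.5 * mu * dv^2."""
    mu = m1 * m2 / (m1 + m2)
    return 0.5 * mu * (v1 - v2) ** 2

def potential_crashes(x1, v1, x2, v2, n=1000, sigma=1.0, horizon=5.0, dt=0.1, seed=1):
    """Randomly perturb the speed of a following vehicle and count the fraction of
    perturbed trajectories that reach the lead vehicle within the time horizon."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(n):
        v1p = v1 + rng.gauss(0.0, sigma)  # perturbed follower speed
        p1, p2 = x1, x2
        t = 0.0
        while t < horizon:
            p1 += v1p * dt
            p2 += v2 * dt
            if p1 >= p2:                  # follower reaches the lead vehicle
                crashes += 1
                break
            t += dt
    return crashes / n

# Follower at 0 m doing 20 m/s, lead vehicle 30 m ahead doing 15 m/s
print(potential_crashes(x1=0.0, v1=20.0, x2=30.0, v2=15.0))
```

Each detected potential crash would then be weighted by its `delta_v` and `impact_energy`, which is what allows vehicle-to-vehicle and vehicle-to-obstacle events (an obstacle being a "vehicle" of effectively infinite mass and zero speed) to share one unit of measure.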

Keywords: road safety, traffic, traffic safety, traffic simulation

Procedia PDF Downloads 135
183 Smart and Active Package Integrating Printed Electronics

Authors: Joana Pimenta, Lorena Coelho, José Silva, Vanessa Miranda, Jorge Laranjeira, Rui Soares

Abstract:

In this paper, the results of R&D on an innovative food package for increased shelf-life are presented. SAP4MA aims at the development of a printed active device that enables smart packaging solutions for food preservation, targeting the extension of the shelf-life of the packed food through the controlled release of active natural antioxidant agents at the onset of the food degradation process. To do so, SAP4MA focuses on the development of active devices such as printed heaters and batteries/supercapacitors in a label format to be integrated into packaging lids during the injection molding process, promoting the passive release of natural antioxidants after the product is packed, during transportation and on the shelves, and the active release when the end-user activates the package just prior to consuming the product at home. When the active device on the lid is activated, the natural antioxidants embedded in the inner layer of the packaging lid, which is in direct contact with the headspace atmosphere of the food package, begin to be released. This approach is based on the use of active functional coatings composed of nano-encapsulated active agents (natural antioxidant species) to prevent the oxidation of lipid compounds in food by agents such as oxygen, thus preserving product quality during the shelf-life, not only when the user opens the packaging but also during the period from food packaging up until purchase by the consumer. The active systems that make up the printed smart label, the heating circuit and the battery, were developed using screen-printing technology. These systems must operate under the working conditions associated with this application. The printed heating circuit was studied using three different substrates and two different conductive inks. Inks were selected taking into consideration that the printed circuits will be subjected to high pressures and temperatures during the injection molding process.
The circuit must reach a homogeneous temperature of 40 °C over the entire area of the lid of the food tub, promoting a gradual and controlled release of the antioxidant agents. In addition, the circuit design required careful study in order to guarantee maximum performance after the injection process and to meet the specifications required by the control electronics. Furthermore, to characterize the different heating circuits, the electrical resistance resulting from the conductive ink and the circuit design, as well as the thermal behavior of circuits printed on different substrates, were evaluated. In the injection molding process, the serpentine-shaped design developed for the heating circuit was able to resolve the issues connected to the injection point; in addition, the materials used in the support and printing had high mechanical resistance to the pressure and temperature inherent in the injection process. Acknowledgment: This research has been carried out within the project “Smart and Active Packing for Margarine Product” (SAP4MA), running under the EURIPIDES Program and co-financed by COMPETE 2020 – the Operational Programme for Competitiveness and Internationalization – and under Portugal 2020 through the European Regional Development Fund (ERDF).
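As a rough illustration of the sizing involved, a printed trace's resistance follows from the ink's sheet resistance and the serpentine geometry (R = Rs · L/W, the sheet resistance times the number of squares), and the available Joule heating power from the supply voltage. The numbers below are hypothetical, not SAP4MA's actual design values:

```python
def trace_resistance(sheet_res_ohm_sq, length_mm, width_mm):
    """Resistance of a printed conductor: R = Rs * (L / W), i.e. ohms/square times squares."""
    return sheet_res_ohm_sq * (length_mm / width_mm)

def joule_power(voltage, resistance):
    """Heating power dissipated by the printed circuit: P = V^2 / R."""
    return voltage ** 2 / resistance

# Hypothetical serpentine: 600 mm total path, 2 mm wide, silver ink at 0.05 ohm/sq
r = trace_resistance(0.05, 600.0, 2.0)
p = joule_power(5.0, r)
print(round(r, 1), "ohm,", round(p, 2), "W from a 5 V supply")
```

This is why both the ink (which sets Rs) and the circuit design (which sets L/W) matter for reaching a uniform 40 °C with the voltage the label's battery can deliver.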

Keywords: smart package, printed heat circuits, printed batteries, flexible and printed electronic

Procedia PDF Downloads 109
182 Developing Primary Care Datasets for a National Asthma Audit

Authors: Rachael Andrews, Viktoria McMillan, Shuaib Nasser, Christopher M. Roberts

Abstract:

Background and objective: The National Review of Asthma Deaths (NRAD) found that asthma management and care was inadequate in 26% of cases reviewed. Major shortfalls identified were adherence to national guidelines and standards and, particularly, the organisation of care, including supervision and monitoring in primary care, with 70% of cases reviewed having at least one avoidable factor in this area. 5.4 million people in the UK are diagnosed with and actively treated for asthma, and approximately 60,000 are admitted to hospital with acute exacerbations each year. The majority of people with asthma receive management and treatment solely in primary care. This has therefore created concern that many people within the UK are receiving sub-optimal asthma care, resulting in unnecessary morbidity and risk of adverse outcome. NRAD concluded that a national asthma audit programme should be established to measure and improve the processes, organisation, and outcomes of asthma care. Objective: To develop a primary care dataset enabling extraction of information from GP practices in Wales, providing robust data from which results and lessons could be drawn to drive service development and improvement. Methods: A multidisciplinary group of experts, including general practitioners, primary care organisation representatives, and asthma patients, was formed and used as a source of governance and guidance. A review of asthma literature, guidance, and standards took place and was used to identify areas of asthma care which, if improved, would lead to better patient outcomes. Modified Delphi methodology was used to gain consensus from the expert group on which of the areas identified were to be prioritised, and an asthma patient and carer focus group was held to seek views and feedback on the areas of asthma care that were important to them.
Areas of asthma care identified by both groups were mapped to asthma guidelines and standards to inform and develop primary and secondary care datasets covering both adult and paediatric care. Dataset development consisted of expert review and a targeted consultation process in order to seek broad stakeholder views and feedback. Results: The areas of asthma care identified as requiring prioritisation by the National Asthma Audit were: (i) prescribing, (ii) asthma diagnosis, (iii) asthma reviews, (iv) Personalised Asthma Action Plans (PAAPs), and (v) primary care follow-up after discharge from hospital. Methodologies and primary care queries were developed to cover each of the areas of poor and variable asthma care identified, and the queries were designed to extract information directly from patients’ electronic records. Conclusion: This paper describes the methodological approach followed to develop primary care datasets for a National Asthma Audit. It sets out the principles behind the establishment of a National Asthma Audit programme in response to a national asthma mortality review and describes the development activities undertaken. Key process elements included: (i) mapping identified areas of poor and variable asthma care to national guidelines and standards, (ii) early engagement of experts, including clinicians and patients, in the process, and (iii) targeted consultation on the queries to provide further insight into measures that were collectable, reproducible and relevant.
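A modified-Delphi prioritisation round of the kind described can be sketched as a consensus filter over expert ratings. The rating scale, the consensus thresholds and the third item below are hypothetical, as the abstract does not give them:

```python
import statistics

# Hypothetical expert ratings (1-9 scale) from one modified-Delphi round;
# "peak-flow diaries" is an invented example item, not from the audit.
ratings = {
    "prescribing":       [8, 9, 7, 8, 9, 8],
    "asthma diagnosis":  [7, 8, 8, 6, 9, 7],
    "peak-flow diaries": [4, 5, 3, 6, 4, 5],
}

def reaches_consensus(scores, median_cutoff=7, agreement=0.75):
    """Keep an item if the median rating is high and enough experts rate it highly."""
    high_share = sum(s >= median_cutoff for s in scores) / len(scores)
    return statistics.median(scores) >= median_cutoff and high_share >= agreement

prioritised = [item for item, scores in ratings.items() if reaches_consensus(scores)]
print(prioritised)
```

Items failing the thresholds would typically be re-rated or dropped in subsequent Delphi rounds rather than discarded outright.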

Keywords: asthma, primary care, general practice, dataset development

Procedia PDF Downloads 175
181 On the Bias and Predictability of Asylum Cases

Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats

Abstract:

An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or applicants' lack of trust in the authorities. The shaky ground on which authorities must base their subjective perceptions of asylum applicants' credibility raises the question of whether adjudicators make the correct decision in all cases. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping among adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases that were initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicants’ features: their country of origin/nationality, their identified gender, their identified religion, their ethnicity, whether torture was mentioned in their case and, if so, whether the claim was supported or not, and the year the applicant entered Denmark.
In order to extract those features from the text summaries, as well as the final decision of the RAB, we applied natural language processing and regular expressions, adjusting for the Danish language. We observed interesting variations in recognition rates related to the applicants’ country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever such claims were made in the case. The presence or absence of significant variations in recognition rates does not by itself imply the presence or absence of bias in the decision-making process. However, none of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker’s fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that when using the applicant’s country of origin, religion, ethnicity and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively. These features thus appear to have potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is currently underway.
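As an illustration of the classification setup just described (categorical applicant features fed to a random forest), a minimal sketch in Python might look as follows. The data here are synthetic and the feature names and values invented; this is not the authors' code or dataset.

```python
# Illustrative sketch only: synthetic records stand in for the RAB case
# summaries. Feature names follow the abstract; values are invented.
import random

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder

random.seed(0)
countries = ["A", "B", "C"]
religions = ["r1", "r2"]
ethnicities = ["e1", "e2", "e3"]
years = list(range(2000, 2016))

# Generate a toy dataset with a weak dependence of outcome on features.
X, y = [], []
for _ in range(2000):
    c, r, e, yr = (random.choice(countries), random.choice(religions),
                   random.choice(ethnicities), random.choice(years))
    p = 0.3 + (0.2 if c == "A" else 0.0) + (0.1 if e == "e1" else 0.0)
    X.append([c, r, e, yr])
    y.append(1 if random.random() < p else 0)

# Encode the categorical columns as ordinals, split, and fit.
enc = OrdinalEncoder()
Xe = enc.fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(Xe, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On the real case summaries the reported accuracies (82% and 85%) would of course depend on the extracted features and class balance, which this toy data does not reproduce.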

Keywords: asylum adjudications, automated decision-making, machine learning, text mining

Procedia PDF Downloads 95
180 The Immunology Evolutionary Relationship between Signal Transducer and Activator of Transcription Genes from Three Different Shrimp Species in Response to White Spot Syndrome Virus Infection

Authors: T. C. C. Soo, S. Bhassu

Abstract:

Unlike vertebrates, which possess both innate and adaptive immunity, crustaceans, and in particular shrimps, possess only innate immunity. This further emphasizes the importance of innate immunity to pathogen resistance in shrimps. Under pathogenic immune challenge, different shrimp species exhibit varying degrees of immune resistance towards the same pathogen. Furthermore, even within the same shrimp species, different batches of challenged shrimps can have different strengths of immune defence. Several important pathways are activated within shrimps during pathogenic infection. One of them is the JAK-STAT pathway, which is activated during bacterial, viral and fungal infections and whose core element is the STAT (Signal Transducer and Activator of Transcription) gene. According to the central dogma of molecular biology, genomic information is transmitted in the order of DNA, RNA and protein. This study is focused on uncovering the important evolutionary patterns present within the DNA (non-coding region) and RNA (coding region). The three shrimp species involved are Macrobrachium rosenbergii, Penaeus monodon and Litopenaeus vannamei, all of which are of commercial significance. The shrimp species were challenged with the well-known penaeid shrimp virus white spot syndrome virus (WSSV), which can cause serious lethality. Tissue samples were collected at time intervals of 0 h, 3 h, 6 h, 12 h, 24 h, 36 h and 48 h. The DNA and RNA samples were then extracted from the hepatopancreas tissue samples using conventional kits. The PCR technique, together with designed STAT gene conserved primers, was utilized for identification of the STAT coding sequences from RNA-converted cDNA samples, with subsequent characterization using various bioinformatics approaches including the Ramachandran plot, ProtParam and SWISS-MODEL.
The varying levels of immune STAT gene activation for the three shrimp species during WSSV infection were confirmed using the qRT-PCR technique. For each sample, three biological replicates, each with three technical replicates, were used for qRT-PCR. On the other hand, the DNA samples were important for uncovering the structural variations within the genomic region of the STAT gene, which would greatly assist in understanding functional variations of the STAT protein. The partially-overlapping primers technique was used for sequencing the genomic region. Evolutionary inferences and event predictions were then conducted through the Bayesian inference method using all the acquired coding and non-coding sequences. This was supplemented by the construction of conventional phylogenetic trees using the maximum likelihood method. The results showed that adaptive evolution caused STAT gene sequence mutations between different shrimp species, which led to an evolutionary divergence event. Subsequently, the divergent sites were correlated to the differing expression of the STAT gene. Ultimately, this study assists in understanding innate immune variability among shrimp species and in selecting disease-resistant shrimps for breeding purposes. A deeper understanding of STAT gene evolution from the perspectives of both purifying and adaptive selection not only provides better immunological insight among shrimp species, but can also serve as a useful reference for immunological studies in humans and other model organisms.
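Relative gene expression from qRT-PCR data of the kind described is commonly computed with the 2^-ΔΔCt (Livak) method. The sketch below assumes that method and uses invented Ct values and a hypothetical housekeeping reference gene; it is not taken from the study.

```python
# Hedged sketch of relative expression via the 2^-ddCt (Livak) method,
# as commonly applied to qRT-PCR data. All Ct values below are invented.
from statistics import mean

# Mean Ct per biological replicate (three technical replicates already averaged).
stat_ct_challenged = [22.1, 22.4, 22.0]   # STAT gene, WSSV-challenged, e.g. 24 h
ref_ct_challenged = [18.0, 18.2, 17.9]    # hypothetical housekeeping gene
stat_ct_control = [25.0, 25.3, 24.8]      # STAT gene, unchallenged control (0 h)
ref_ct_control = [18.1, 18.0, 18.2]

# dCt normalizes the target to the reference; ddCt compares challenged vs control.
d_ct_challenged = mean(stat_ct_challenged) - mean(ref_ct_challenged)
d_ct_control = mean(stat_ct_control) - mean(ref_ct_control)
dd_ct = d_ct_challenged - d_ct_control

# A negative ddCt (earlier amplification) means up-regulation.
fold_change = 2 ** (-dd_ct)
print(f"fold change: {fold_change:.1f}")
```

With these invented numbers the challenged samples amplify about 2.8 cycles earlier after normalization, i.e. roughly a seven-fold up-regulation.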

Keywords: gene evolution, JAK-STAT pathway, immunology, STAT gene

Procedia PDF Downloads 150
179 Microplastics in Urban Environment – Coimbra City Case Study

Authors: Inês Amorim Leitão, Loes van Shaick, António Dinis Ferreira, Violette Geissen

Abstract:

Plastic pollution is a growing concern worldwide: plastics are commercialized in large quantities, and they take a long time to degrade. In the environment, plastic fragments into microplastics (<5 mm), which have been found in all environmental compartments at different locations. Microplastics contribute to environmental pollution in water, air and soil and are linked to human health problems. The progressive increase of the population living in cities has aggravated the pollution problem worldwide, especially in urban environments. Urban areas represent a strong source of pollution through roads, industrial production, wastewater, landfills, etc. Pollutants such as microplastics are expected to be transported diffusely from these sources through different pathways, such as wind and rain. It is therefore very complex to quantify, control and treat these pollutants, which the European Commission has designated as current problematic issues. Green areas are pointed out by experts as natural filters for contaminants in cities, through the retention capacity of their vegetation. These spaces thus have the capacity to control the load of pollutants transported. This study investigates the spatial distribution of microplastics in urban soils of different land uses; their transport through atmospheric deposition, wind erosion, runoff and streams; and their deposition on vegetation such as grass and tree leaves in the urban environment. Coimbra, a medium-large city located in central Portugal, is the case study. All the soil, sediment, water and vegetation samples were collected in Coimbra and were later analyzed in the Wageningen University & Research laboratory. Microplastics were extracted through density separation using a sodium phosphate solution (~1.4 g cm−3) and filtration, visualized under a stereo microscope and identified using the µ-FTIR method. Microplastic particles were found in all the different samples.
In terms of soils, the highest concentrations of microplastics were found in green parks, followed by landfills and industrial sites, with the lowest concentrations in forest and pasture land uses. Atmospheric deposition and streams after rainfall events seem to represent the strongest pathways of microplastics. Tree leaves can retain microplastics on their surfaces. Small leaves, such as needle leaves, seem to present higher amounts of microplastics per leaf area than larger leaves. Rainfall episodes seem to reduce the concentration of microplastics on leaf surfaces, which suggests that microplastics are washed down to lower levels of the tree or to the soil. Once in soil, different types of microplastics can be transported to the atmosphere through wind erosion. Grass seems to present high concentrations of microplastics, and enlarging the grass cover reduces both the amount of microplastics in soil and the amount moved from the ground to the atmosphere by wind erosion. This study shows that vegetation can help to control the transport and dispersion of microplastics. In order to control the entry and the concentration of microplastics in the environment, especially in cities, it is essential to define and evaluate nature-based land-use scenarios that consider the role of green urban areas in filtering small particles.

Keywords: microplastics, cities, sources, pathways, vegetation

Procedia PDF Downloads 59
178 Intelligent Cooperative Integrated System for Road Safety and Road Infrastructure Maintenance

Authors: Panagiotis Gkekas, Christos Sougles, Dionysios Kehagias, Dimitrios Tzovaras

Abstract:

This paper presents the architecture of the “Intelligent cooperative integrated system for road safety and road infrastructure maintenance towards 2020” (ODOS2020) advanced infrastructure, which implements a number of cooperative ITS applications based on Internet of Things and infrastructure-to-vehicle (V2I) technologies with the purpose of enhancing the active road safety level of vehicles through the provision of a fully automated V2I environment. The primary objective of the ODOS2020 project is to contribute not only to increased road safety but also to the optimization of time for maintenance of road infrastructure. The integrated technological solution presented in this paper addresses all types of vehicles and requires minimum vehicle equipment. Thus, ODOS2020 comprises a low-cost solution, which is one of its main benefits. The system architecture includes an integrated notification system that transmits personalized information on road, traffic, and environmental conditions, so that drivers receive real-time and reliable alerts concerning upcoming critical situations. The latter include potential dangers on the road, such as obstacles or road works ahead, extreme environmental conditions, etc., but also informative messages, such as information on upcoming tolls and their charging policies. At the core of the system architecture lies an integrated sensorial network embedded in special road infrastructures (strips) that constantly collects and transmits wirelessly information about passing vehicles’ identification, type, speed, and moving direction, along with other traffic information, in combination with environmental conditions, road wear monitoring, and predictive maintenance data. Data collected from the sensors are transmitted by the roadside infrastructure, which supports a variety of communication technologies, such as the ITS-G5 (IEEE 802.11p) wireless network and Internet connectivity through cellular networks (3G, LTE).
All information can be forwarded to both vehicles and Traffic Management Center (TMC) operators, either directly through the ITS-G5 network or to smart devices with Internet connectivity through cloud-based services. Therefore, through its functionality, the system can send personalized notifications, information, warnings, and recommendations for upcoming events to both road users and TMC operators. In the course of the ODOS2020 project, a pilot operation was conducted to allow drivers of both C-ITS-equipped and non-equipped vehicles to experience the provided added-value services. For non-equipped vehicles, the provided information is transmitted to a smartphone application. Finally, the ODOS2020 system and infrastructure are appropriate for installation in urban, rural, and highway environments. The paper presents the various parts of the system architecture and concludes by outlining the various challenges that had to be overcome during its design, development, and deployment in a real operational environment. Acknowledgments: Work presented in this paper was co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation (call RESEARCH–CREATE–INNOVATE) under contract no. Τ1EDK-03081 (project ODOS2020).
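For illustration only, a detection event of the kind the sensor strips might report upstream could be serialized as follows. The field names and schema here are assumptions for the sketch, not the ODOS2020 specification.

```python
# Purely illustrative sketch of a roadside-unit event message; the field
# names are hypothetical and not taken from the ODOS2020 project.
import json
from dataclasses import asdict, dataclass


@dataclass
class VehicleDetection:
    strip_id: str          # which in-road sensor strip produced the event
    timestamp_ms: int      # epoch milliseconds
    vehicle_class: str     # e.g. "car", "truck"
    speed_kmh: float
    heading_deg: float     # moving direction
    road_temp_c: float     # environmental reading bundled with traffic data


event = VehicleDetection("strip-042", 1700000000000, "car", 87.5, 182.0, 14.3)
payload = json.dumps(asdict(event))
print(payload)
```

A roadside unit could forward such a payload either over ITS-G5 to equipped vehicles or via cellular backhaul to cloud services and TMC operators, matching the two delivery paths described above.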

Keywords: infrastructure to vehicle, intelligent transportation systems, internet of things, road safety

Procedia PDF Downloads 123
177 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation

Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira

Abstract:

The comprehension of features of the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs) - mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts) - is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications are expected in their original energy spectra, anisotropy, and chemical compositions due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for magnetic fields in the universe is very large, because the field strengths and especially their orientations have large uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulation of charged particles traveling through a simulated magnetized universe is the most straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: an accurate numerical modeling of charged-particle diffusion in magnetic fields, and an accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to impose that the particle tracking method conserve the particle’s total energy and that energy changes result only from interactions with background photons. Hence, special attention should be paid to computational effects.
Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have a uniform intensity of 1 nG (nanogauss). Moreover, each cell has its field orientation chosen randomly, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities. The smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses have been treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. We show, for a particle with a typical energy of interest, the performance of the integration method in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy. Additionally, we plot the distance amplification relative to rectilinear propagation as a function of the traveled distance and the particle's magnetic rigidity (without energy losses), and as a function of the particle's energy (with energy losses), in order to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
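A propagation step with the property the text demands (energy conserved exactly in a pure magnetic field) can be sketched by rotating the velocity direction about the local field direction, since a rotation preserves the speed by construction. This is a minimal illustration, not the authors' integrator; the numbers (a 10 EeV proton in a 1 nG cell) and the approximate Larmor-radius prefactor are assumptions for the sketch.

```python
# Minimal sketch (not the authors' code): one propagation step rotates the
# velocity about the local field direction, so the speed -- and hence the
# energy, in the absence of losses -- is conserved exactly by construction.
import math


def rotate(v, axis, angle):
    """Rodrigues rotation of vector v about a unit axis; preserves |v|."""
    c, s = math.cos(angle), math.sin(angle)
    dot = sum(a * b for a, b in zip(axis, v))
    cross = (axis[1] * v[2] - axis[2] * v[1],
             axis[2] * v[0] - axis[0] * v[2],
             axis[0] * v[1] - axis[1] * v[0])
    return tuple(c * vi + s * cri + (1 - c) * dot * ai
                 for vi, cri, ai in zip(v, cross, axis))


# Hypothetical numbers: a 10 EeV proton in a 1 nG field.
E_eV = 1.0e19
B_gauss = 1.0e-9
# Standard approximate relation: r_L ~ 1.08 Mpc * (E / EeV) / (Z * B / nG)
# for an ultrarelativistic proton (Z = 1).
r_larmor_mpc = 1.08 * (E_eV / 1.0e18) / (B_gauss / 1.0e-9)

step_mpc = 0.1                    # step much smaller than the 1 Mpc cell
angle = step_mpc / r_larmor_mpc   # deflection angle per step
b_hat = (0.0, 0.0, 1.0)           # one cell's (randomly chosen) orientation

v = (1.0, 0.0, 0.0)               # unit velocity: direction only
for _ in range(1000):
    v = rotate(v, b_hat, angle)

speed = math.sqrt(sum(vi * vi for vi in v))
print(f"|v| after 1000 steps: {speed:.12f}")
```

Tracking the drift of `speed` away from 1 over many steps is exactly the kind of relative-error check on the integration method that the abstract describes.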

Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy

Procedia PDF Downloads 127
176 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure

Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek

Abstract:

Background: A number of approaches to assessing the performance of health systems have been proposed so far. Nonetheless, they lack a consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but these are not free of controversies and have not produced a commonly accepted consensus. Aim of the study: On the basis of the WHO and OECD approaches, we decided to develop our own methodology to assess the performance of health systems in Central and Eastern European countries. We applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of change in terms of health system outcomes. Methods: Data were collected for the 25-year period after the fall of communism, subdivided into different post-reform stages. The datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and a Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of changes over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary and Slovenia, while the lowest values were found in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in terms of the range of SMO value changes throughout the analyzed period. The dynamics of change are high in the case of Estonia and Latvia, moderate in the case of Poland, Hungary, the Czech Republic, Croatia, Russia and Moldova, and small when it comes to Belarus, Ukraine, Macedonia, Lithuania, and Georgia.
This information reveals the dynamics of fluctuation of the measured value over time, yet a high dynamic range does not necessarily mean that an improvement occurred in a given country. In reality, some of the countries moved along the scale with differing effects. Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost ground to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving their outcomes. Conclusions: Countries that decided to implement comprehensive health reforms achieved a positive result in terms of further improvements in health system efficiency levels. Moreover, a higher level of efficiency during the initial transition period generally positively determined the subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcome improvement are highly diverse across countries. The instrument we propose constitutes a useful tool for evaluating the effectiveness of reform processes in post-communist countries, but more studies are needed to identify the factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
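Zeroed unitarization, on which the SMO is based, is a min-max rescaling that maps stimulant indicators (higher is better) and destimulant indicators (lower is better) onto a common [0, 1] scale before aggregation. A minimal sketch follows; the two indicators and all country values are invented for illustration, and a simple unweighted mean stands in for whatever aggregation the study used.

```python
# Sketch of zeroed unitarization (min-max normalization) used to build a
# synthetic measure from heterogeneous indicators. All values are invented.
def zeroed_unitarization(values, stimulant=True):
    """Rescale to [0, 1]; for destimulants (lower is better), invert the scale."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    if stimulant:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]


# Three hypothetical countries; two example indicators.
life_exp = [71.0, 76.0, 74.0]   # stimulant: higher is better
inf_mort = [12.0, 4.0, 7.0]     # destimulant: lower is better

z_life = zeroed_unitarization(life_exp, stimulant=True)
z_mort = zeroed_unitarization(inf_mort, stimulant=False)

# Synthetic measure: mean of the normalized indicators per country.
smo = [(a + b) / 2 for a, b in zip(z_life, z_mort)]
print([round(s, 2) for s in smo])
```

The inversion for destimulants is the key step: without it, a country with low infant mortality would be penalized rather than rewarded in the composite score.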

Keywords: health system outcomes, health reforms, health system assessment, health system evaluation

Procedia PDF Downloads 290
175 A Constructionist View of Projects, Social Media and Tacit Knowledge in a College Classroom: An Exploratory Study

Authors: John Zanetich

Abstract:

Designing an educational activity that encourages inquiry and collaboration is key to engaging students in meaningful learning. Educational Information and Communications Technology (EICT) plays an important role in facilitating cooperative and collaborative learning in the classroom. EICT also facilitates students’ learning and development of the critical thinking skills needed to solve real-world problems. Projects and activities based on constructivism encourage students to embrace complexity as well as find relevance and joy in their learning. Constructivism also enhances the students’ capacity for creative and responsible real-world problem solving. Classroom activities based on constructivism offer students an opportunity to develop the higher-order thinking skills of defining problems and identifying solutions. Participating in a classroom project is an activity for both acquiring experiential knowledge and applying new knowledge to practical situations. It also provides an opportunity for students to integrate new knowledge into a skill set using reflection. Classroom projects can be developed around a variety of learning objects, including social media, knowledge management and learning communities. The construction of meaning through project-based learning is an approach that encourages interaction and problem-solving activities. Projects require active participation, collaboration and interaction to reach the agreed-upon outcomes. Projects also serve to externalize the invisible cognitive and social processes taking place in the activity itself and in the student experience. This paper describes a classroom project designed to elicit interactions by helping students to unfreeze existing knowledge, create new learning experiences, and then refreeze the new knowledge. Constructivists believe that students construct their own meaning through active engagement and participation as well as interactions with others.
Knowledge management can therefore be used to guide the exchange of both tacit and explicit knowledge in interpersonal interactions between students and to guide the construction of meaning. This paper uses an action research approach to the development of a classroom project and describes the use of technology, social media and the active use of tacit knowledge in the college classroom. In this project, a closed-group Facebook page becomes the virtual classroom where interaction is captured and measured using engagement analytics. In the virtual learning community, the principles of knowledge management are used to identify the process and components of the infrastructure of the learning process. The project identifies class members' interests and measures student engagement in a learning community by analyzing regular posting on the Facebook page. These posts are used to foster and encourage interactions, reflect a student’s interest, and serve as reaction points from which viewers of the post convert the explicit information in the post to tacit knowledge. The data were collected over an academic year and were provided, in part, by Google Analytics reports on Facebook and self-reports of posts by members. The results support the use of active tacit knowledge activities, knowledge management and social media to enhance the student learning experience and help create the knowledge that students will use to construct meaning.

Keywords: constructivism, knowledge management, tacit knowledge, social media

Procedia PDF Downloads 215
174 Global Winners versus Local Losers: Globalization Identity and Tradition in Spanish Club Football

Authors: Jim O'Brien

Abstract:

Contemporary global representation and consumption of La Liga across a plethora of media platforms has had significant implications for the historical, political and cultural developments that shaped Spanish club football. It has established and reinforced a hierarchy in which a small number of teams belong, or aspire to belong, to a cluster of global elite clubs seeking to imitate the blueprint of the English Premier League in respect of corporate branding and marketing, in order to secure a global fan base through success and exposure in La Liga itself and in the Champions League. The synthesis between globalization, global sport and the status of high-profile clubs has created radical change within the folkloric iconography of Spanish football. The main focus of this paper is to critically evaluate the consequences of globalization for the rich tapestry at the core of the game’s distinctive history in Spain. The seminal debate underpinning the study considers whether the divergent aspects of globalization have acted as a malevolent force, eroding tradition, causing financial meltdown and reducing much of the fabric of club football to the status of bystanders, or have promoted a renaissance of these traditions, securing their legacies through new fans and audiences. The study draws on extensive sources on the history, politics and culture of Spanish football, in both English and Spanish. It also uses primary and archive material derived from interviews and fieldwork undertaken with scholars, media professionals and club representatives in Spain. The paper has four main themes. Firstly, it contextualizes the key historical, political and cultural forces that shaped the landscape of Spanish football from the late nineteenth century. The seminal notions of region, locality and cultural divergence are pivotal to this discourse.
The study then considers the relationship between football, ethnicity and identity as a barometer of continuity and change, suggesting that tradition is being reinvented and re-framed to reflect the shifting demographic and societal patterns within the Spanish state. Following on from this, consideration is given to the paradoxical function of ‘El Clásico’ and the dominant duopoly of the FC Barcelona–Real Madrid axis, both in eroding tradition in the global nexus of football’s commodification and in preserving historic political rivalries. To most global consumers of La Liga, the mega-spectacle and hyperbole of ‘El Clásico’ is the essence of Spanish football, with cultural misrepresentation and distortion catapulting the event to the global media audience. Finally, the paper examines La Liga as a sporting phenomenon in which elite clubs, cult managers and galácticos serve as commodities on the altar of mass consumption in football’s global entertainment matrix. These processes accentuate a homogenous mosaic of cultural conformity which obscures local, regional and national identities and paradoxically fuses the global with the local to maintain the distinctive hue of La Liga, as witnessed by the extraordinary successes of Atlético Madrid and FC Eibar in recent seasons.

Keywords: Spanish football, globalization, cultural identity, tradition, folklore

Procedia PDF Downloads 301
173 Bridging Educational Research and Policymaking: The Development of Educational Think Tank in China

Authors: Yumei Han, Ling Li, Naiqing Song, Xiaoping Yang, Yuping Han

Abstract:

Educational think tanks are widely regarded as a significant part of a nation’s soft power for promoting the scientific and democratic level of educational policy making, and they play the critical role of bridging educational research in higher institutions and educational policy making. This study explores the concept, functions and significance of educational think tanks in China, and conceptualizes a three-dimensional framework to analyze the approaches for transforming research-based higher institutions into effective educational think tanks that serve educational policy making nationwide. Since 2014, the Ministry of Education of P.R. China has been promoting the strategy of developing a new type of educational think tank in higher institutions, and this strategy was put into the agenda of the 13th Five-Year Plan for National Education Development released in 2017. In this context, a growing number of scholars have conducted studies putting forth strategies for promoting the development and transformation of new educational think tanks to serve the educational policy making process. Based on literature synthesis, policy text analysis, and analysis of theories about the policy making process and the relationship between educational research and policy making, this study constructed a three-dimensional conceptual framework to address the following questions: (a) what are the new features of educational think tanks in the new era compared with traditional think tanks; (b) what are the functional objectives of the new educational think tanks; (c) what are the organizational patterns and mechanisms of the new educational think tanks; and (d) in what ways can traditional research-based higher institutions be developed or transformed into think tanks that effectively serve the educational policy making process?
The authors adopted a case study approach on five influential education policy study centers affiliated with top higher institutions in China and applied the three-dimensional conceptual framework to analyze their functional objectives, organizational patterns, and the academic pathways through which researchers contribute to the development of think tanks serving the education policy making process. Data were mainly collected through interviews with center administrators, leading researchers and academic leaders in the institutions. Findings show that: (a) higher-institution-based think tanks mainly function for multi-level objectives, providing evidence, theoretical foundations, strategies, or evaluation feedback for critical problem solving or policy making at the national, provincial, and city/county levels; (b) higher-institution-based think tanks organize various types of research programs over different time spans to serve different phases of policy planning, decision making, and policy implementation; (c) in order to transform research-based higher institutions into educational think tanks, the institutions must promote a paradigm shift towards issue-oriented field studies, large-scale data mining and analysis, empirical studies, and trans-disciplinary research collaborations; and (d) the five cases showed distinctive features in their ways of constructing think tanks, yet they also exposed obstacles and challenges, such as the independence of the think tanks, the discourse shift from academic papers to consultancy reports for policy makers, weaknesses in empirical research methods, and a lack of experience in trans-disciplinary collaboration. The authors finally put forth implications for think tank construction in China and abroad.

Keywords: education policy-making, educational research, educational think tank, higher institution

Procedia PDF Downloads 158
172 Municipalities as Enablers of Citizen-Led Urban Initiatives: Possibilities and Constraints

Authors: Rosa Nadine Danenberg

Abstract:

In recent years, bottom-up urban development has started growing as an alternative to conventional top-down planning. Citizens and communities are initiating small-scale interventions in large numbers, seemingly forming a trend. As a result, more and more cities are witnessing not only the growth of but also an interest in these initiatives, as they bear the potential to reshape urban spaces. Such alternative city-making efforts cause new dynamics in urban governance, with inevitable consequences for controlled city planning and its administration. The emergence of enabling relationships between top-down and bottom-up actors signals an increasingly common urban practice. Various case studies show that an enabling relationship is possible; yet how it can be optimally realized remains underexamined. The seemingly growing worldwide phenomenon of ‘municipal bottom-up urban development’ therefore necessitates an adequate governance structure. As such, the aim of this research is to contribute knowledge on how municipalities can enable citizen-led urban initiatives from a governance innovation perspective. Empirical case-study research in Stockholm and Istanbul, based on interviews with founders of four citizen-led urban initiatives and one municipal representative in each city, provided valuable insights into the possibilities and constraints of enabling practices. On the one hand, diverging outcomes emphasize the strongly oppositional features of the two cases (Stockholm and Istanbul). Firstly, the two cities’ characteristics are drastically different. Secondly, the ideologies and motives for the initiatives to emerge vary widely. Thirdly, the major constraints on citizen-led urban initiatives in relating to the municipality are considerably different. Two types of municipal organizational structures produce different underlying mechanisms that give rise to these constraints. The first municipal organizational structure is steered by bureaucracy (Stockholm).
It produces an administrative division that brings up constraints such as a lack of responsibility, transparency and continuity among municipal representatives. The second structure is dominated by municipal politics and governmental hierarchy (Istanbul). It produces informality, a lack of transparency and a fragmented civil society. In order to cope with the constraints produced by both types of organizational structures, the initiatives have adjusted their organization to the municipality’s underlying structures. On the other hand, this paper has in fact also reached a rather unifying conclusion. Interestingly, the suggested possibilities for an enabling relationship point to converging new urban governance arrangements. This could imply that for the two varying types of municipal organizational structures there is a single suitable governance arrangement: the combination of a neighborhood council with a municipal guide, with allowance for the initiatives to adopt a politicizing attitude. This combination in particular appears key to redeeming the varying constraints. A municipal guide steers the initiatives through bureaucratic struggles and is supported by co-production methods, while balancing out municipal politics. A neighborhood council that is politically neutral and run by local citizens can function as an umbrella for citizen-led urban initiatives. What is crucial is that it should cater for a more entangled relationship between municipalities and initiatives, with enhanced involvement of the initiatives in decision-making processes and limited involvement of the prevailing constraints pointed out in this research.

Keywords: bottom-up urban development, governance innovation, Istanbul, Stockholm

Procedia PDF Downloads 219
171 Sustainable Crop Production: Greenhouse Gas Management in Farm Value Chain

Authors: Aswathaman Vijayan, Manish Jha, Ullas Theertha

Abstract:

Climate change and global warming have become an issue for both developed and developing countries and are perhaps the biggest threat to the environment. We at ITC Limited believe that a company’s performance must be measured by its Triple Bottom Line contribution to building economic, social and environmental capital. This Triple Bottom Line strategy focuses on embedding sustainability in business practices, investing in social development and adopting a low carbon growth path with a cleaner environment approach. The Agri Business Division - ILTD operates in the tobacco crop growing regions of the Andhra Pradesh and Karnataka states of India. The Agri value chain of the company comprises two distinct phases: the first phase is agricultural operations undertaken by ITC-trained farmers, and the second phase is industrial operations, which include marketing and processing of the agricultural produce. This research work covers the greenhouse gas (GHG) management strategy of ITC in the agricultural operations undertaken by the farmers. The agriculture sector adds considerably to global GHG emissions through the use of carbon-based energies, the use of fertilizers and other farming operations such as ploughing. In order to minimize the impact of farming operations on the environment, ITC has taken a big leap in implementing systems and processes to reduce the GHG impact in the farm value chain by partnering with the farming community. The company has undertaken a unique three-pronged approach to GHG management in the farm value chain: 1) GHG inventory at the farm value chain: different sources of GHG emission in the farm value chain were identified and quantified for the baseline year, as per the IPCC guidelines for greenhouse gas inventories.
The major sources of emission identified are emissions due to nitrogenous fertilizer application during seedling production and in the main field, emissions due to diesel usage by farm machinery, emissions due to fuel consumption, and emissions due to the burning of crop residues. 2) Identification and implementation of technologies to reduce GHG emission: various methodologies and technologies were identified for each GHG emission source and implemented at the farm level. The identified methodologies are reducing the consumption of chemical fertilizer at the farm through site-specific nutrient recommendation; use of a sharp shovel for land preparation to reduce diesel consumption; implementation of energy conservation technologies to reduce fuel requirements; and avoiding the burning of crop residue by incorporating it in the main field. These methodologies were implemented at the farm level, and the GHG emission was quantified to understand the reduction achieved. 3) Social and farm forestry for CO₂ sequestration: in addition, the company encouraged social and farm forestry on wastelands to convert them into green cover. The plantations are carried out with fast-growing trees, viz. Eucalyptus, Casuarina, and Subabul, at the rate of 10,000 ha of land per year. The above approach minimized a considerable amount of GHG emission in the farm value chain, benefiting farmers, the community, and the environment as a whole. In addition, the CO₂ stock created by the social and farm forestry program has made the farm value chain environment-friendly.
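The inventory step (1) above multiplies activity data by emission factors. A minimal sketch of that IPCC-style calculation; the factors and activity data below are hypothetical placeholders, not ITC's or the IPCC's published values:

```python
# Activity-based GHG inventory sketch: emission = activity data x emission factor.
# All numbers are illustrative placeholders.
EMISSION_FACTORS = {           # kg CO2e per unit of activity
    "diesel_litre": 2.7,       # farm machinery fuel
    "urea_kg_n": 5.8,          # nitrogenous fertilizer (per kg N applied)
    "residue_burnt_kg": 1.5,   # open burning of crop residue
}

activity = {                   # hypothetical per-hectare activity data
    "diesel_litre": 60.0,
    "urea_kg_n": 120.0,
    "residue_burnt_kg": 800.0,
}

# Total baseline emission for this hectare, summed over all sources
total_kg_co2e = sum(activity[k] * EMISSION_FACTORS[k] for k in activity)
```

Reduction measures such as site-specific nutrient recommendation would then lower the activity values, and the inventory is recomputed to quantify the reduction.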

Keywords: CO₂ sequestration, farm value chain, greenhouse gas, ITC limited

Procedia PDF Downloads 295
170 Quality of Life Among People with Mental Illness Attending a Psychiatric Outpatient Clinic in Ethiopia: A Structural Equation Model

Authors: Wondale Getinet Alemu, Lillian Mwanri, Clemence Due, Telake Azale, Anna Ziersch

Abstract:

Background: Mental illness is one of the most severe, chronic, and disabling public health problems and affects patients' quality of life (QoL). Improving QoL for people with mental illness is one of the most critical steps in stopping disease progression and avoiding complications of mental illness. Therefore, we aimed to assess QoL and its determinants in patients with mental illness in outpatient clinics in Northwest Ethiopia in 2023. Methods: A facility-based cross-sectional study was conducted among people with mental illness in an outpatient clinic in Ethiopia. The sampling interval was determined by dividing the total number of study participants with a follow-up appointment during the data collection period (2,400) by the total sample size of 638, with the starting point selected by the lottery method. The interviewer-administered WHOQOL-BREF-26 tool was used to measure the QoL of people with mental illness. The domains and health-related quality of life (HRQoL) were identified. The indirect and direct effects of variables were calculated using structural equation modeling with SPSS-28 and Amos-28 software. A p-value of < 0.05 and a 95% CI were used to evaluate statistical significance. Results: A total of 636 (99.7%) participants responded and completed the WHOQOL-BREF questionnaire. The mean score of overall HRQoL of people with mental illness in the outpatient clinic was 49.6 ± 10 SD. The highest QoL was found in the physical health domain (50.67 ± 9.5 SD), and the lowest mean QoL was found in the psychological health domain (48.41 ± 10 SD). Rural residence, drug nonadherence, suicidal ideation, not getting counseling, moderate or severe subjective severity, lack of family participation in patient care, and a family history of mental illness had indirect negative effects on HRQoL. Alcohol use and the psychological health domain had a direct positive effect on QoL.
Furthermore, objective severity of illness, low self-esteem, and a family history of mental illness had both direct and indirect effects on QoL. Sociodemographic factors (residence, educational status, marital status), social support-related factors (self-esteem, lack of family participation in patient care), substance use factors (alcohol use, tobacco use), and clinical factors (objective and subjective severity of illness, not getting counseling, suicidal ideation, number of episodes, comorbid illness, family history of mental illness, poor drug adherence) directly and indirectly affected QoL. Conclusions: In this study, the QoL of people with mental illness was poor, with the psychological health domain being the most affected. Sociodemographic factors, social support-related factors, drug use factors, and clinical factors directly and indirectly affect QoL through the mediator variables of the physical, psychological, social relation, and environmental health domains. In order to improve the QoL of people with mental illness, we recommend that emphasis be given to addressing mental illness, including the development of policy and practice drivers that address the above-identified factors.
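The direct/indirect decomposition reported here follows standard path-analysis algebra: an indirect effect is the product of the paths through the mediator, and the total effect is the direct path plus the indirect one. A minimal sketch with hypothetical coefficients (not the study's Amos estimates):

```python
# Path-analysis effect decomposition with hypothetical standardized coefficients.
# Example structure: predictor -> mediator -> QoL, plus a direct predictor -> QoL path.
a = -0.30        # predictor -> mediator (e.g. suicidal ideation -> psychological domain)
b = 0.50         # mediator -> overall QoL
c_prime = -0.10  # direct effect of the predictor on QoL

indirect_effect = a * b                 # effect transmitted through the mediator
total_effect = c_prime + indirect_effect
```

With these illustrative values the indirect effect is -0.15 and the total effect -0.25, showing how a variable can harm QoL mostly through a mediating domain.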

Keywords: quality of life, mental wellbeing, mental illness, mental disorder, Ethiopia

Procedia PDF Downloads 80
169 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment despite its simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for investigating it. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique in separated flows are scarce in the literature, and most evaluations of the Reynolds number effect in separated flows come from numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were generally low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow over a backward-facing step properly, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; and d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
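The moving block bootstrap mentioned above estimates the uncertainty of statistics computed from autocorrelated velocity records by resampling contiguous blocks rather than individual samples. A minimal sketch of the idea on synthetic data (not the authors' code; the block length and record are illustrative):

```python
import numpy as np

def moving_block_bootstrap_se(x, block_len, n_boot=500, seed=0):
    """Standard error of the mean of a correlated series via the
    moving block bootstrap: resample overlapping blocks of length
    block_len, concatenate to the original length, take the mean."""
    rng = np.random.default_rng(seed)
    n = len(x)
    starts = np.arange(n - block_len + 1)      # all admissible block starts
    n_blocks = int(np.ceil(n / block_len))
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks)
        resampled = np.concatenate([x[s:s + block_len] for s in chosen])[:n]
        boot_means[b] = resampled.mean()
    return boot_means.std(ddof=1)

# Synthetic autocorrelated "velocity" record (illustrative only)
rng = np.random.default_rng(1)
u = 0.5 + 0.01 * np.cumsum(rng.normal(size=2000))
se_mean_u = moving_block_bootstrap_se(u, block_len=50)
```

Choosing the block length longer than the integral time scale of the turbulence preserves the serial correlation that an ordinary bootstrap would destroy.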

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 148
168 The Development of Congeneric Elicited Writing Tasks to Capture Language Decline in Alzheimer Patients

Authors: Lise Paesen, Marielle Leijten

Abstract:

People diagnosed with probable Alzheimer's disease suffer from an impairment of their language capacities; a gradual impairment which affects both their spoken and written communication. Our study aims at characterising the language decline in DAT patients with the use of congeneric elicited writing tasks. Within these tasks, a descriptive text has to be written based upon images with which the participants are confronted. A randomised set of images allows us to present the participants with a different task on every encounter, thus avoiding a recognition effect in this iterative study. This method is a revision of previous studies, in which participants were presented with a larger picture depicting an entire scene. In order to create the randomised set of images, existing pictures were adapted following strict criteria (e.g. frequency, AoA, colour, ...). The resulting data set contained 50 images belonging to several categories (vehicles, animals, humans, and objects). A pre-test was constructed to validate the created picture set; most images had been used before in spoken picture naming tasks, hence the same reaction times ought to be triggered in the typed picture naming task. Once validated, the effectiveness of the descriptive tasks was assessed. First, the participants (n=60 students, n=40 healthy elderly) performed a typing task, which provided information about the typing speed of each individual. Secondly, two descriptive writing tasks were carried out, one simple and one complex. The simple task contains 4 images (1 animal, 2 objects, 1 vehicle) and only contains elements with high frequency, a young AoA (<6 years), and fast reaction times. Slow reaction times, a later AoA (≥ 6 years) and low frequency were criteria for the complex task. This task uses 6 images (2 animals, 1 human, 2 objects and 1 vehicle). The data were collected with the keystroke logging programme Inputlog.
Keystroke logging tools log and time-stamp keystroke activity to reconstruct and describe text production processes. The data were analysed using a selection of writing process and product variables, such as general writing process measures, detailed pause analysis, linguistic analysis, and text length. As a covariate, the intrapersonal interkey transition times from the typing task were taken into account. The pre-test indicated that the new images led to similar or even faster reaction times compared to the original images. All the images were therefore used in the main study. The produced texts of the description tasks were significantly longer compared to previous studies, providing sufficient text and process data for analyses. Preliminary analysis shows that the number of words produced differed significantly between the healthy elderly and the students, as did the mean length of production bursts, even though both groups needed the same time to produce their texts. However, the elderly took significantly more time to produce the complex task than the simple task. Nevertheless, the number of words per minute remained comparable between the simple and complex tasks. The pauses within and before words varied, even when taking personal typing abilities (obtained from the typing task) into account.
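The interkey transition times and pause analysis described above can be sketched from a time-stamped keystroke log as follows; the log, the 500 ms threshold, and all values are hypothetical illustrations, not Inputlog output or the study's settings:

```python
from statistics import mean

# Hypothetical keystroke log: (character, timestamp in milliseconds)
log = [("T", 0), ("h", 180), ("e", 320), (" ", 900), ("c", 1020),
       ("a", 1140), ("t", 1290), (" ", 2400), ("s", 2550)]

# Interkey transition times between consecutive keystrokes
ikis = [t2 - t1 for (_, t1), (_, t2) in zip(log, log[1:])]

# A pause is commonly operationalized as any transition above a fixed threshold
PAUSE_THRESHOLD_MS = 500
pauses = [iki for iki in ikis if iki > PAUSE_THRESHOLD_MS]

mean_iki = mean(ikis)   # a per-writer covariate, as in the typing task above
```

Long transitions before words (here the 1110 ms gap before "s") are the kind of pause location the analysis distinguishes from within-word transitions.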

Keywords: Alzheimer's disease, experimental design, language decline, writing process

Procedia PDF Downloads 274
167 Temporal and Spacial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

The computation of wind-induced forces on bluff bodies, e.g. light flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the usage of small-scale devices such as guide-vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization as the vorticity is strongly localized, implicit accounting for the free-space boundary conditions typical of this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in regions of interest. In this paper, different strategies are presented to extend the conventional VPM so as to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control the global and local numbers of particles. Finally, these methods are applied to a test case, and the improvements in the efficiency as well as the accuracy of the proposed extensions are presented, along with their relevant applications.
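The O(Np²) cost referred to above comes from the all-pairs particle interaction: every particle's velocity is the sum of the velocities induced by every other particle. A minimal 2-D sketch of that induced-velocity sum with a smoothing core, as an illustration of the scaling only (not the authors' implementation; the core size and test values are arbitrary):

```python
import numpy as np

def induced_velocity(pos, gamma, delta=0.05):
    """Velocity induced at each particle by all particles: the O(Np^2)
    interaction of a 2-D vortex particle method.
    pos: (N, 2) particle positions; gamma: (N,) circulations;
    delta: smoothing core size to regularize the point-vortex kernel."""
    n = len(pos)
    vel = np.zeros_like(pos)
    for i in range(n):                         # outer loop: target particles
        dx = pos[i, 0] - pos[:, 0]
        dy = pos[i, 1] - pos[:, 1]
        r2 = dx**2 + dy**2 + delta**2          # smoothed squared distance
        # 2-D Biot-Savart kernel: u = -G*dy/(2*pi*r^2), v = G*dx/(2*pi*r^2)
        vel[i, 0] = np.sum(-gamma * dy / (2.0 * np.pi * r2))
        vel[i, 1] = np.sum(gamma * dx / (2.0 * np.pi * r2))
    return vel

# Tiny example: a co-rotating vortex pair (illustrative values)
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
gamma = np.array([1.0, 1.0])
vel = induced_velocity(pos, gamma)
```

The two nested sweeps over all particles are exactly what sub-stepping and local re-discretization aim to spend more cleverly, by concentrating work where small devices demand resolution.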

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 262
166 Baseline Data for Insecticide Resistance Monitoring in Tobacco Caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae) on Cole Crops

Authors: Prabhjot Kaur, B.K. Kang, Balwinder Singh

Abstract:

The tobacco caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae), is an agriculturally important pest species. S. litura has a wide host range of approximately 150 recorded plant species worldwide. In Punjab, this pest attains sporadic status primarily on cauliflower, Brassica oleracea (L.). This pest destroys vegetable crops and particularly prefers the family Cruciferae. However, it is also observed feeding on other crops such as arbi, Colocasia esculenta (L.), mung bean, Vigna radiata (L.), sunflower, Helianthus annuus (L.), cotton, Gossypium hirsutum (L.), castor, Ricinus communis (L.), etc. Larvae of this pest completely devour the leaves of infested plants, resulting in huge crop losses ranging from 50 to 70 per cent. Indiscriminate and continuous use of insecticides has contributed to the development of insecticide resistance in insects and caused environmental degradation as well. Moreover, baseline data on the toxicity of newer insecticides would help in understanding the level of resistance developed in this pest, and any possible cross-resistance therein could be assessed in advance. Therefore, the present studies on the development of resistance in S. litura against four new-chemistry insecticides (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad) were carried out in the Toxicology laboratory, Department of Entomology, Punjab Agricultural University, Ludhiana, Punjab, India during the year 2011-12. Various stages of S. litura (eggs, larvae) were collected from four different locations (Malerkotla, Hoshiarpur, Amritsar and Samrala) in Punjab. Because resistance develops in third instars of lepidopterous pests, larval bioassays were conducted on thirty third-instar larvae to estimate the response of field populations of S. litura under laboratory conditions at 25 ± 2 °C and 65 ± 5 per cent relative humidity.
The leaf dip bioassay technique with diluted insecticide formulations, as recommended by the Insecticide Resistance Action Committee (IRAC), was performed in the laboratory with seven to ten treatments depending on the insecticide class. LC50 values were estimated by probit analysis after correction for control mortality and were used to calculate resistance ratios (RR). The LC50 values estimated for emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad are 0.081, 0.088, 0.380 and 4.00 parts per million (ppm) against pest populations collected from Malerkotla; 0.051, 0.060, 0.250 and 3.00 ppm for Amritsar; 0.002, 0.001, 0.0076 and 0.10 ppm for Samrala; and 0.000014, 0.00001, 0.00056 and 0.003 ppm for the Hoshiarpur population, respectively. The LC50 values for populations collected from these four locations were in the order Malerkotla > Amritsar > Samrala > Hoshiarpur for all the insecticides tested (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad). Based on the LC50 values obtained, emamectin benzoate (0.000014 ppm) was found to be the most toxic among all the tested populations, followed by chlorantraniliprole (0.00001 ppm), indoxacarb (0.00056 ppm) and spinosad (0.003 ppm), respectively. The pairwise correlation coefficients of the LC50 values indicated a lack of cross-resistance to emamectin benzoate, chlorantraniliprole, spinosad and indoxacarb in populations of S. litura from Punjab. These insecticides may prove to be promising substitutes for the effective control of insecticide-resistant populations of S. litura in Punjab state, India.
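Probit estimation of an LC50 and the resulting resistance ratio can be sketched as follows; the dose-mortality data below are hypothetical illustrations, not the study's bioassay results:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical dose-mortality data (dose in ppm, fraction of larvae killed)
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0])
mortality = np.array([0.05, 0.20, 0.50, 0.80, 0.97])

# Probit analysis: regress the probit transform of mortality on log10(dose);
# the LC50 is the dose at which the fitted line crosses 50% mortality
# (probit = 0). Control-mortality correction (e.g. Abbott's formula) would
# be applied to the raw counts before this step.
x = np.log10(dose)
y = norm.ppf(mortality)                 # probit transform
slope, intercept = np.polyfit(x, y, 1)  # simple least-squares fit
lc50 = 10 ** (-intercept / slope)

# Resistance ratio: LC50 of a field population relative to the most
# susceptible population's LC50 (here a hypothetical 0.01 ppm)
rr = lc50 / 0.01
```

Note that a full probit analysis weights the fit by the binomial variance at each dose; the unweighted least-squares line above is only the core idea.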

Keywords: Spodoptera litura, insecticides, toxicity, resistance

Procedia PDF Downloads 342
165 Comparisons of Drop Jump and Countermovement Jump Performance for Male Basketball Players with and without Low-Dye Taping Application

Authors: Chung Yan Natalia Yeung, Man Kit Indy Ho, Kin Yu Stan Chan, Ho Pui Kipper Lam, Man Wah Genie Tong, Tze Chung Jim Luk

Abstract:

Excessive foot pronation is a well-known risk factor for knee and foot injuries such as patellofemoral pain, patellar and Achilles tendinopathy, and plantar fasciitis. Low-Dye taping (LDT) is not uncommon among basketball players for controlling excessive foot pronation for pain control and injury prevention. The primary potential benefits of using LDT include providing additional support to the medial longitudinal arch and restricting excessive midfoot and subtalar motion in weight-bearing activities such as running and landing. Meanwhile, the restrictions provided by the rigid tape may also potentially limit functional joint movements and sports performance. Coaches and athletes need to weigh the potential benefits and harmful effects before deciding whether applying the LDT technique is worthwhile. However, the influence of LDT on basketball-related performance such as explosive and reactive strength is not well understood. Therefore, the purpose of this study was to investigate changes in drop jump (DJ) and countermovement jump (CMJ) performance before and after LDT application in collegiate male basketball players. In this within-subject crossover study, 12 healthy male basketball players (age: 21.7 ± 2.5 years) with at least 3 years of regular basketball training experience were recruited. The navicular drop (ND) test was adopted for screening, and only those with excessive pronation (ND ≥ 10 mm) were included. Participants with a recent lower limb injury history were excluded. Recruited subjects performed the ND, DJ (from a platform of 40 cm height) and CMJ (without arm swing) tests in taped and non-taped conditions in counterbalanced order. The reactive strength index (RSI) was calculated as the flight time divided by the measured ground contact time. For the DJ and CMJ tests, the best of three trials was used for analysis.
The difference between taped and non-taped conditions for each test was further calculated as a standardized effect ± 90% confidence interval (CI) with clinical magnitude-based inference (MBI). A paired-samples t-test showed a significant decrease in ND (-4.68 ± 1.44 mm; 95% CI: -3.77, -5.60; p < 0.05), while MBI demonstrated a most likely beneficial and large effect (standardized effect: -1.59 ± 0.27) in the LDT condition. For the DJ test, significant increases in both flight time (25.25 ± 29.96 ms; 95% CI: 6.22, 44.28; p < 0.05) and RSI (0.22 ± 0.22; 95% CI: 0.08, 0.36; p < 0.05) were observed. In the taped condition, MBI showed a very likely beneficial and moderate effect (standardized effect: 0.77 ± 0.49) on flight time, a possibly beneficial and small effect (standardized effect: -0.26 ± 0.29) on ground contact time, and a very likely beneficial and moderate effect (standardized effect: 0.77 ± 0.42) on RSI. No significant difference in CMJ was observed (95% CI: -2.73, 2.08; p > 0.05). For basketball players with pes planus, applying LDT could substantially support the foot by elevating the navicular height and potentially provide acute beneficial effects on reactive strength performance. Meanwhile, no significant harmful effect on CMJ was observed. Basketball players may consider applying LDT before games or training to enhance reactive strength performance. However, since the observed effects in this study may not generalize to players without excessive foot pronation, further studies on players with normal foot arch or navicular height are recommended.
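The RSI used above is simply flight time divided by ground contact time, so a longer flight and a shorter contact both raise it. A minimal sketch with hypothetical trial values (not the study's measurements):

```python
def reactive_strength_index(flight_time_ms, contact_time_ms):
    """RSI from a drop-jump trial: flight time over ground contact time,
    both in the same unit (milliseconds here, so the ratio is unitless)."""
    return flight_time_ms / contact_time_ms

# Hypothetical 40 cm drop-jump trials, for illustration only
rsi_untaped = reactive_strength_index(520.0, 260.0)
rsi_taped = reactive_strength_index(545.0, 250.0)
improvement = rsi_taped - rsi_untaped
```

With these illustrative numbers, a 25 ms gain in flight time plus a 10 ms cut in contact time lifts the RSI from 2.00 to 2.18, mirroring the direction of the reported taped-condition effects.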

Keywords: flight time, pes planus, pronated foot, reactive strength index

Procedia PDF Downloads 155
164 Application of the Carboxylate Platform in the Consolidated Bioconversion of Agricultural Wastes to Biofuel Precursors

Authors: Sesethu G. Njokweni, Marelize Botes, Emile W. H. Van Zyl

Abstract:

An alternative strategy to bioethanol production is to examine the degradability of biomass in a natural system such as the rumen of mammals. This anaerobic microbial community has higher cellulolytic activities than microbial communities from other habitats and degrades cellulose to produce volatile fatty acids (VFA), methane and CO₂. VFAs have the potential to serve as intermediate products for electrochemical conversion to hydrocarbon fuels. In vitro mimicking of this process would be more cost-effective than bioethanol production, as it does not require chemical pre-treatment of biomass, a sterile environment or added enzymes. The strategies of the carboxylate platform and co-cultures of a bovine ruminal microbiota from cannulated cows were combined in order to investigate and optimize the bioconversion of agricultural biomass (apple and grape pomace, citrus pulp, sugarcane bagasse and triticale straw) to high-value VFAs as intermediates for biofuel production in a consolidated bioprocess. Optimisation of reactor conditions was investigated using five different ruminal inoculum concentrations (5, 10, 15, 20 and 25%) with the pH fixed at 6.8 and the temperature at 39 °C. The ANKOM 200/220 fiber analyser was used to analyse the in vitro neutral detergent fiber (NDF) disappearance of the feedstuffs. Fresh and cryo-frozen (5% DMSO and 50% glycerol for 3 months) rumen cultures were tested for retention of fermentation capacity and durability in 72 h fermentations in 125 ml serum vials, using a FURO medical solutions 6-valve gas manifold to induce anaerobic conditions. Fermentation of apple pomace, triticale straw, and grape pomace showed no significant difference (P > 0.05) between the 15 and 20% inoculum concentrations in total VFA yield.
However, high-performance liquid chromatographic separation showed a significant difference (P < 0.05) in acetic acid yield between the two inoculum concentrations, with the 20% inoculum concentration being the optimum at 4.67 g/l. NDF disappearance of 85% in 96 h and a total VFA yield of 11.5 g/l in 72 h (A/P ratio = 2.04) indicated that apple pomace was the optimal feedstuff for this process. The DMSO-stored (82% NDF disappearance and 10.6 g/l VFA) and glycerol-stored (90% NDF disappearance and 11.6 g/l VFA) rumen cultures also showed similar degradability of apple pomace, with no treatment effect differences compared to a fresh rumen control (P > 0.05). This lack of treatment effects was a positive sign, indicating no difference between the stored samples and the fresh rumen control. Retention of fermentation capacity in the preserved cultures suggests that their metabolic characteristics were preserved owing to the resilience and redundancy of the rumen culture. The extent of degradability and the VFA yield achieved within a short span were similar to those of other carboxylate platforms with longer run times. This study shows that, by virtue of faster rates and a high extent of degradability, small-scale alternatives to bioethanol such as rumen microbiomes and other natural fermenting microbiomes can be employed to enhance the feasibility of large-scale biofuel implementation.
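The NDF disappearance percentages and the A/P ratio reported above are simple ratios of measured quantities. A minimal sketch with hypothetical masses and VFA concentrations (not the study's raw data):

```python
def ndf_disappearance(initial_ndf_g, residual_ndf_g):
    """In vitro NDF disappearance: the percentage of the initial
    neutral detergent fiber degraded during fermentation."""
    return 100.0 * (initial_ndf_g - residual_ndf_g) / initial_ndf_g

# Hypothetical digestion: 10.0 g NDF in, 1.5 g NDF left after fermentation
disappearance_pct = ndf_disappearance(10.0, 1.5)

# Acetate-to-propionate (A/P) ratio from hypothetical VFA concentrations (g/l)
acetate, propionate = 4.6, 2.3
ap_ratio = acetate / propionate
```

A higher A/P ratio indicates a fermentation shifted toward acetate, the pattern the abstract reports for the optimal apple-pomace substrate.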

Keywords: agricultural wastes, carboxylate platform, rumen microbiome, volatile fatty acids

Procedia PDF Downloads 130
163 Regulation Effect of Intestinal Microbiota by Fermented Processing Wastewater of Yuba

Authors: Ting Wu, Feiting Hu, Xinyue Zhang, Shuxin Tang, Xiaoyun Xu

Abstract:

As a by-product of yuba, the processing wastewater of yuba (PWY) contains many bioactive components such as soybean isoflavones, soybean polysaccharides and soybean oligosaccharides; it is a good source of prebiotics and has high potential for value-added utilization. PWY fermented with Lactobacillus plantarum can be considered a potential prebiotic preparation capable of regulating the balance of the intestinal microbiota. In this study, Lactobacillus plantarum was first used to ferment PWY to increase its content of active components and its antioxidant activity. The health effect of the fermented processing wastewater of yuba (FPWY) was then measured in vitro. Finally, microencapsulation was applied to improve the sustained release of FPWY, reduce the loss of active components during digestion and improve the activity of FPWY. The main results are as follows: (1) FPWY presented good antioxidant capacity, with a DPPH free radical scavenging ability of 0.83 ± 0.01 mmol Trolox/L, an ABTS free radical scavenging ability of 7.47 ± 0.35 mmol Trolox/L and an iron ion reducing ability of 1.11 ± 0.07 mmol Trolox/L. Compared with non-fermented processing wastewater of yuba (NFPWY), there was no significant difference in total soybean isoflavone content, but the content of glucoside soybean isoflavones decreased and that of aglycone soybean isoflavones increased significantly. Fermentation also effectively reduced soluble monosaccharides, disaccharides and oligosaccharides such as glucose, fructose, galactose, trehalose, stachyose, maltose, raffinose and sucrose. (2) FPWY significantly enhanced the growth of beneficial bacteria such as Bifidobacterium, Ruminococcus and Akkermansia, significantly inhibited the growth of harmful bacteria such as E. coli, regulated the structure of the intestinal microbiota, and significantly increased the content of short-chain fatty acids such as acetic acid, propionic acid, butyric acid and isovaleric acid.
The higher amount of lactic acid in the gut can be further broken down into short-chain fatty acids. (3) To improve the stability of the soybean isoflavones in FPWY during digestion, sodium alginate and chitosan were used as wall materials, and the FPWY freeze-dried powder was embedded by the sharp-hole (orifice) coagulation bath method. The results show that at a core-to-wall ratio of 3:1, a chitosan concentration of 1.5%, a sodium alginate concentration of 2.0% and a calcium concentration of 3%, the embedding rate was 53.20%. In simulated in vitro digestion, the cumulative release rate of the microcapsules reached 59.36% at the end of gastric digestion and 82.90% at the end of intestinal digestion; the microcapsules therefore showed good sustained-release performance, with the core material almost completely released. Structural analysis of the FPWY microcapsules shows that they have good mechanical properties: hardness, springiness, cohesiveness, gumminess, chewiness and resilience were 117.75 ± 0.21 g, 0.76 ± 0.02, 0.54 ± 0.01, 63.28 ± 0.71 g·sec, 48.03 ± 1.37 g·sec and 0.31 ± 0.01, respectively. Compared with unembedded FPWY, the infrared spectra confirmed that the FPWY freeze-dried powder was successfully embedded in the microcapsules.
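The two efficiency figures quoted above (a 53.20% embedding rate, and 59.36%/82.90% cumulative release) are simple mass ratios; a minimal sketch is given below. The mass values passed in are hypothetical, chosen only to reproduce the reported percentages.

```python
# Sketch of the two efficiency ratios quoted in the abstract.
# Only the resulting percentages come from the abstract; the
# milligram figures used in the demo calls are hypothetical.

def embedding_rate(encapsulated_mg: float, total_loaded_mg: float) -> float:
    """Percentage of core material retained in the microcapsules."""
    return 100.0 * encapsulated_mg / total_loaded_mg

def release_rate(released_mg: float, encapsulated_mg: float) -> float:
    """Cumulative percentage released during simulated digestion."""
    return 100.0 * released_mg / encapsulated_mg

# Hypothetical masses chosen to reproduce the reported percentages.
emb = embedding_rate(53.20, 100.0)    # embedding rate: 53.20%
gastric = release_rate(59.36, 100.0)  # end of gastric stage: 59.36%
intest = release_rate(82.90, 100.0)   # end of intestinal stage: 82.90%
print(emb, gastric, intest)
```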

Keywords: processing wastewater of yuba, Lactobacillus plantarum, intestinal microbiota, microcapsule

Procedia PDF Downloads 76
162 Foregrounding Events in Modern Sundanese: The Pragmatics of Particle-to-Active Voice Marking Shift

Authors: Rama Munajat

Abstract:

Discourse information levels may be viewed from either a background-foreground distinction or a multi-level perspective, and cross-linguistic studies in this area suggest that each information level is marked by a specific linguistic device. In this sense, Sundanese, spoken in Indonesia's West Java Province, further differentiates background and foreground information into ordinary and significant types. This paper reports an ongoing shift from particle marking to active-voice marking in the way Sundanese signals foregrounding events. The shift relates to decades of contact with Bahasa Indonesia (Indonesia's official language) and to the linguistic compatibility between the two surface marking strategies. The data analyzed comprise three groups of short stories in both Sundanese and Bahasa Indonesia (Indonesian), published in three periods: before 1945, 1965-2006, and 2016-2019. In the first group of Sundanese data, forward-moving events appear predominantly in particle KA (Kecap Anteuran, word-accompanying) constructions, where KA stands for a set of particles that co-occur with a special group of verbs. The second group, however, shows that foregrounded events are more frequently described in active-voice forms with subject-predicate (SP) order, and the third offers still stronger evidence for the use of the SP structure. As for the Indonesian data, foregrounding events in the first group occur in verb-initial and passive-voice constructions, while in the second and third they appear more frequently in active-voice (subject-predicate) structures. This marking shift suggests a structural influence from Indonesian, stemming from generational differences among the authors of the Sundanese short stories, particularly in their education and language backgrounds.
The first group of short stories, published before 1945 (before Indonesia's independence from the Dutch), was written by native speakers of Sundanese who spoke Indonesian as a foreign language and went through the Dutch education system. The second group of authors, by contrast, represents a generation of Sundanese native speakers who spoke Indonesian as a second language. Finally, the third group consists of authors who are bilingual speakers of both Sundanese and Indonesian; the data suggest that the last two groups of authors completed the Indonesian education system. Accordingly, the use of subject-predicate sequences to denote foregrounding events began to appear more frequently in the second group and became dominant in the third. The coded data also show that the cohesion, coherence and pragmatic purposes of particle KA constructions are preserved intact in their active-voice counterparts. For instance, foregrounding events in particle KA constructions occur in sentence-initial KA and pre-verbal KA forms, whereas those in the active voice appear in subject-predicate (SP) and zero-subject active-voice patterns. Cross-language data further demonstrate that the sentence-initial KA and SP active-voice structures each contain an overt noun phrase (NP) co-referential with one of the entities introduced in the preceding context. Similarly, the pre-verbal KA and zero-subject active-voice patterns have a deleted noun phrase unambiguously referable to the single entity previously mentioned. The presence or absence of an NP thus serves a pragmatic strategy of placing prominence on topic/given and comment/new information, respectively.
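The corpus methodology described above amounts to tallying coded foregrounding constructions (particle KA vs. active-voice SP) per publication period. A minimal sketch of such a tally is shown below; the counts are invented purely for illustration, since the study's actual frequencies are not given in the abstract.

```python
# Hypothetical tally of coded foregrounding constructions per period.
# Labels follow the abstract (KA = particle construction, SP = active
# voice subject-predicate); all counts are invented for illustration.
from collections import Counter

coded = [
    ("pre-1945", "KA"), ("pre-1945", "KA"), ("pre-1945", "SP"),
    ("1965-2006", "KA"), ("1965-2006", "SP"), ("1965-2006", "SP"),
    ("2016-2019", "SP"), ("2016-2019", "SP"), ("2016-2019", "SP"),
]

by_period = Counter(coded)
for period in ("pre-1945", "1965-2006", "2016-2019"):
    ka = by_period[(period, "KA")]
    sp = by_period[(period, "SP")]
    print(f"{period}: KA={ka} SP={sp} (SP share {sp / (ka + sp):.0%})")
```

With real coded data in place of the invented tuples, the rising SP share across periods would quantify the marking shift the paper describes.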

Keywords: discourse analysis, foregrounding marking, pragmatics, language contact

Procedia PDF Downloads 138
161 Artificial Intelligence in Management Simulators

Authors: Nuno Biga

Abstract:

Artificial intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information, find patterns in big data, learn from context analysis, optimize operations, make predictions sensitive to each specific situation and support data-driven decision making. The introduction of an 'artificial brain' into an organization also enables learning from the complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive "Virtual Assistant" (VA) that provides users with useful suggestions for operations such as: a) relocating workstations to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that they can be acted upon quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper builds on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of alternative strategic management actions.
The pilot project incorporates results from 12 editions of the BIGAME A&E held between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between decisions taken and results obtained. In the Assisted-BIGAMES, the systemic analysis and interpretation of data are powered by a computer application called the "BIGAMES Virtual Assistant" (VA) that players can use during the game. Throughout the game, each participant continually considers which decisions to make in order to win the competition. To this end, the role of each team's VA is to guide the players towards more effective decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice and rely on their own experience, capability and knowledge to support their own decisions. Preliminary results show that the introduction of the VA leads to faster learning of the decision-making process. The facilitator, designated the "Serious Game Controller" (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to those previously provided by the VA, ensuring a higher degree of robustness in decision making. Additionally, all the information should be jointly analyzed and assessed by each player, who is expected to add "Emotional Intelligence", an essential component absent from the machine learning process.
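One of the VA behaviours listed above, real-time bottleneck identification (item b), can be sketched as a simple KPI rule: flag the workstation with the highest utilisation once it crosses a threshold. The station names, utilisation figures and 85% threshold below are all hypothetical; the abstract does not disclose the actual BIG-AI rules.

```python
# Minimal sketch of one Virtual Assistant rule described in the paper:
# flag the workstation with the highest utilisation as the bottleneck
# and recommend action when it exceeds a threshold. All names, figures
# and the 0.85 threshold are hypothetical illustrations.

def bottleneck_recommendation(utilisation: dict, threshold: float = 0.85):
    """Return (station, advice) for the most utilised station."""
    station = max(utilisation, key=utilisation.get)
    if utilisation[station] >= threshold:
        advice = (f"Bottleneck at '{station}': add capacity "
                  "or make a resource polyvalent.")
    else:
        advice = "No bottleneck detected: keep monitoring KPIs."
    return station, advice

# Hypothetical utilisation KPIs for three A&E workstations.
demo = {"triage": 0.72, "x-ray": 0.91, "treatment": 0.64}
station, advice = bottleneck_recommendation(demo)
print(station, "->", advice)
```

In the simulator's terms, such a recommendation would remain advisory: the team's managers may accept or reject it, as the abstract emphasises.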

Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant

Procedia PDF Downloads 104