Search results for: sequential update extended Kalman filter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2590

100 A Corpus-based Study of Adjuncts in Colombian English as a Second Language (ESL) Argumentative Essays

Authors: E. Velasco

Abstract:

Meeting high standards of writing in a Second Language (L2) is extremely important for many students who wish to undertake studies at universities in both English and non-English speaking countries. University lecturers in English-speaking countries continue to express dissatisfaction with the apparent poor quality of essay-writing skills displayed by English as a Second Language (ESL) students, whose essays are often criticised for their lack of cohesion and coherence. These critiques have extended to contexts such as Colombia, where many ESL students are criticised for their inability to write high-quality academic texts in L2-English, particularly at the tertiary level. If Colombian ESL students are expected to meet high standards of writing when studying locally and abroad, it makes sense to carry out specific research that can lead to recommendations to support their quest to improve their argumentative strategies. Employing Corpus Linguistics methods within a Learner Corpus Research framework, and a combination of Log-Likelihood and Bayes Factor measures, this paper investigated argumentative essays written by Colombian ESL students. The study specifically aimed to analyse conjunctive adjuncts in argumentative essays to find out how Colombian ESL students connect their ideas in discourse. Results suggest that a) Colombian ESL learners need explicit instruction on specific areas of conjunctive adjuncts to counteract overuse, underuse and misuse; b) underuse of endophoric and evidential adjuncts highlights gaps between IELTS-like essays and good-quality tertiary-level essays and published papers, and these gaps are linked to the prior knowledge brought into the writing task, rhetorical functions in writing, and research processes before writing takes place; c) both Colombian ESL learners and L1-English writers (in a reference corpus) overuse some adjuncts and underuse endophoric and evidential adjuncts when compared to skilled L1-English and L2-English writers, so differences in the frequencies of adjuncts have little to do with the writers' L1 and are rather linked to the types of essays writers produce (e.g. ESL vs. university essays). The pedagogical recommendations deriving from the study are that: a) Colombian ESL learners need to be shown that overusing conjunctive adjuncts is not the only way of giving cohesion to argumentative essays and that there are other alternatives for cohesion (e.g., implicit adjuncts, lexical chains and collocations); b) syllabi and classroom input need to raise awareness of the gaps in writing skills between IELTS-like and tertiary-level argumentative essays, and of how endophoric and evidential adjuncts are used to refer to anaphoric and cataphoric sections of essays and to other people's work or ideas; c) syllabi and classroom input need to include essay-writing tasks based on previous research/reading which learners need to incorporate into their arguments, and tasks that raise awareness of referencing systems (e.g., APA); d) classroom input needs to include explicit instruction on the use of punctuation, functions and/or syntax with specific conjunctive adjuncts such as 'for example', 'for that reason', 'although', 'despite' and 'nevertheless'.
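
For illustration, the Log-Likelihood and Bayes Factor measures named above can be computed as in the Python sketch below, which uses Dunning's G² statistic and Wilson's BIC approximation of the Bayes Factor; the adjunct counts and corpus sizes are hypothetical, not figures from the study.

```python
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Dunning's log-likelihood (G2) comparing one item's frequency
    across two corpora, as commonly used in Learner Corpus Research."""
    total = size_a + size_b
    expected_a = size_a * (freq_a + freq_b) / total
    expected_b = size_b * (freq_a + freq_b) / total
    ll = 0.0
    for observed, expected in ((freq_a, expected_a), (freq_b, expected_b)):
        if observed > 0:                 # 0 * ln(0) is treated as 0
            ll += observed * math.log(observed / expected)
    return 2 * ll

def bayes_factor_bic(g2, size_a, size_b):
    """Wilson's BIC approximation of the Bayes Factor: BIC = G2 - ln(N)."""
    return g2 - math.log(size_a + size_b)

# Hypothetical counts: 'for example' occurs 120 times in a 100,000-token
# learner corpus and 45 times in a 150,000-token reference corpus.
g2 = log_likelihood(120, 100_000, 45, 150_000)
print(f"G2 = {g2:.2f}, BIC = {bayes_factor_bic(g2, 100_000, 150_000):.2f}")
# By convention, BIC > 2 counts as positive evidence of a frequency difference.
```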

Keywords: argumentative essays, Colombian English as a Second Language (ESL) learners, conjunctive adjuncts, corpus linguistics

Procedia PDF Downloads 45
99 Concealing Breast Cancer Status: A Qualitative Study in India

Authors: Shradha Parsekar, Suma Nair, Ajay Bailey, Binu V. S.

Abstract:

Background: Concealing of cancer-related information is seen in many low- and middle-income countries and may be associated with multiple factors. Comparatively, there is a lack of information about how women diagnosed with breast cancer disclose cancer-related information to their social contacts and vice versa. To gain more insight into participants' experiences, opinions, expectations, and attitudes, a qualitative study is a suitable approach. Therefore, this study involving in-depth interviews was planned to lessen this gap. Methods: Interviews were conducted separately among breast cancer patients and their caregivers using a semi-structured qualitative interview guide. Purposive and convenience sampling were used to recruit patients and caregivers, respectively. Ethical clearance and permission from the tertiary hospital were obtained, and participants were selected from the Udupi district, Karnataka, India. After obtaining a list of diagnosed breast cancer cases, participants were contacted in person and their willingness to take part in the study was obtained. About 39 caregivers and 35 patients belonging to different breast cancer stages were recruited. Interviews were recorded with prior permission. Data were managed with Atlas.ti 8 software. The recordings were transcribed, translated and coded in two cycles. Most of the patients had stage II or III cancer. Codes were grouped according to whom the breast cancer status was concealed from and the underlying reasons for concealment. Main findings: The following are the codes and code families that emerged from the data. 1) Concealing the breast cancer status from social contacts other than close family members (such as extended family, neighbours and friends). Participants perceived the reasons as: a) to avoid probing questions that have no answers; b) to avoid people paying courtesy visits (it is Indian custom to visit a sick person to inquire about their health), which inconveniences the patient and obliges caregivers to offer something and talk to the visitors; c) to avoid people getting shocked (reacting as if cancer were different from other diseases), becoming emotional or sad, or expressing fear of death; d) to avoid negative suggestions or careless talk in front of the patient, which may affect the patient negatively; e) to avoid stigmatization; f) to avoid obstacles to a child's marriage. 2) Participants concealed the breast cancer status from young children as they perceived that it may a) affect their studies, b) affect them emotionally, or c) scare them. 3) Caregivers concealed the breast cancer status from patients for fear of a) worsening the patient's health, or of the patient b) getting tense, c) getting shocked, or d) getting scared. However, some participants stressed the importance of disclosing the cancer status to social contacts or the patient in order to make people aware of the disease. Conclusion: The news of breast cancer spreads like electricity along a wire; therefore, patients and families avoid disclosure for many reasons. Globally, owing to physicians' ethical obligations, there is an inclination towards greater disclosure of cancer diagnosis and prognosis to the patient. However, whether the patient and social contacts should know the status remains an ongoing argument, especially in a country like India.

Keywords: breast cancer, concealing cancer status, India, qualitative study

Procedia PDF Downloads 111
98 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, and evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating energy and environmental pressures. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. The classic DEA treats the decision-making units (DMUs) as independent, which neglects the interactions between DMUs. Ignoring these inter-regional links may result in a systematic bias in the efficiency analysis; for instance, the renewable power generated in a certain region may benefit the adjacent regions, while its SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for significant spatial spillover effects. Compared with the classic network DEA, the SNDEA result shows an obvious difference, tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015 and then a sharp downturn from 2019, mirroring the trend of the power transmission department. The surge benefits from the market-oriented reform of the Chinese power grid enacted in 2015, while the rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows an overall declining trend, which is reasonable when RE power is taken into consideration. The installed capacity of RE power in 2020 was 4.40 times that of 2014, while power generation was 3.97 times higher; in other words, power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the increase in RE power generation. These two aspects give the EE of the power generation department its declining trend. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework, which sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry- and country-level studies.
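
As background to the SNDEA extension, the classic input-oriented CCR DEA model that it builds on can be written as a small linear program. The sketch below scores each DMU with SciPy on hypothetical data; the paper's spatial-spillover and slack terms are not included.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (m inputs, n DMUs), Y: (s outputs, n DMUs). Decision variables
    are [theta, lambda_1 .. lambda_n]; efficiency is the optimal theta."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                       # minimise theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]           # sum_j lam_j x_ij - theta * x_io <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                # -sum_j lam_j y_rj <= -y_ro
    b_ub[m:] = -Y[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.fun

# Hypothetical data: 4 regional power systems, 2 inputs (energy, patents),
# 1 desirable output. Undesirable CO2 would need a further transformation
# (e.g., treating it as an input), as in the paper's setting.
X = np.array([[4.0, 6.0, 5.0, 8.0],
              [2.0, 3.0, 2.5, 4.0]])
Y = np.array([[10.0, 12.0, 11.0, 13.0]])
for o in range(4):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```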

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 73
97 Catchment Nutrient Balancing Approach to Improve River Water Quality: A Case Study at the River Petteril, Cumbria, United Kingdom

Authors: Nalika S. Rajapaksha, James Airton, Amina Aboobakar, Nick Chappell, Andy Dyer

Abstract:

Nutrient pollution and its impact on water quality are a key concern in England. Many water quality issues originate from multiple sources of pollution spread across the catchment. River water quality in England has improved since the 1990s, and wastewater effluent discharges into rivers now contain less phosphorus than in the past. However, excess phosphorus is still recognised as the prevailing issue for rivers failing Water Framework Directive (WFD) good ecological status. To achieve WFD phosphorus objectives, Wastewater Treatment Works (WwTW) permit limits are becoming increasingly stringent. Nevertheless, in some rural catchments, a greater share of phosphorus pollution can come from agricultural runoff and other sources such as septic tanks. Therefore, the challenge of meeting the requirements of watercourses to deliver WFD objectives often goes beyond water company activities, providing significant opportunities to co-deliver activities in the wider catchment to reduce nutrient load at source. The aim of this study was to apply United Utilities' Catchment Systems Thinking (CaST) strategy and pilot an innovative permitting approach - Catchment Nutrient Balancing (CNB) - in a rural catchment in Cumbria (the River Petteril), in collaboration with the regulator and others, to achieve WFD objectives and multiple benefits. The study area is mainly agricultural land, predominantly livestock farms. The local ecology is impacted by significant nutrient inputs, which require intervention to meet WFD obligations. There is a range of phosphorus inputs into the river, including discharges from wastewater assets but also, significantly, agricultural contributions. Solely focusing on the WwTW discharges would not have resolved the problem; hence, to address the issue effectively, a CNB trial was initiated at a small WwTW, targeting the removal of a total of 150 kg of phosphorus load, of which 13 kg were to be reduced through catchment interventions. Various catchment interventions were implemented across selected farms in the upstream part of the catchment, and an innovative Polonite reactive filter medium was implemented at the WwTW as an alternative to traditional phosphorus treatment methods. During the three years of this trial, the impact of the interventions in the catchment and at the treatment works was monitored. In 2020 and 2022, the trial achieved 69% and 63% reductions, respectively, in the phosphorus level in the catchment, against an initial reduction target of 9%. Phosphorus treatment at the WwTW had a significant impact on overall load reduction; the wider catchment impact, however, was seven times greater than the initial target once wider catchment interventions were also established. While it is unlikely that all the phosphorus load reduction was delivered exclusively by the interventions implemented through this project, this trial evidenced the enhanced benefits that can be achieved with an integrated approach that engages all sources of pollution within the catchment, rather than focusing on a one-size-fits-all solution. Primarily, the CNB approach and the act of collaboratively engaging others, particularly the agriculture sector, are likely to yield improved farm and land management performance and better compliance, which can lead to improved river quality as well as wider benefits.
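
The headline arithmetic of the trial can be reproduced directly from the figures quoted above, as the short Python check below shows.

```python
# Reproducing the trial arithmetic from the figures given in the abstract.
total_target_kg = 150      # total phosphorus load removal target
catchment_share_kg = 13    # portion assigned to catchment interventions

catchment_target_pct = 100 * catchment_share_kg / total_target_kg
print(f"Catchment reduction target: {catchment_target_pct:.1f}%")  # ~8.7%, quoted as 9%

achieved_pct = {2020: 69, 2022: 63}
for year, pct in achieved_pct.items():
    print(f"{year}: achieved {pct}% = {pct / catchment_target_pct:.1f}x the target")
# roughly 7-8x the initial target, matching the 'seven times greater' claim
```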

Keywords: agriculture, catchment nutrient balancing, phosphorus pollution, water quality, wastewater

Procedia PDF Downloads 34
96 Microfungi on Sandy Beaches: Potential Threats for People Enjoying Lakeside Recreation

Authors: Tomasz Balabanski, Anna Biedunkiewicz

Abstract:

Research on basic bacteriological and physicochemical parameters conducted by state institutions (Provincial Sanitary and Epidemiological Station and District Sanitary and Epidemiological Station) is limited to bathing waters under constant sanitary and epidemiological supervision. Unfortunately, no routine or monitoring tests are carried out for the presence of microfungi. This also applies to beach sand used for recreational purposes. The purpose of this research was to determine the diversity of the mycobiota present on supervised and unsupervised sandy beaches of municipal bathing sites on lake shores used for recreation. The research material consisted of microfungi isolated from April to October 2019 from sandy beaches of supervised and unsupervised lakes located within the administrative boundaries of the city of Olsztyn (North-Eastern Poland, Europe). Four lakes out of the fifteen available (Tyrsko, Kortowskie, Skanda, and Ukiel), whose bathing waters are subjected to routine bacteriological tests, were selected for the study. To compare the diversity of the mycobiota composition on the surface and below the sand mixing layer, samples were taken from two depths (10 cm and 50 cm) using a soil auger. Microfungi from the sand samples were obtained by surface inoculation on RBC medium from the first dilution (1:10). After incubation at 25°C for 96-144 h, the average number of CFU/dm³ was counted. Morphologically differing yeast colonies were passaged onto Sabouraud agar slants with gentamicin and incubated again. For detailed laboratory analyses, culture methods (macro- and micro-cultures) and identification methods recommended in diagnostic mycological laboratories were used. The research yielded 140 yeast isolates. The total average count was 1.37 × 10⁻² CFU/dm³ before the bathing season (April 2019), 1.64 × 10⁻³ CFU/dm³ during the season (May-September 2019), and 1.60 × 10⁻² CFU/dm³ after the end of the season (October 2019). More microfungi were obtained from the surface layer of sand (100 isolates) than from the deeper layer (40 isolates). The reported microfungi may circulate seasonally between individual elements of the lake ecosystem: from the sand/soil of catchment-area beaches, they can enter bathing waters, stopping periodically on the coastal phyllosphere. The sand of the beaches and the phyllosphere act as a kind of filter for the water reservoir. The presence of microfungi with various pathogenicity potentials in these places is of major epidemiological importance. Therefore, full monitoring of not only recreational waters but also sandy beaches should be treated as an element of constant control by the appropriate supervisory institutions that approve recreational areas for public use, so that the use of these places does not involve a risk of infection. Acknowledgment: 'Development Program of the University of Warmia and Mazury in Olsztyn', POWR.03.05.00-00-Z310/17, co-financed by the European Union under the European Social Fund from the Operational Program Knowledge Education Development. Tomasz Bałabański is a recipient of a scholarship from the Programme Interdisciplinary Doctoral Studies in Biology and Biotechnology (POWR.03.05.00-00-Z310/17), which is funded by the European Social Fund.

Keywords: beach, microfungi, sand, yeasts

Procedia PDF Downloads 75
95 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors

Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara

Abstract:

Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bed rest has also increased. Since maintaining a constant lying posture can lead to pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep without violating the patient's privacy. In this study, we therefore propose a roll-over detection method based on data obtained from this bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component generates large vibrations during roll-over; thus, analysis of the body-motion component facilitates detection of roll-over events. The large vibration associated with the roll-over motion strongly affects the Root Mean Square (RMS) value of the body-motion component time series, calculated over short 10 s segments. The RMS value of each segment was compared to a threshold value set in advance; if the RMS value of any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and placed between the mattress and bed frame. Recorded signals passed through an analog band-pass filter (BPF) operating over the 0.16-16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component. The output from the BPF was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. Five subjects participated, yielding ten datasets. Subjects lay on a mattress in the supine position. During data measurement, subjects, upon the investigator's instruction, were asked to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. The recorded data were divided into 48 segments of 10 s each, and the RMS value of each segment was calculated. The system was evaluated by the agreement between the investigator's instructions and the detected segments. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over events were observed to show large amplitudes. However, clear differences between decubitus positions and the roll-over motion could not be confirmed. Extant research has the disadvantage of compromising patient privacy. The proposed method, in contrast, detects patient roll-over precisely without violating privacy. As a future prospect, estimation of the decubitus position before and after roll-over could be attempted. Since clear differences between decubitus positions and the roll-over motion could not be confirmed in this paper, future studies could utilize the respiration and pulse components.
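
A minimal sketch of the segment-wise RMS thresholding described above, assuming a digital band-pass filter in place of the analog BPF and a synthetic pressure-sensor trace; the threshold value is illustrative and must be tuned in advance, as in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100                 # sampling frequency, Hz (as in the experiment)
SEGMENT_S = 10           # segment length, s
# 0.16-16 Hz band-pass, mirroring the analog BPF described above
b, a = butter(2, [0.16, 16], btype="bandpass", fs=FS)

def detect_rollover(signal, threshold):
    """Return indices of 10 s segments whose RMS exceeds the threshold."""
    filtered = filtfilt(b, a, signal)
    seg_len = FS * SEGMENT_S
    n_segments = len(filtered) // seg_len
    segments = filtered[: n_segments * seg_len].reshape(n_segments, seg_len)
    rms = np.sqrt(np.mean(segments ** 2, axis=1))
    return np.where(rms > threshold)[0]

# 480 s recording -> 48 segments, as in the experiment
signal = np.random.randn(480 * FS) * 0.01            # background noise
signal[200 * FS : 203 * FS] += np.random.randn(3 * FS)  # simulated roll-over burst
print(detect_rollover(signal, threshold=0.1))        # expect segment 20
```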

Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement

Procedia PDF Downloads 94
94 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This period, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate key macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates into our model. Our GCN algorithm learns the relational patterns among the financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatio-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive assessment of our GCN-LSTM algorithm on the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework.
Our findings promise to revolutionize investment techniques and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
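
A minimal PyTorch sketch of the GCN-LSTM fusion idea described above; the layer sizes, the random normalized adjacency, and the tensor shapes are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W), where A_hat is
    a normalized adjacency of the asset graph (edges from co-movement and
    sentiment correlations, as described above)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (time, nodes, in_dim), a_hat: (nodes, nodes)
        return torch.relu(torch.einsum("ij,tjk->tik", a_hat, self.linear(x)))

class GCNLSTM(nn.Module):
    def __init__(self, n_features, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn = GCNLayer(n_features, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)   # e.g., next-day return per node

    def forward(self, x, a_hat):
        h = self.gcn(x, a_hat)               # (time, nodes, gcn_dim)
        h = h.permute(1, 0, 2)               # (nodes, time, gcn_dim)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])         # prediction from last time step

# Hypothetical shapes: 30 trading days, 12 assets, 7 features each
# (OHLC, volume, plus macro indicators broadcast to every node).
x = torch.randn(30, 12, 7)
a_hat = torch.softmax(torch.randn(12, 12), dim=1)  # stand-in normalized adjacency
print(GCNLSTM(7)(x, a_hat).shape)                  # torch.Size([12, 1])
```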

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 19
93 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also attracted great scientific interest in terms of its regulation. Recent studies have experimentally demonstrated that LAC is dominantly composed of traffic and wood-burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Ångström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. Recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter, and therefore also open up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The source apportionment study for carbonaceous particulates was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol had been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results of the proposed optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
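
At two wavelengths, the two-component assumption reduces to a 2x2 linear system (the so-called Aethalometer model). The sketch below solves it with illustrative absorption values and Ångström exponents, not the study's measured AAEff and AAEwb.

```python
import numpy as np

def two_component_apportionment(b_abs, wavelengths, aae_ff, aae_wb):
    """Solve the two-wavelength two-component model
        b_abs(l) = b_ff(l_ref) * (l / l_ref)**(-AAE_ff)
                 + b_wb(l_ref) * (l / l_ref)**(-AAE_wb)
    for the traffic (ff) and wood-burning (wb) contributions at l_ref."""
    l_ref = wavelengths[0]
    A = np.array([[(l / l_ref) ** -aae_ff, (l / l_ref) ** -aae_wb]
                  for l in wavelengths])
    b_ff, b_wb = np.linalg.solve(A, np.asarray(b_abs))
    return b_ff, b_wb

# Hypothetical absorption coefficients (Mm^-1) at 266 nm and 1064 nm,
# with illustrative exponents AAE_ff = 1.0 and AAE_wb = 2.0.
b_ff, b_wb = two_component_apportionment(
    b_abs=[80.0, 10.0], wavelengths=[266.0, 1064.0], aae_ff=1.0, aae_wb=2.0)
print(f"traffic: {b_ff:.1f} Mm^-1, wood burning: {b_wb:.1f} Mm^-1 at 266 nm")
```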

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 207
92 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control

Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol

Abstract:

Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex because several biological processes take place in wastewater treatment plants at varying time and spatial scales. A dynamic process model incorporating a complex model of activated sludge kinetics was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both influent and effluent plant data to reconcile and rectify the forecasted results from the BioWin model; the amount of mixed liquor suspended solids in the oxidation ditch, aeration rates, and recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time for extended periods. The lumped dynamic model development was coupled with Computational Fluid Dynamics (CFD) modeling of key units such as the oxidation ditches. Several CFD models incorporating nitrification-denitrification kinetics as well as hydrodynamics were developed and tested using the ANSYS Fluent software platform. These realistic and verified BioWin and ANSYS models were used to plan operating policies and control strategies for the biological wastewater plant in advance, which further allows regulatory compliance at minimum operational cost. These models, with modest tuning, can be used for other biological wastewater treatment plants as well. The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the discharge limits of the plant. With the help of this model, we were also able to identify the key kinetic and stoichiometric parameters that matter most for modeling biological wastewater treatment plants. Another important finding from this model was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentrations of various parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model revealed, for instance, that the formation of dead zones increases along the length of the oxidation ditches compared to regions near the aerators. These profiles were also very useful in studying mixing patterns, the effect of aerator speed, and the use of baffles, which in turn helps in optimizing plant performance.
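
For a sense of the kind of kinetic rate expressions involved, the following is a toy Monod-type substrate/biomass model integrated with SciPy; it is a generic stand-in for the far richer activated-sludge kinetics BioWin implements, and its parameter values are illustrative, not calibrated to the Valrico plant.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU_MAX, K_S, Y, B = 4.0, 10.0, 0.6, 0.3   # 1/d, mg/L, yield, decay 1/d

def activated_sludge(t, state):
    s, x = state                           # substrate, biomass (mg/L)
    growth = MU_MAX * s / (K_S + s) * x    # Monod growth rate
    return [-growth / Y, growth - B * x]   # substrate uptake, net biomass

sol = solve_ivp(activated_sludge, (0.0, 5.0), [200.0, 50.0],
                t_eval=np.linspace(0.0, 5.0, 6))
for t, s, x in zip(sol.t, *sol.y):
    print(f"t={t:.1f} d  substrate={s:7.2f}  biomass={x:7.2f}")
```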

Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics

Procedia PDF Downloads 172
91 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon

Authors: Laura M. F. Bertens

Abstract:

There is a long tradition of visually representing the art historical canon, in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity and the modelling methods colour our perception of the represented information. In recent decades visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. Ways of organizing data (for instance ordering art according to artist) have come to feel natural and neutral and implicit biases and the historically uneven distribution of power have resulted in underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view on the traditional canon. Although modelling will always imply a simplification and therefore a distortion of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge.
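
As a flavour of what dynamic, computational models afford over static diagrams, here is a minimal Python/networkx sketch (with invented artists, attributes, and relations, not data from the Olivier or Barr diagrams) showing how one graph model supports several alternative orderings.

```python
import networkx as nx

# Toy fragment of an art-historical graph: nodes are artists with
# attributes that can drive alternative orderings; edges carry typed
# relations instead of a single fixed lineage axis.
G = nx.MultiDiGraph()
G.add_node("Cezanne", movement="Post-Impressionism")
G.add_node("Picasso", movement="Cubism")
G.add_node("Braque", movement="Cubism")
G.add_edge("Cezanne", "Picasso", relation="influenced")
G.add_edge("Cezanne", "Braque", relation="influenced")
G.add_edge("Picasso", "Braque", relation="collaborated_with")

# Unlike a static tree, the same model supports multiple views:
movements = {d["movement"] for _, d in G.nodes(data=True)}
by_movement = {m: [n for n, d in G.nodes(data=True) if d["movement"] == m]
               for m in movements}
print(by_movement)
print(nx.degree_centrality(G))  # one of many ways to re-rank 'importance'
```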

Keywords: canon, ontological modelling, Protege Ontology Editor, social network modelling, Visone

Procedia PDF Downloads 101
90 India’s Foreign Policy toward its South Asian Neighbors: Retrospect and Prospect

Authors: Debasish Nandy

Abstract:

India's foreign policy towards all of its neighboring countries is determined by multi-dimensional factors. India's relations with its South Asian neighbors can be classified into three categories. In the first category, there are four countries - Sri Lanka, Bangladesh, Nepal, and Afghanistan - whose bilateral relationships have encompassed cooperation, irritants, problems and crises at different points in time. With Pakistan, the relationship has been perpetually adversarial. The third category includes Bhutan and the Maldives, whose relations are marked by friendship and cooperation, free of any bilateral problems. Needless to say, Jawaharlal Nehru emphasized friendly relations with the neighboring countries. Subsequent Prime Ministers of India, especially I.K. Gujral, advocated peaceful and friendly relations with the subcontinental countries. He put forward a unique idea for fostering bilateral relations with the neighbors, known as the 'Gujral Doctrine'. A dramatic change has been witnessed in Indian foreign policy since 1991. In the post-Cold War period, India's national security has been seriously threatened by terrorism originating from Pakistan-Afghanistan and partly from Bangladesh. India requires cooperative security, which can be built on mutual understanding among the South Asian countries. Additionally, the countries of South Asia need to evolve the concept of 'Cooperative Security' to explain the underlying logic of regional cooperation. According to C. Rajamohan, 'cooperative security could be understood as policies of governments, which see themselves as former adversaries or potential adversaries, to shift from or avoid confrontationist policies.' Cooperative security essentially reflects a policy of dealing peacefully with conflicts, not merely by abstention from violence or threats but by active engagement in negotiation, a search for practical solutions and a commitment to preventive measures. It assumes the existence of a condition in which the two sides possess the military capabilities to harm each other. Establishing cooperative security involves a complex process of building confidence. South Asian nations have often engaged with each other with hostility, and extra-regional powers have long exerted influence in the region. South Asian nations are busy purchasing military equipment; in spite of weakened economic systems, these states spend huge amounts of money on their security. India is the major power in this region in every respect, and the big state-small state syndrome is a negative factor in this regard. However, India will have to take the initiative to extend 'track II diplomacy', or soft diplomacy, for its own security as well as the security of the region. Confidence-building measures could help rejuvenate not only SAARC but also build trust and mutual confidence between India and its neighbors in South Asia. In this paper, I focus on different aspects of India's policy towards its South Asian neighbors. It is also examined how India deals with these countries using a mixed type of diplomacy, combining idealistic and realistic points of view. Security and cooperation are two major determinants of India's foreign policy towards its South Asian neighbors.

Keywords: bilateral, diplomacy, infiltration, terrorism

Procedia PDF Downloads 519
89 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field with many implementations that find use in different domains, for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, whose effective analysis requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that combining a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
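
A toy sketch of the decoding pipeline described above: a small 1D convolutional network maps a 1 s multichannel ECoG segment to a scalar kinematic value, and Pearson's r serves as the accuracy metric. The channel count, sampling rate, and layer sizes are illustrative assumptions, not the study's architecture.

```python
import numpy as np
import torch
import torch.nn as nn

class ECoGDecoder(nn.Module):
    """Toy 1D-CNN regressor: one 1 s multichannel ECoG segment
    (assumed 1 kHz sampling, i.e. 1000 samples) -> one kinematic value."""
    def __init__(self, n_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):          # x: (batch, channels, samples)
        return self.net(x).squeeze(-1)

def pearson_r(pred, target):
    """Decoding accuracy metric used above: correlation between the
    model output and the accelerometer readings."""
    pred, target = np.asarray(pred), np.asarray(target)
    return np.corrcoef(pred, target)[0, 1]

# Hypothetical sizes: 32 ECoG channels, batch of 8 one-second segments.
model = ECoGDecoder(n_channels=32)
x = torch.randn(8, 32, 1000)
print(model(x).shape)                                  # torch.Size([8])
print(pearson_r(np.random.randn(100), np.random.randn(100)))
```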

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 135
88 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence

Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy

Abstract:

The Reynolds-averaged Navier-Stokes (RANS) model is a popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry and the research community alike. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is unable to capture all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines, and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for cases like swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad aerodynamic research and industrial interest. PANS equations, being derived from a base RANS model, inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance the capabilities of PANS for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The inabilities of LEVMs have inspired the development of non-linear eddy-viscosity models (NLEVMs). To explore the potential improvement in PANS performance, in our study we evaluate the PANS behavior in conjunction with an NLEVM. Our work can be categorized into three significant steps: (i) extraction of the PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in the canonical case of separated non-homogeneous flow fields (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of the NLEVM shall be derived and implemented in OpenFOAM, an open-source solver. The homogeneous flow evaluation will comprise a study of the influence of the PANS filter-width control parameter on the turbulent stresses, homogeneous analysis performed over typical velocity fields, and asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include a study of mean integrated quantities and various instantaneous flow field features, including wake structures. The performance of PANS + NLEVM shall be compared against LEVM-based PANS and LEVM-based RANS. This assessment will contribute to a significant improvement in the predictive ability of computational fluid dynamics (CFD) tools for massively separated turbulent flows past bluff bodies.
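
For reference, the 'filter-width control parameter' mentioned above corresponds, in Girimaji's standard PANS k-ε formulation, to the resolution-control parameters relating unresolved to total turbulence quantities; the following is a sketch of that standard form, independent of the NLEVM extension studied here.

```latex
% PANS resolution-control parameters:
\[
  f_k = \frac{k_u}{k}, \qquad
  f_\varepsilon = \frac{\varepsilon_u}{\varepsilon},
\]
% where $k_u$ and $\varepsilon_u$ are the unresolved kinetic energy and
% dissipation; $f_k = 1$ recovers RANS and $f_k \to 0$ approaches DNS.
% The unresolved eddy viscosity and modified transport coefficients are
\[
  \nu_u = C_\mu \frac{k_u^{2}}{\varepsilon_u}, \qquad
  \sigma_{k_u} = \sigma_k \frac{f_k^{2}}{f_\varepsilon}, \qquad
  \sigma_{\varepsilon_u} = \sigma_\varepsilon \frac{f_k^{2}}{f_\varepsilon}.
\]
```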

Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows

Procedia PDF Downloads 117
87 Multiparticulate SR Formulation of Dexketoprofen Trometamol by Wurster Coating Technique

Authors: Bhupendra G. Prajapati, Alpesh R. Patel

Abstract:

The aim of this research work is to develop a sustained-release multiparticulate dosage form of dexketoprofen trometamol, the pharmacologically active isomer of ketoprofen. With the objective of utilizing the active enantiomer at a minimal dose and administration frequency, the development of an extended-release multiparticulate dosage form for better patient compliance was explored. Drug-loaded and sustained-release coated pellets were prepared by the fluidized bed coating principle in a Wurster coater. Microcrystalline cellulose (MCC) was selected as the core pellet material, povidone as the binder, and talc as the anti-tacking agent during drug loading, while Kollicoat SR 30D as the sustained-release polymer, triethyl citrate as the plasticizer, and micronized talc as the anti-adherent were used in sustained-release coating. A binder optimization trial in drug loading showed that process efficiency increased with binder concentration: 5 and 7.5% w/w concentrations of Povidone K30 with respect to the drug amount gave more than 90% process efficiency, while a higher amount of rejects (agglomerates) was observed for the drug-layering trial batch with 7.5% binder. For drug loading, the optimum povidone concentration was therefore selected as 5% of the drug substance quantity, since this trial had good process feasibility and good adhesion of the drug onto the MCC pellets. A 2% w/w concentration of talc with respect to the total drug-layering solid mass showed better anti-tacking properties, preventing unwanted static charge as well as agglomerate formation during the spraying process. The optimized drug-loaded pellets were coated for sustained release at 16 to 28% w/w coating weight gain to obtain the desired drug release profile, and the results suggested that a 22% w/w coating weight gain is necessary to achieve the required drug release profile. Three critical process parameters of Wurster coating for sustained release were further statistically optimized for the desired quality target product profile attributes, such as agglomerate formation, process efficiency, and drug release profile, using a central composite design (CCD) in Minitab software. Results show that the derived design space, consisting of 1.0 to 1.2 bar atomization air pressure, 7.8 to 10.0 g/min spray rate, and 29-34°C product bed temperature, gave the pre-defined drug product quality attributes. Scanning image microscopy results also indicated that the optimized batch pellets had a very narrow particle size distribution and a smooth surface, which are ideal properties for a reproducible drug release profile. The study also verified that the optimized dexketoprofen trometamol pellet formulation retains its quality attributes when administered with common vehicles, such as a liquid (water) or a semisolid food (apple sauce). Conclusion: Sustained-release multiparticulates were successfully developed for dexketoprofen trometamol, which may be useful to improve the acceptability and palatability of the dosage form for better patient compliance.
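
The coded runs of a three-factor central composite design like the one used here can be generated directly, as sketched below; the physical centre points and spans are illustrative choices within the ranges quoted above, not the study's actual design levels.

```python
import itertools
import numpy as np

def central_composite(n_factors, alpha=1.682, n_center=3):
    """Coded design matrix of a circumscribed CCD: 2^k factorial points,
    2k axial points at +/-alpha, and repeated centre points."""
    factorial = np.array(list(itertools.product([-1, 1], repeat=n_factors)), float)
    axial = np.zeros((2 * n_factors, n_factors))
    for i in range(n_factors):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    center = np.zeros((n_center, n_factors))
    return np.vstack([factorial, axial, center])

# Illustrative factor levels for the three Wurster parameters named above:
names = ["atomization air pressure (bar)", "spray rate (g/min)", "bed temp (C)"]
center = np.array([1.1, 9.0, 31.5])
half_span = np.array([0.1, 1.0, 2.5])

design = central_composite(3)                # 8 factorial + 6 axial + 3 centre
physical = center + design * half_span
for run in physical[:5]:
    print(dict(zip(names, np.round(run, 2))))
```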

Keywords: dexketoprofen trometamol, pellets, fluid bed technology, central composite design

Procedia PDF Downloads 110
86 Hybrid Renewable Energy Systems for Electricity and Hydrogen Production in an Urban Environment

Authors: Same Noel Ngando, Yakub Abdulfatai Olatunji

Abstract:

Renewable energy micro-grids, such as those powered by solar or wind energy, are often intermittent in nature. The amount of energy generated by these systems can vary with weather conditions or other factors, which can make it difficult to ensure a steady supply of power. To address this issue, energy storage systems have been developed to increase the reliability of renewable energy micro-grids. Battery systems have been the dominant energy storage technology for renewable energy micro-grids. Batteries can store large amounts of energy in a relatively small and compact package, making them easy to install and maintain in a micro-grid setting. Additionally, batteries can be quickly charged and discharged, allowing them to respond rapidly to changes in energy demand. However, the process involved in recycling batteries is quite costly and difficult. An alternative energy storage system that is gaining popularity is hydrogen storage. Hydrogen is a versatile energy carrier that can be produced from renewable energy sources such as solar or wind. It can be stored in large quantities at low cost, making it suitable for long-distance mass storage. Unlike batteries, hydrogen does not degrade over time, so it can be stored for extended periods without the need for frequent maintenance or replacement, allowing it to serve as a backup power source when the micro-grid is not generating enough energy to meet demand. When hydrogen is needed, it can be converted back into electricity through a fuel cell. Energy consumption data were obtained from a residential area in Daegu, South Korea, and processed and analyzed. From this analysis, the total energy demand was calculated, and different hybrid energy system configurations were designed using HOMER Pro (Hybrid Optimization for Multiple Energy Resources) and MATLAB software. A techno-economic and environmental comparison and life cycle assessment (LCA) of the different configurations, using battery and hydrogen storage systems, were carried out. The scenarios included PV-hydrogen-grid, PV-hydrogen-grid-wind, PV-hydrogen-grid-biomass, PV-hydrogen-wind, PV-hydrogen-biomass, biomass-hydrogen, wind-hydrogen, PV-battery-grid-wind, PV-battery-grid-biomass, PV-battery-wind, PV-battery-biomass, and biomass-battery systems. From the analysis, the least-cost system for the location was the PV-hydrogen-grid system, with a net present cost of about USD 9,529,161. Even though all scenarios were environmentally friendly, once the recycling cost and pollution involved in battery systems are taken into account, all systems with hydrogen as the storage medium produced better results. In conclusion, hydrogen is becoming a very prominent energy storage solution for renewable energy micro-grids. It is easier to store than electric power, so it is suitable for long-distance mass storage. Hydrogen storage systems have several advantages over battery systems, including flexibility, long-term stability, and low environmental impact. The cost of hydrogen storage is still relatively high, but it is expected to decrease as more hydrogen production and storage infrastructure is built. With the growing focus on renewable energy and the need to reduce greenhouse gas emissions, hydrogen is expected to play an increasingly important role in the energy storage landscape.
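
A simplified net-present-cost comparison illustrates why periodic battery replacement penalizes battery storage in rankings of this kind; all cost figures and rates below are invented for the example and are not HOMER outputs or values from the study.

```python
def net_present_cost(capex, annual_opex, discount_rate, lifetime_years,
                     replacement_cost=0.0, replacement_every=None):
    """Simplified net present cost: upfront capital plus discounted annual
    operating costs and periodic component replacements (a sketch, not
    HOMER's full cost model)."""
    npc = capex
    for year in range(1, lifetime_years + 1):
        npc += annual_opex / (1 + discount_rate) ** year
        if replacement_every and year % replacement_every == 0:
            npc += replacement_cost / (1 + discount_rate) ** year
    return npc

# Illustrative numbers only: a hydrogen-storage system avoids the periodic
# battery replacement a battery system incurs over a 25-year project.
pv_h2 = net_present_cost(capex=6.0e6, annual_opex=2.2e5,
                         discount_rate=0.06, lifetime_years=25)
pv_batt = net_present_cost(capex=5.0e6, annual_opex=2.0e5, discount_rate=0.06,
                           lifetime_years=25, replacement_cost=1.5e6,
                           replacement_every=8)
print(f"PV-H2 NPC:      USD {pv_h2:,.0f}")
print(f"PV-battery NPC: USD {pv_batt:,.0f}")
```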

Keywords: renewable energy systems, microgrid, hydrogen production, energy storage systems

Procedia PDF Downloads 62
85 Impact of Water Interventions under WASH Program in the South-west Coastal Region of Bangladesh

Authors: S. M. Ashikur Elahee, Md. Zahidur Rahman, Md. Shofiqur Rahman

Abstract:

This study evaluated the impact of different water interventions under the WASH program on households' access to safe drinking water. Following a survey method, the study was carried out in two Upazilas of the South-west coastal region of Bangladesh, namely Koyra in Khulna district and Shymnagar in Satkhira district. Being an explanatory study, a total of 200 households selected by random sampling were interviewed using a structured interview schedule. The predicted probability suggests that around 62 percent of households lack year-round access to safe drinking water, while only 25 percent of households have access at the SPHERE standard (913 liters per person per year). Moreover, the majority (78 percent) of households do not meet both indicators simultaneously. The distance from the household residence to the water source varies from 0 to 25 kilometers, with an average distance of 2.03 kilometers. The study also reveals that an increase in monthly income of around BDT 1,000 leads to an additional 11 liters (coefficient 0.01 at p < 0.1) of safe drinking water consumption per person per year. As expected, lining-up time has a significant negative relationship with the dependent variables; that is, the longer the lining-up time, the lower the probability of access at both the SPHERE standard and year-round. According to the ordinary least squares (OLS) regression results, water consumption decreases by 93 liters per person per year if one member is added to the household. Regarding water consumption intensity, the ordered logistic regression (OLR) model shows that a one-minute increase in lining-up time for water collection tends to reduce water consumption intensity; per the OLS results, each additional minute of lining-up time decreases water consumption by around 8 liters. Taking access to a Deep Tube Well (DTW) as the reference dummy, in the OLR, households relying on a Pond Sand Filter (PSF), Shallow Tube Well (STW), Reverse Osmosis (RO), and Rainwater Harvesting System (RWHS) are respectively 37 percent, 29 percent, 61 percent, and 27 percent less likely to have year-round access to water consumption. In terms of health impact, various waterborne diseases such as diarrhea, cholera, and typhoid are common among the coastal community, caused by microbial impurities (i.e., bacteria, protozoa). High turbidity and TDS in pond water, caused by reduced water depth and the presence of suspended particles and inorganic salts, stimulate the growth of bacteria, protozoa, and algae, creating health hazards. Meanwhile, the excessive growth of algae in pond water, driven by excessive nitrate in drinking water, adversely affects child health. To ensure access at the SPHERE standard, the number of water interventions at a reasonable distance, preferably within half a kilometer of the dwelling, needs to be increased, with community people involved in the installation process; collectively owned water interventions are found to be more effective than privately owned ones. In addition, a demand-responsive approach to the supply of piped water should be adopted to allow consumer demand to guide future investment in domestic water supply.
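
A sketch of the OLS specification implied by the reported coefficients, fitted on synthetic stand-in data (the survey data themselves are not reproduced here); the effect sizes are seeded to echo the reported values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the household survey (n=200): consumption in
# litres/person/year as a function of income, lining-up time, and size.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "income_bdt": rng.normal(15_000, 4_000, n),   # monthly income
    "lineup_min": rng.exponential(15, n),         # lining-up time
    "hh_size": rng.integers(2, 9, n),             # household members
})
df["litres"] = (1200 + 0.01 * df["income_bdt"] - 8 * df["lineup_min"]
                - 93 * df["hh_size"] + rng.normal(0, 50, n))

X = sm.add_constant(df[["income_bdt", "lineup_min", "hh_size"]])
fit = sm.OLS(df["litres"], X).fit()
print(fit.params)
# income_bdt ~ 0.01 -> +1,000 BDT implies ~10-11 extra litres/person/year
# lineup_min ~ -8   -> each extra queueing minute cuts ~8 litres
# hh_size    ~ -93  -> each extra member cuts ~93 litres/person/year
```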

Keywords: access, impact, safe drinking water, Sphere standard, water interventions

Procedia PDF Downloads 193
84 Prevalence and Molecular Characterization of Extended-Spectrum β-Lactamase and Carbapenemase-Producing Enterobacterales from Tunisian Seafood

Authors: Mehdi Soula, Yosra Mani, Estelle Saras, Antoine Drapeau, Raoudha Grami, Mahjoub Aouni, Jean-Yves Madec, Marisa Haenni, Wejdene Mansour

Abstract:

Multi-resistance to antibiotics in gram-negative bacilli, and particularly in Enterobacteriaceae, has become frequent in hospitals in Tunisia. However, data on antibiotic-resistant bacteria in aquatic products are scarce. The aims of this study were to estimate the proportion of ESBL- and carbapenemase-producing Enterobacterales in seafood (clams and fish) in Tunisia and to molecularly characterize the collected isolates. Two types of seafood were sampled in unrelated markets in four different regions of Tunisia (641 pieces of farmed fish and 1,075 Mediterranean clams divided into 215 pools of 5 pieces each). Once purchased, all samples were incubated in tubes containing peptone salt broth for 24 to 48 h at 37°C. After incubation, overnight cultures were isolated on selective MacConkey agar plates supplemented with either imipenem or cefotaxime, identified using API 20E test strips (bioMérieux, Marcy-l'Étoile, France), and confirmed by MALDI-TOF MS. Antimicrobial susceptibility was determined by the disk diffusion method on Mueller-Hinton agar plates, and results were interpreted according to CA-SFM 2021. ESBL-producing Enterobacterales were detected using the Double Disc Synergy Test (DDST). Carbapenem resistance was screened using an ertapenem disk and confirmed using the ROSCO KPC/MBL and OXA-48 Confirm Kits (ROSCO Diagnostica, Taastrup, Denmark). DNA was extracted using a NucleoSpin Microbial DNA extraction kit (Macherey-Nagel, Hoerdt, France), according to the manufacturer's instructions. Resistance genes were determined using the CGE online tools, and the replicon content and plasmid formula were identified from the WGS data using PlasmidFinder 2.0.1 and pMLST 2.0. From farmed fish, nine ESBL-producing strains (9/641, 1.4%) were isolated, identified as E. coli (n=6) and K. pneumoniae (n=3). Among the 215 pools of 5 clams analyzed, 18 ESBL-producing isolates were identified, including 14 E. coli and 4 K. pneumoniae, corresponding to a low isolation rate of 1.6% (18/1075). In fish, the ESBL phenotype was due to the presence of the blaCTX-M-15 gene in all nine isolates, but no carbapenemase gene was identified. In clams, the predominant ESBL determinant was blaCTX-M-1 (n=6/18), and carbapenemase genes (blaNDM-1, blaOXA-48) were detected in only 3 K. pneumoniae isolates. Replicon typing of the strains carrying the ESBL and carbapenemase genes revealed that the major plasmid type carrying ESBL genes was IncF (42.3%, n=11/26). In all, our results suggest that seafood can be a reservoir of multi-drug-resistant bacteria, most probably of human origin but also favored by antibiotic selection pressure. Our findings raise concerns that seafood bought for consumption may serve as a potential reservoir of AMR genes and pose a serious threat to public health.
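
The isolation rates above are simple proportions; the short sketch below recomputes them and attaches Wilson 95% confidence intervals, which are our illustration and are not reported in the abstract:

```python
from statsmodels.stats.proportion import proportion_confint

# Isolation rates as reported: 9/641 farmed fish, 18/1075 clams
for label, k, n in [("farmed fish", 9, 641), ("clams", 18, 1075)]:
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{label}: {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```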

Keywords: ESBL, carbapenemase, enterobacterales, Tunisian seafood

Procedia PDF Downloads 76
83 Determination of the Presence of Antibiotic Resistance in Vibrio Species in Northern Italy

Authors: Tramuta Clara, Masotti Chiara, Pitti Monica, Adriano Daniela, Battistini Roberta, Serraca Laura, Decastelli Lucia

Abstract:

Oysters are considered filter organisms, and their raw consumption may increase health risks for consumers: it is often associated with outbreaks of gastroenteritis or enteric illnesses. Most of these foodborne diseases are caused by Vibrio strains, enteric pathogens also involved in the diffusion of genetic determinants of antibiotic resistance and their entrance into the food chain. The European Food Safety Authority (EFSA), in the 2017 European Union report on antimicrobial resistance, focused attention on the role of food as a possible carrier of antibiotic-resistant bacteria or antibiotic-resistance genes that pose health risks for humans. This study aimed to determine antibiotic resistance and antibiotic-resistance genes in Vibrio spp. isolated from Crassostrea gigas oysters collected in the Golfo della Spezia (Liguria, Italy). A total of 47 Vibrio spp. strains were isolated (ISO 21872-2:2017) during the summer of 2021 from Crassostrea gigas oysters. The strains were identified by MALDI-TOF mass spectrometry (Bruker, Germany) and tested for antibiotic susceptibility using a broth microdilution method (ISO 20776-1:2019) with Sensititre EUVSEC plates (Thermo Fisher Scientific) to obtain the Minimum Inhibitory Concentration (MIC). The strains were tested with PCR-based biomolecular methods, according to previous works, to detect the presence of 23 resistance genes from the main classes of antibiotics used in human and veterinary medicine: tet(B), tet(C), tet(D), tet(A), tet(E), tet(G), tet(K), tet(L), tet(M), tet(O), tet(S) (tetracycline resistance); blaCTX-M, blaTEM, blaOXA, blaSHV (β-lactam resistance); mcr-1 and mcr-2 (colistin resistance); qnrA, qnrB, and qnrS (quinolone resistance); sul1, sul2, and sul3 (sulfonamide resistance). Six different species were identified: V. alginolyticus 34% (n=16), V. harveyi 28% (n=13), V. fortis 15% (n=7), V. pelagius 8% (n=4), V. parahaemolyticus 11% (n=5), and V. chagasii 4% (n=2). The PCR assays showed the presence of the blaTEM gene in 40% of the strains (n=19); all the other genes were not detected, except for one V. alginolyticus positive for the qnrS gene. The broth microdilution results showed a high level of resistance to ciprofloxacin (62%; n=29), ampicillin (47%; n=22), and colistin (49%; n=23). Furthermore, 32% (n=15) of the strains can be considered multiresistant, showing simultaneous resistance to three different antibiotic classes. Susceptibility to meropenem, azithromycin, gentamicin, ceftazidime, cefotaxime, chloramphenicol, tetracycline, and sulphamethoxazole reached 100%. The Vibrio species identified in this study are widespread in marine environments and can cause gastrointestinal infections after the ingestion of raw fish products and bivalve molluscs. The level of resistance to antibiotics such as ampicillin, ciprofloxacin, and colistin can be connected to anthropic factors (industrial, agricultural, and domestic wastes) that promote the spread of resistance to these antibiotics. A strong correlation can also be observed between phenotypic (resistant MIC) and genotypic (positive blaTEM gene) resistance to ampicillin in the same strains, probably due to the transfer of genetic material between bacterial strains. Consumption of raw bivalve molluscs can therefore represent a risk for consumers' health due to the potential presence of foodborne pathogens that are highly resistant to different antibiotics and a source of transferable antibiotic-resistance genes.
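
As a worked illustration of the multiresistance criterion used above (simultaneous resistance to at least three antibiotic classes), consider the following sketch; the breakpoints, class assignments, and MIC values are invented placeholders rather than CA-SFM or EUVSEC values:

```python
# Illustrative breakpoints (ug/mL) and antibiotic classes
breakpoints = {"ampicillin": 8, "ciprofloxacin": 1, "colistin": 2, "tetracycline": 4}
classes = {"ampicillin": "beta-lactam", "ciprofloxacin": "quinolone",
           "colistin": "polymyxin", "tetracycline": "tetracycline"}

def resistant_classes(mics):
    """Antibiotic classes for which the strain's MIC exceeds the breakpoint."""
    return {classes[ab] for ab, mic in mics.items() if mic > breakpoints[ab]}

strain = {"ampicillin": 16, "ciprofloxacin": 2, "colistin": 4, "tetracycline": 1}
hit = resistant_classes(strain)
print(hit, "-> multiresistant" if len(hit) >= 3 else "-> not multiresistant")
```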

Keywords: vibrio species, blaTEM genes, antimicrobial resistance, PCR

Procedia PDF Downloads 46
82 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, major efforts are underway to create a circular economy that reduces non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature; the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimizing the pricing policy of remanufactured products, maximizing total profit and minimizing product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of the model's performance was carried out using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO₂e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
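
The strategic-planning layer described above is, at its core, a transportation linear program with the carbon cost folded into the objective. A minimal sketch under invented data follows; the supplies, capacities, costs, and emission factors are illustrative, and only the $40/t CO₂e price mirrors the scenario in the abstract:

```python
import numpy as np
from scipy.optimize import linprog

supply = np.array([120.0, 80.0, 100.0])    # units collected at 3 collection centers
capacity = np.array([150.0, 160.0])        # capacity of 2 remanufacturing facilities
ship_cost = np.array([[4.0, 6.0], [5.0, 3.0], [7.0, 5.0]])               # $/unit
emissions = np.array([[0.020, 0.030], [0.025, 0.015], [0.035, 0.025]])   # t CO2e/unit
carbon_price = 40.0                        # $/t CO2e, internalized in the objective

c = (ship_cost + carbon_price * emissions).ravel()  # x[i,j] flattened row-wise

A_eq = np.zeros((3, 6))                    # each center ships out all it collects
for i in range(3):
    A_eq[i, 2 * i:2 * i + 2] = 1.0
A_ub = np.zeros((2, 6))                    # facility capacity limits
for j in range(2):
    A_ub[j, j::2] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(3, 2), res.fun)        # optimal flows and total cost
```

Re-running the solve over a grid of carbon prices reproduces, in miniature, the kind of topology shifts the abstract reports as the carbon cost rises.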

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 137
81 Regenerating Habitats: Housing Based on Modular Wooden Systems

Authors: Rui Pedro de Sousa Guimarães Ferreira, Carlos Alberto Maia Domínguez

Abstract:

Despite the ambition to achieve climate neutrality by 2050 and fulfill the Paris Agreement's goals, the building and construction sector remains one of the most resource-intensive and greenhouse-gas-emitting industries in the world, accounting for 40% of worldwide CO₂ emissions. Over the past few decades, globalization and population growth have led to an exponential rise in demand in the housing market and, by extension, in the building industry. Considering this housing crisis, it is obvious that we will not stop building in the near future. However, the transition, which has already started, is challenging and complex, because it calls for the worldwide participation of numerous organizations in altering how building systems that have been part of our everyday existence for over a century are used. Wood is one of the most frequently used alternatives nowadays (under responsible forestry conditions) because of its physical qualities and, most importantly, because it produces fewer carbon emissions during manufacturing than steel or concrete. Furthermore, as wood retains its capacity to store CO₂ after application and throughout the life of the building, working as a natural carbon filter, it helps to reduce greenhouse gas emissions. After a century-long focus on other materials, technological advancements in the last few decades have made it possible to innovate systems centered on the use of wood. However, some questions still require further exploration. Production and manufacturing processes need to be standardized on prefabrication and modularization principles to achieve greater precision and optimization of the solutions, decreasing building time, prices, and raw material waste. In addition, this approach will make it possible to develop new architectural solutions to address the rigidity and irreversibility of buildings, two of the most important issues facing housing today. Most current models are still created as inflexible, fixed, monofunctional structures that discourage any kind of regeneration, based on matrices that sustain the traditional model of the conventional family and are founded on rigid, impenetrable compartmentalization. Adaptability and flexibility in housing are, and always have been, necessities and key components of architecture. People today need to constantly adapt to their surroundings and themselves because of the fast-paced, disposable, and quickly obsolescent nature of modern life. Global-scale migrations, different kinds of co-housing, and even personal changes are some of the new questions that buildings have to answer. Designing with the reversibility of construction systems and materials in mind not only allows for the concept of "looping" in construction, with environmental advantages that enable the development of a circular economy in the sector, but also unleashes multiple social benefits. In this sense, it is imperative to develop prefabricated and modular construction systems able to formalize a reversible proposition that adjusts to the scale of time and its multiple, often unpredictable, reformulations. We must allow buildings to change, grow, or shrink over their lifetime, respecting their nature and, ultimately, the nature of the people living in them. The ability to anticipate the unexpected, adapt to social factors, and account for demographic shifts in society in order to stabilize communities is the foundation of truly innovative sustainability.

Keywords: modular, timber, flexibility, housing

Procedia PDF Downloads 29
80 NEOM Coast from Intertidal to Sabkha Systems: A Geological Overview

Authors: Mohamed Abouelresh, Subhajit Kumar, Lamidi Babalola, Septriandi Chan, Ali Al Musabeh A., Thadickal V. Joydas, Bruno Pulido

Abstract:

NEOM has a relatively long coastline of about 300 kilometers on the Red Sea and the Gulf of Aqaba, in addition to many naturally formed bays along the Red Sea coast. Undoubtedly, these coasts provide an excellent opportunity for tourism and other activities; however, they also host a wide range of salinity-dependent ecosystems that need to be protected. The main objective of the study was to identify the coastal features, including tidal flats and salt flats, along the NEOM coast. A base map of the study area, generated from satellite images, contained the main landform features and, in particular, the boundaries of the inland and coastal sabkhas. A field survey was conducted to map and characterize the intertidal and sabkha landforms. The coastal and inner coastal areas of NEOM are mainly covered by Quaternary sediments, which include gravel sheets, terraces, raised reef limestone, evaporite successions, eolian dunes, and undifferentiated sand/gravel deposits (alluvium, alluvial outwash, wind-blown beach sand). Different landforms characterize the NEOM coast, including rocky coast, tidal zones, and sabkhas, with sabkha areas ranging from a few to tens of square kilometers. Coastal sabkha extends along the shoreline of NEOM, specifically in the Gayal and Sharma areas, while continental sabkha occurs only at Gayal Town. The inland sabkha at Gayal is mainly composed of a thin (15-25 cm) evaporite crust consisting of a dark brown, cavernous, rugged, pitted, colloidal salty sand layer with salt-tolerant vegetation. The inland sabkha is considered a groundwater-driven sedimentary system, as indicated by syndepositional intra-sediment capillary evaporites, which precipitate in both marine and continental salt flats. The Gayal coastal sabkha is made up of tidal inlets, tidal creeks, and lagoons, followed in a landward direction by well-developed sabkha layers. The surface sediments of the coastal sabkha are composed of unlithified calcareous, gypsiferous, coarse to medium sands and silt with bioclastic fragments, underlain by several organic-rich layers. The coastal flat grades landward into widespread, flat, vegetated sabkhas dissected by tributaries of the fluvial system, which debouches into the Red Sea. The coast from Gayal to Magna through Ras El-Sheikh Humaid is continuously subjected to tidal flows, which create an intertidal depositional system. The intertidal flats at NEOM are extensive, nearly horizontal lands forming a very dynamic system in which several physical, chemical, geomorphological, and biological processes act simultaneously. The current work provides a field-based identification of the coastal sabkha and intertidal sites at NEOM. However, the mutual interaction between tidal flows and sabkha development, particularly at Gayal, still needs to be understood through comprehensive field and laboratory analysis.

Keywords: coast, intertidal, deposition, sabkha

Procedia PDF Downloads 44
79 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas, such as astronomy, medical imaging, geophysics, and nondestructive evaluation, many problems related to calibration, fitting, or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data, and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness, and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such inverse problems results, after discretization, in very ill-conditioned linear systems of equations: the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of data uncertainties during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where interior-point algorithms for solving linear programs lead to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix that satisfies the convergence condition required in iterative algorithms for solving a system of linear equations; this completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices, and theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be used efficiently in different solution schemes, such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices; we also show how Cimmino's matrix can be considered as a particular case, consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters of the solution of a linear system (such as the extreme values, the mean, the variance, …) prior to its resolution. Such an approach, if it proved efficient, would be a source of information on the solution of a system of linear equations.
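
For readers unfamiliar with the baselines named above, here is a minimal sketch of the classical cyclic Kaczmarz iteration, run on an ill-conditioned Hilbert matrix of the kind used in the test cases; the matrix size and iteration count are arbitrary choices:

```python
import numpy as np

def kaczmarz(A, b, iters=20_000, x0=None):
    """Cyclic Kaczmarz: project the iterate onto one row's hyperplane at a time."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float)
    row_norms = np.einsum("ij,ij->i", A, A)
    for k in range(iters):
        i = k % m
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
x_true = np.ones(n)
print(np.linalg.cond(A))                                  # on the order of 1e10
print(np.linalg.norm(kaczmarz(A, A @ x_true) - x_true))   # convergence is slow here
```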

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 109
78 Concepts of Modern Design: A Study of Art and Architecture Synergies in Early 20th Century Europe

Authors: Stanley Russell

Abstract:

Until the end of the 19th century, European painting dealt almost exclusively with the realistic representation of objects and landscapes, as can be seen in the work of realist artists like Gustave Courbet. Architects of the day typically referenced and recreated historical precedents in their designs: the curriculum of the first architecture school in Europe, the École des Beaux-Arts, based on the study of classical buildings, had a profound effect on the profession. Painting exhibited an increasing level of abstraction from the late 19th century, beginning with Impressionism, and the trend continued into the early 20th century, when Cubism had an explosive effect, sending shock waves through the art world that also extended into the realm of architectural design. The architect and painter Le Corbusier, with "Purism", was one of the first to integrate abstract painting and building design theory in works that were equally shocking to the architecture world. The interrelationship of the arts, including architecture, was institutionalized in the Bauhaus curriculum, which sought to find commonality between diverse art disciplines. The renowned painter and Bauhaus instructor Vassily Kandinsky was one of the first artists to make a semi-scientific analysis of the elements of "non-objective" painting, while also drawing parallels between painting and architecture, in his book Point and Line to Plane. Russian Constructivists made abstract compositions with simple geometric forms and, like the De Stijl group of the Netherlands, also experimented with full-scale constructions and spatial explorations. Based on the study of historical accounts and original artworks of Impressionism, Cubism, the Bauhaus, De Stijl, and Russian Constructivism, this paper begins with a thorough explanation of the art theory and several key works from these important art movements of the late 19th and early 20th century. Similarly, based on written histories and first-hand experience of built and drawn works, the author continues with an analysis of the theories and architectural works generated by the same groups, all of which actively pursued continuity between their art and architectural concepts. With images of specific works, the author shows how the trend toward abstraction and geometric purity in painting coincided with a similar trend in architecture that favored simple, unornamented geometries. Using examples like the Villa Savoye, the Schroeder House, the Dessau Bauhaus, and unbuilt designs by the Russian architect Chernikov, the author gives detailed examples of how the intersection of trends in art and architecture led to a unique and fruitful period of creative synergy, when the same concepts that were used by artists to generate paintings were also used by architects in the making of objects, space, and buildings. In conclusion, this article examines the pivotal period in art and architecture history from the late 19th to the early 20th century, when the confluence of art and architectural theory led to many painted, drawn, and built works that continue to inspire architects and artists to this day.

Keywords: modern art, architecture, design methodologies, modern architecture

Procedia PDF Downloads 100
77 Regional Barriers and Opportunities for Developing Innovation Networks in the New Media Industry: A Comparison between Beijing and Bangalore Regional Innovation Systems

Authors: Cristina Chaminade, Mandar Kulkarni, Balaji Parthasarathy, Monica Plechero

Abstract:

The characteristics of a regional innovation system (RIS) and the specificity of an industry's knowledge base may contribute to creating peculiar paths for innovation and for the development of firms' geographically extended innovation networks. However, the related empirical evidence in emerging economies remains underexplored. This paper aims to fill that research gap by means of qualitative research conducted in 2016 in Beijing (China) and Bangalore (India). It analyzes case studies of firms in the new media industry, a sector that merges different IT competences with competences from other knowledge domains and that is emerging in both RIS. The results show that while in Beijing the new media sector is more in line with the existing institutional setting and with governmental goals aimed at targeting specific social aspects and social problems of the population, in Bangalore it remains a more spontaneous, firm-led process. In Beijing, what matters for the development of innovation networks is the governmental setting and the national and regional strategies to promote science and technology in this sector, the internet, and mass innovation. The peculiarities of recent governmental policies, aligned with domestic goals, may provide good opportunities for start-ups to develop innovation networks; however, because those policies target the Chinese market, networking outside the domestic market is not promoted to the same degree. Moreover, while some institutional peculiarities, such as a culture of collaboration in the region, may favor local networking, regulations related to internet censorship may limit the use of global networks, particularly when they are based on virtual spaces. Mainly firms that already have some foreign experience and contacts take advantage of global networks. In Bangalore, the government is largely absent, at all geographical levels, in promoting networking for the new media industry at the present stage. Indeed, there is no strategic planning or prioritization in the region toward the new media industry, although one industrial organization has emerged to represent the interests of the animation industry. This results in a lack of initiatives for sustaining the integration of complementary knowledge into the local portfolio of IT specialization. Firms actually involved in the new media industry face institutional constraints related to a poor level of local trust and cooperation, which does not allow for the full exploitation of local linkages. Moreover, knowledge-provider organizations in Bangalore remain a solid base for the IT domain, but not for other domains. Initiatives to link to international networks therefore seem to be more the result of individual entrepreneurial actions aimed at acquiring complementary knowledge and competencies from different domains and at exploiting potential in different markets. From these cases, it emerges that the role of government, soft institutions, and organizations in the two RIS differs substantially in creating barriers and opportunities for the development of innovation networks and in their specific aims.

Keywords: regional innovation system, emerging economies, innovation network, institutions, organizations, Bangalore, Beijing

Procedia PDF Downloads 286
76 The Relevance of (Re)Designing Professional Paths with Unemployed Working-Age Adults

Authors: Ana Rodrigues, Maria Cadilhe, Filipa Ferreira, Claudia Pereira, Marta Santos

Abstract:

Professional paths must be understood in the multiplicity of their possible configurations. While some actors tend to represent their path as a harmonious succession of positions in the life cycle, most recognize the existence of unforeseen and uncontrollable bifurcations, caused, for example, by a work accident or a period of unemployment. Considering the intensified challenges posed by ongoing societal changes (e.g., technological and demographic), and looking at the Portuguese context, where the unemployment rate remains most evident in certain age groups, such as individuals aged 45 or over, it is essential to support those adults by providing strategies capable of sustaining them through professional transitions, this being a joint responsibility of governments, employers, workers, and educational institutions, among others. Concerned with these issues, Porto City Council launched the challenge of designing and implementing a lifelong career guidance program, which was answered with the presentation of a customized conceptual and operational model: groWing|Lifelong Career Guidance. A pilot project targeting working-age adults (35 or older) who were unemployed was carried out, aiming to support them in reconstructing their professional paths through the recovery of their past experiences and through reflection on dimensions such as skills, interests, constraints, and the labor market. A research action approach was used to assess the proposed model, namely the perceived relevance of the theme and of the project, by the adults themselves (N=44), employment professionals (N=15), and local companies (N=15), in an integrated manner. A set of activities was carried out: a train-the-trainer course and a monitoring session with employment professionals; collective and individual sessions with adults, including a monitoring session as well; and a workshop with local companies. Support materials for individual and collective reflection about professional paths were created and adjusted for each agent involved. An evaluation model was co-built by the different stakeholders. Assessment was carried out using a form created for the purpose, completed at the end of the different activities, which allowed us to collect quantitative and qualitative data. Statistical analysis was carried out using SPSS software. Results showed that the participants, as well as the employment professionals and the companies involved, considered both the topic and the project extremely relevant. The adults saw the project as an opportunity to reflect on their paths and become aware of the opportunities and the conditions necessary to achieve their goals; the professionals highlighted the support given by an integrated methodology and the existence of tools to assist the process; companies valued the opportunity to think about the topic and about possible initiatives they could implement to diversify their recruitment pool. The results allow us to conclude that, in the local context under study, there is an alignment between different agents regarding the pertinence of supporting adults with work experience in professional transitions, with the project seen as a relevant strategy to address this issue, which justifies extending it in time and to other working-age adults in the future.

Keywords: professional paths, research action, turning points, lifelong career guidance, relevance

Procedia PDF Downloads 62
75 Application of Flow Cytometry for Detection of Influence of Abiotic Stress on Plants

Authors: Dace Grauda, Inta Belogrudova, Alexei Katashev, Linda Lancere, Isaak Rashal

Abstract:

The goal of the study was to elaborate an easily applicable flow cytometry method for detecting the influence of abiotic stress factors on plants, which could be useful for detecting environmental stresses in urban areas. The lime tree Tilia vulgaris H. is a popular tree species used for urban landscaping in Europe and is one of the main species of street greenery in Riga, Latvia. Tree decline and low vitality have been observed in the central part of Riga; for this reason, lime trees were selected as a model object for the investigation. Between the end of June and the beginning of July, 12 samples from different locations in the urban environment, as well as plant material from a greenhouse, were collected. A BD FACSJazz® cell sorter (BD Biosciences, USA) with flow cytometer function was used to test the viability of plant cells. The method was based on changes in the relative fluorescence intensity of cells under the blue laser (488 nm) after the influence of stress factors. Sphero™ rainbow calibration particles (3.0–3.4 μm, BD Biosciences, USA) in phosphate-buffered saline (PBS) were used to calibrate the flow cytometer, and BD Pharmingen™ PBS (BD Biosciences, USA) was used for the flow cytometry assays. The mean fluorescence intensity of the purified cell suspension samples was recorded. Preliminarily, multiple gate sizes and shapes were tested to find one with the lowest CV; it was found that a low CV can be obtained if only the densest part of the plant cells' forward scatter/side scatter profile is analysed, because in this case the plant cells are most similar in size and shape. Young pollen cells at the one-nucleus stage were found to be the best for detecting the influence of abiotic stress. Only fresh plant material was used for the experiments: buds of Tilia vulgaris with a diameter of 2 mm. For establishing the cell suspension (in vitro culture), a modified microspore culture protocol was applied, and the cells were suspended in MS (Murashige and Skoog) medium. To imitate the dust of an urban area, SiO2 nanoparticles at a concentration of 0.001 g/ml were suspended in distilled water; 1 ml of the SiO2 nanoparticle suspension was added to 10 ml of cell suspension, and the cells were then incubated under a speed-shaking regime for 1 and 3 hours. As a further stress factor, the cells were irradiated for 20 min with UV (Hamamatsu light source L9566-02A, L10852 lamp, A10014-50-0110), with maximum relative intensity (100%) at 365 nm and ~310 nm (75%). Before UV irradiation, the cell suspension was placed in a thin layer on a filter paper disk (diameter 45 mm) in a Petri dish with solid MS medium. Untreated cells were used as a control. Experiments were performed at room temperature (23-25 °C). Using the BD FACS software, a cell plot was created to determine the densest part of the population, which was then gated using an oval-shaped gate; the gate included 95 to 99% of all cells. To determine the relative fluorescence of the cells, a logarithmic fluorescence scale in arbitrary fluorescence units was used, and 3×10³ gated cells were analysed from each sample. Significant differences were found between the relative fluorescence of cells from different trees after treatment with SiO2 nanoparticles and UV irradiation, in comparison with the control.
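
A rough sketch of the gating logic described above (keeping only the densest part of the FSC/SSC profile before comparing mean log-fluorescence), run on synthetic data since the study's raw event files are not available; the histogram-based gate is a crude stand-in for the oval gate drawn in the BD software:

```python
import numpy as np

def gate_densest(fsc, ssc, keep=0.97, bins=64):
    """Keep events in the densest FSC/SSC histogram bins until ~keep of events remain."""
    H, xe, ye = np.histogram2d(fsc, ssc, bins=bins)
    ix = np.clip(np.digitize(fsc, xe) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(ssc, ye) - 1, 0, bins - 1)
    counts = H[ix, iy]                        # local density at each event
    order = np.argsort(counts)[::-1]
    cutoff = counts[order[int(keep * len(fsc)) - 1]]
    return counts >= cutoff

rng = np.random.default_rng(0)                # synthetic events, not study data
fsc, ssc = rng.normal(100, 10, 50_000), rng.normal(80, 8, 50_000)
fluor = rng.lognormal(3.0, 0.5, 50_000)
mask = gate_densest(fsc, ssc)
print(mask.mean(), np.log10(fluor[mask]).mean())  # gated fraction, mean log-fluorescence
```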

Keywords: flow cytometry, fluorescence, SiO2 nanoparticles, UV irradiation

Procedia PDF Downloads 373
74 Positioning Mama Mkubwa Indigenous Model into Social Work Practice through Alternative Child Care in Tanzania: Ubuntu Perspective

Authors: Johnas Buhori, Meinrad Haule Lembuka

Abstract:

Introduction: Social work is expanding its boundaries to accommodate indigenous knowledge and practice for better competence and services. In Tanzania, Mama Mkubwa (MMM, the mother's elder sister) is an indigenous practice of alternative child care that represents other traditional practices across African societies known as Ubuntu practice. Ubuntu is African humanism, with values and approaches that are connected to social work. MMM relies on the elder sister of a deceased mother or father, or a trusted elder woman from the extended family or indigenous community, to provide alternative care to an orphan or vulnerable child. In the Ubuntu perspective, it takes a whole village or community to raise a child, meaning that every person in the community is responsible for child care. Methodology: A desk review method guided by Ubuntu theory was applied to enrich the study. Findings: MMM resembles the Ubuntu ideal of traditional protection of children in need as part of alternative child care throughout Tanzanian history. Social work practice, along with other formal alternative child care, was introduced in Tanzania during the colonial era in the 1940s; the socio-economic problems of the 1980s affected the country's formal social welfare system, and the HIV/AIDS pandemic then triggered the vulnerability of children and hampered the capacity of the formal sector to provide social welfare services, including alternative child care. For decades, AIDS has contributed to an influx of orphans and vulnerable children, which facilitated the re-emergence of traditional alternative child care at the community level, including MMM. MMM is strongly practiced in regions where the AIDS pandemic affected the community, such as Njombe, the Coastal region, and Kagera. Despite existing challenges, MMM has remained a remarkable form of alternative child care, practiced in both rural and urban communities and integrated with social welfare services. Tanzania envisions a traditional mechanism of family or community alternative child care, with the notion that institutional care sometimes fails to offer children all they need to become productive members of society, after which it becomes difficult for them to reconnect with society. Implications for social work: MMM is compatible with social work through its use of strengths perspectives; MMM reflects the Ubuntu perspective on the ground of humane social work, using humane methods to achieve human goals. MMM further demonstrates the connectedness of those who care and those cared for and the inextricable link between them, as an Ubuntu-inspired model of social work that views children from family, community, environmental, and spiritual perspectives. Conclusion: Social work and MMM are compatible at the micro and mezzo levels; thus, MMM can be applied in social work practice beyond Tanzania when properly designed and integrated into other systems. When MMM is applied in social work, alternative care has the potential not only to support children but also to empower families and communities. Since MMM is community-owned and voluntary-based, it can relieve the government, social workers, and other formal sectors from the annual cost burden of providing institutionalized alternative child care.

Keywords: ubuntu, indigenous social work, African social work, ubuntu social work, child protection, child alternative care

Procedia PDF Downloads 42
73 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood; this includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller-type conditions for triangular arrays, and we are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series; under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, and the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
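
A minimal numerical sketch of the central computation, the upper quantile of the minimum of an asymptotically normal GIC vector via multivariate Gaussian integration, using SciPy in place of the R package mvtnorm mentioned above; the mean vector and covariance are invented inputs:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

def min_gic_quantile(mu, cov, level=0.95):
    """q such that P(min_i X_i <= q) = level for X ~ N(mu, cov).
    Uses P(min X > q) = P(-X < -q), an MVN CDF evaluated at (-q, ..., -q)."""
    mu = np.asarray(mu, float)
    mvn = multivariate_normal(mean=-mu, cov=cov)

    def p_min_le(q):
        return 1.0 - mvn.cdf(np.full(len(mu), -q))

    s = np.sqrt(np.max(np.diag(cov)))  # bracket the root well beyond the means
    return brentq(lambda q: p_min_le(q) - level, mu.min() - 10 * s, mu.max() + 10 * s)

# Three equicorrelated candidate criteria, purely as an illustration
cov = 0.5 * np.eye(3) + 0.5 * np.ones((3, 3))
print(min_gic_quantile([0.0, 0.2, 0.5], cov, level=0.95))
```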

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 62
72 Immobilization of Superoxide Dismutase Enzyme on Layered Double Hydroxide Nanoparticles

Authors: Istvan Szilagyi, Marko Pavlovic, Paul Rouster

Abstract:

Antioxidant enzymes are the most efficient defense systems against reactive oxygen species, which cause severe damage in living organisms and industrial products. However, their supplementation is problematic due to their high sensitivity to environmental conditions, and immobilization on carrier nanoparticles is a promising research direction for improving their functional and colloidal stability. In that way, their applications in biomedical treatments and in manufacturing processes in the food, textile, and cosmetic industries can be extended. The main goal of the present research was to prepare and formulate antioxidant bionanocomposites composed of superoxide dismutase (SOD) enzyme, anionic clay (layered double hydroxide, LDH) nanoparticles, and heparin (HEP) polyelectrolyte. To characterize the structure and the colloidal stability of the obtained compounds in suspension and in the solid state, electrophoresis, dynamic light scattering, transmission electron microscopy, spectrophotometry, thermogravimetry, X-ray diffraction, and infrared and fluorescence spectroscopy were used as experimental techniques. The LDH-SOD composite was synthesized by enzyme immobilization on the clay particles via electrostatic and hydrophobic interactions, which resulted in strong adsorption of the SOD on the LDH surface, i.e., no enzyme leakage was observed once the material was suspended in aqueous solutions. However, the LDH-SOD showed only limited resistance against salt-induced aggregation, and large, irregularly shaped clusters formed within short time intervals even at lower ionic strengths. Since sufficiently high colloidal stability is a key requirement in most of the applications mentioned above, the nanocomposite was coated with HEP polyelectrolyte to develop highly stable suspensions of primary LDH-SOD-HEP particles. HEP is a natural anticoagulant with one of the highest negative line charge densities among known macromolecules. The experimental results indicated that it strongly adsorbed on the oppositely charged LDH-SOD surface, leading to charge inversion and to the formation of negatively charged LDH-SOD-HEP particles. The obtained hybrid materials formed stable suspensions even under extreme conditions, where classical colloid chemistry theories predict rapid aggregation of the particles and unstable suspensions. Such a stabilization effect originated from electrostatic repulsion between particles of the same sign of charge, as well as from steric repulsion due to the osmotic pressure arising from the overlap of the polyelectrolyte chains adsorbed on the surface. In addition, the SOD enzyme kept its structural and functional integrity during the immobilization and coating processes, and hence the LDH-SOD-HEP bionanocomposite possessed excellent activity in decomposing superoxide radical anions, as revealed in biochemical test reactions. In conclusion, due to its improved colloidal stability and good efficiency in scavenging superoxide radical anions, the developed enzymatic system is a promising antioxidant candidate for biomedical or other manufacturing processes wherever the aim is to decompose reactive oxygen species in suspension.

Keywords: clay, enzyme, polyelectrolyte, formulation

Procedia PDF Downloads 241
71 Designing Disaster Resilience Research in Partnership with an Indigenous Community

Authors: Suzanne Phibbs, Christine Kenney, Robyn Richardson

Abstract:

The Sendai Framework for Disaster Risk Reduction called for the inclusion of indigenous people in the design and implementation of all hazard policies, plans, and standards. Ensuring that indigenous knowledge practices were included alongside scientific knowledge about disaster risk was also a key priority. Indigenous communities have specific knowledge about climate and natural hazard risk that has been developed over an extended period of time. However, research within indigenous communities can be fraught with issues such as power imbalances between the researcher and researched, the privileging of researcher agendas over community aspirations, as well as appropriation and/or inappropriate use of indigenous knowledge. This paper documents the process of working alongside a Māori community to develop a successful community-led research project. Research Design: This case study documents the development of a qualitative community-led participatory project. The community research project utilizes a kaupapa Māori research methodology which draws upon Māori research principles and concepts in order to generate knowledge about Māori resilience. The research addresses a significant gap in the disaster research literature relating to indigenous knowledge about collective hazard mitigation practices as well as resilience in rurally isolated indigenous communities. The research was designed in partnership with the Ngāti Raukawa Northern Marae Collective as well as Ngā Wairiki Ngāti Apa (a group of Māori sub-tribes who are located in the same region) and will be conducted by Māori researchers utilizing Māori values and cultural practices. The research project aims and objectives, for example, are based on themes that were identified as important to the Māori community research partners. The research methodology and methods were also negotiated with and approved by the community. Kaumātua (Māori elders) provided cultural and ethical guidance over the proposed research process and will continue to provide oversight over the conduct of the research. Purposive participant recruitment will be facilitated with support from local Māori community research partners, utilizing collective marae networks and snowballing methods. It is envisaged that Māori participants’ knowledge, experiences and views will be explored using face-to-face communication research methods such as workshops, focus groups and/or semi-structured interviews. Interviews or focus groups may be held in English and/or Te Reo (Māori language) to enhance knowledge capture. Analysis, knowledge dissemination, and co-authorship of publications will be negotiated with the Māori community research partners. Māori knowledge shared during the research will constitute participants’ intellectual property. New knowledge, theory, frameworks, and practices developed by the research will be co-owned by Māori, the researchers, and the host academic institution. Conclusion: An emphasis on indigenous knowledge systems within the Sendai Framework for Disaster Risk Reduction risks the appropriation and misuse of indigenous experiences of disaster risk identification, mitigation, and response. The research protocol underpinning this project provides an exemplar of collaborative partnership in the development and implementation of an indigenous project that has relevance to policymakers, academic researchers, other regions with indigenous communities and/or local disaster risk reduction knowledge practices.

Keywords: community resilience, indigenous disaster risk reduction, Maori, research methods

Procedia PDF Downloads 101