Search results for: two-dimensional solute transport system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19010


1340 Analysis of the Treatment of Hemorrhagic Stroke in Multidisciplinary City Hospital №1, Nur-Sultan

Authors: M. G. Talasbayen, N. N. Dyussenbayev, Y. D. Kali, R. A. Zholbarysov, Y. N. Duissenbayev, I. Z. Mammadinova, S. M. Nuradilov

Abstract:

Background. Hemorrhagic stroke is an acute cerebrovascular accident resulting from rupture of a cerebral vessel or from increased permeability of the vessel wall and imbibition of blood into the brain parenchyma. Arterial hypertension is a common cause of hemorrhagic stroke, and male gender and age over 55 years are risk factors for intracerebral hemorrhage. Treatment of intracerebral hemorrhage is aimed at the primary pathophysiological link: relief of coagulopathy and control of arterial hypertension. Early surgical treatment can limit cerebral compression and prevent the toxic effects of blood on the brain parenchyma. Despite progress in neuroimaging, minimally invasive techniques, and navigation systems, mortality from intracerebral hemorrhage remains high. Materials and methods. The study included 78 patients (62.82% male and 37.18% female) with a verified diagnosis of hemorrhagic stroke in the period from 2019 to 2021. Patient ages ranged from 25 to 80 years; the average age was 54.66±11.9 years. Demographic data, brain CT findings (localization and volume of hematomas), treatment methods, and disease outcomes were analyzed. Results. The retrospective analysis demonstrates that 78.2% of all patients underwent surgical treatment: decompressive craniectomy in 37.7%, craniotomy with hematoma evacuation in 29.5%, and hematoma drainage in 24.59% of cases. Analysis of the proportion of deaths by intracerebral hemorrhage volume shows that the number of deaths was higher in the group with a hematoma volume of more than 60 ml. Evaluation of the relationship between time to surgery and mortality demonstrates that the most favorable outcome is observed when surgical treatment occurs in the interval from 3 to 24 hours. Mortality did not differ significantly between age groups. 
An analysis of the impact of surgery type on mortality reveals that decompressive craniectomy with or without hematoma evacuation led to an unfavorable outcome in 73.9% of cases, while craniotomy with hematoma evacuation and drainage led to mortality in only 28.82% of cases. Conclusion. Despite multimodal approaches, advances in surgical techniques and equipment, and the selection of optimal conservative therapy, determining the tactics of managing and treating hemorrhagic strokes remains controversial. Nevertheless, our experience shows that surgical intervention within 24 hours of admission and craniotomy with hematoma evacuation improve the prognosis of treatment outcomes.

Keywords: hemorrhagic stroke, intracerebral hemorrhage, surgical treatment, stroke mortality

Procedia PDF Downloads 108
1339 Seal and Heal Miracle Ointment: Effects of Cryopreserved and Lyophilized Amniotic Membrane on Experimentally Induced Diabetic Balb/C Mice

Authors: Elizalde D. Bana

Abstract:

Healing restores continuity and form through cell replication, thereby conserving structural integrity. In response to the worldwide problem of chronic wounds in the healthcare delivery system, the researcher aims to provide an effective intervention to preserve the structural integrity of the person. The wound-healing effects of cryopreserved and lyophilized amniotic membrane (AM) from a term fetus, embedded into two concentrations (1.5% and 1.0%) of an absorption-based ointment, were evaluated in vivo using a 1 × 1 cm excision wound-healing model. The total protein concentration of the term-fetus amniotic membrane was determined by the Biuret and Bradford methods, which are based on UV-visible spectroscopy. The protein content of a 9.5 mg total sample of amniotic membrane ranged from 14.46% to 14.77% with the Bradford method, and was slightly lower, 13.78-13.80%, with the Biuret method. The Bradford method evidently showed higher sensitivity for proteins than the Biuret test. Overall, the amniotic membrane is composed principally of proteins, whose healing abilities a copious body of literature has substantially demonstrated. A 1 cm × 1 cm area of skin was then excised to its full thickness from the dorsolateral aspect of the isogenic mice, and the ointment formulation was applied twice a day at the two concentrations to both the diabetic and non-diabetic groups. The wounds were left undressed, and their areas were measured by a standard measurement formula on days 2, 4, 6, 8, 10, 12, and 14. By the 14th day, the ointment containing 1.5% AM in the absorption-based ointment showed 100% healing in both the non-diabetic and diabetic groups. 
The wound areas in the animals treated with the standard antibiotic, Mupirocin ointment (Brand X), also showed 100% healing by the 14th day but with traces of scars, indicating that AM prepared by cryopreservation and lyophilization, at the given concentration, had better wound-healing properties than the standard antibiotic. Four multivariate tests showed a significant interaction between days and treatments, meaning that the ointments prepared at the two concentrations and applied to the different groups of mice had a significant effect on the percent of contraction over time. Furthermore, the evaluations of wound-healing effectiveness were all significant, although to differing degrees. It was observed that the higher the concentration of amniotic membrane, the more effective the results.
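The abstract does not state its "standard measurement formula" for wound area; a commonly used form, shown here as an assumption, expresses healing as the percentage reduction of the initial wound area:

```python
def percent_contraction(initial_area_mm2: float, area_on_day_mm2: float) -> float:
    """Percent wound contraction relative to the day-0 excision area.

    Assumed formula for illustration; the abstract does not give the exact
    "standard measurement formula" used by the author.
    """
    return (initial_area_mm2 - area_on_day_mm2) / initial_area_mm2 * 100

# A 1 cm x 1 cm (100 mm^2) excision that has shrunk to 20 mm^2:
print(percent_contraction(100.0, 20.0))  # -> 80.0 (% contraction)
```

On this convention, "100% healing by day 14" corresponds to a wound area of zero on the final measurement day.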

Keywords: wounds, healing, amniotic membrane ointments, biomedical, stem cell

Procedia PDF Downloads 303
1338 Optimization-Based Design of Decelerating Ducts for Pumpjets

Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan

Abstract:

Pumpjets are among the marine propulsion systems frequently used in underwater vehicles nowadays. They are used frequently because they offer higher relative efficiency at high speeds and better cavitation and acoustic performance than their rivals. A pumpjet is composed of a rotor, a stator, and a duct, and there are two pumpjet configurations, depending on the desired hydrodynamic characteristics: with an accelerating duct and with a decelerating duct. Pumpjets with an accelerating duct are used on cargo ships, which operate at low speeds and high loading conditions. This type of pumpjet maximizes thrust by reducing the pressure of the fluid through the duct and ejecting the fluid from the duct with high momentum. For decelerating ducted pumpjets, on the other hand, the main consideration is to prevent the occurrence of cavitation by increasing the pressure of the fluid around the rotor region. By postponing cavitation, acoustic noise naturally decreases, so decelerating ducted systems are used in noise-sensitive vehicles where acoustic performance is vital. Duct design therefore becomes a crucial step in pumpjet design. This study aims to optimize the duct geometry of a decelerating ducted pumpjet for a high-speed underwater vehicle using proper optimization tools. The target of this optimization process is a duct design that maximizes fluid pressure around the rotor region, to prevent cavitation, and minimizes drag force. Two main optimization techniques could be utilized for this process: parameter-based optimization and gradient-based optimization. While parameter-based algorithms allow major changes in the geometry of interest, bringing the user close to the desired geometry, gradient-based algorithms deal with minor local changes in the geometry. 
In parameter-based optimization, the geometry is parameterized first. Then, by defining upper and lower limits for these parameters, a design space is created. Finally, with proper optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercial parameter-based optimization code is used. To parameterize the geometry, the duct is represented with B-spline curves and control points. The control points have limits on their x and y coordinates, and the design space is generated within these limits.
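The control-point parameterization described above can be sketched as follows. This is an illustrative example with hypothetical control-point coordinates, not the authors' commercial code; with exactly four control points and a clamped cubic knot vector, a B-spline reduces to a cubic Bézier curve, evaluated here with Bernstein polynomials:

```python
import numpy as np

# Hypothetical (x, y) control points for one duct profile curve; the real
# study would use design-specific coordinates and bounds.
ctrl = np.array([[0.0, 1.00], [0.3, 1.05], [0.6, 1.02], [1.0, 0.95]])

def duct_profile(u, cp=ctrl):
    """Evaluate a clamped cubic B-spline (here, a cubic Bezier) at u in [0, 1]."""
    u = np.atleast_1d(u).astype(float)
    basis = np.column_stack([(1 - u) ** 3,            # Bernstein basis B0..B3
                             3 * u * (1 - u) ** 2,
                             3 * u ** 2 * (1 - u),
                             u ** 3])
    return basis @ cp                                  # (len(u), 2) profile points

pts = duct_profile(np.linspace(0, 1, 50))
# The optimizer explores the design space by perturbing each control point's
# x and y coordinates within their upper/lower limits and re-evaluating the
# resulting duct shape in the flow solver.
```

The clamped curve starts at the first control point and ends at the last, so the duct's leading and trailing edges stay pinned while the interior shape varies.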

Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization

Procedia PDF Downloads 211
1337 Structure and Tribological Properties of a Moisture-Insensitive Si-Containing Diamond-Like Carbon Film

Authors: Mingjiang Dai, Qian Shi, Fang Hu, Songsheng Lin, Huijun Hou, Chunbei Wei

Abstract:

Diamond-like carbon (DLC) is considered a promising protective film because of its high hardness and excellent tribological properties. However, DLC films are very sensitive to environmental conditions; their friction coefficient can change dramatically at high humidity, which limits their further application in aerospace, the watch industry, and micro/nano-electromechanical systems. Most studies therefore focus on achieving a low friction coefficient for DLC films in highly humid environments. This alone, however, is not satisfactory in practical applications. An important point is often ignored: DLC-coated components are usually used in diverse environments, which means their friction coefficient may change markedly under different humidity conditions. As a result, DLC-coated components can fail, sometimes with disastrous consequences. For example, DLC-coated mini-size gears are used in the watch industry, and the wearer may frequently move between locations with different weather and humidity, even within one day. If the friction coefficient is not stable in both dry and highly humid conditions, the watch will be inaccurate. It is thus necessary to investigate the stable tribological behavior of DLC films in various environments. In this study, a-C:H:Si films were deposited by a multi-function magnetron sputtering system containing one ion-source device, a pair of SiC dual mid-frequency targets, and two direct-current Ti/C targets. Hydrogenated carbon layers were produced by sputtering the graphite target in argon and methane gases. Silicon was doped into the DLC coatings by sputtering the silicon carbide targets, and the doping content was adjusted through the mid-frequency sputtering current. The microstructure of the films was characterized by Raman spectrometry, X-ray photoelectron spectroscopy, and transmission electron microscopy, while their friction behavior under different humidity conditions was studied using a ball-on-disc tribometer. 
a-C:H films with Si contents from 0 to 17 at.% were obtained, and the influence of Si content on their structure and tribological properties at relative humidities of 50% and 85% was investigated. The results show that the a-C:H:Si films have typical diamond-like characteristics, in which Si exists mainly in the form of Si, SiC, and SiO2. As expected, the friction coefficient of the a-C:H films changed markedly after Si doping, from 0.302 to 0.176 at RH 50%. Further tests show that the friction coefficient of the a-C:H:Si films at RH 85% first increases and then decreases as a function of Si content. We found that the a-C:H:Si film with a Si content of 3.75 at.% shows a stable friction coefficient of 0.13 in different humidity environments. It is suggested that the sp3/sp2 ratio of the a-C:H film with 3.75 at.% Si was higher than that of the others, which tends to form silica-gel-like sacrificial layers during friction tests. The films therefore deliver a stable low friction coefficient at controlled RH values of 50% and 85%.

Keywords: diamond-like carbon, Si doping, moisture environment, stable low friction coefficient

Procedia PDF Downloads 368
1336 Lean Implementation in a Nurse Practitioner Led Pediatric Primary Care Clinic: A Case Study

Authors: Lily Farris, Chantel E. Canessa, Rena Heathcote, Susan Shumay, Suzanna V. McRae, Alissa Collingridge, Minna K. Miller

Abstract:

Objective: To describe how the Lean approach can be applied to improve access, quality, and safety of care in an ambulatory pediatric primary care setting. Background: Lean was originally developed by Toyota manufacturing in Japan and subsequently adapted for use in the healthcare sector. Lean is a systematic approach focused on identifying and reducing waste within organizational processes, improving patient-centered care and efficiency. Limited literature is available on the implementation of Lean methodologies in a pediatric ambulatory care setting. Methods: A strategic continuous improvement event, or Rapid Process Improvement Workshop (RPIW), was launched with the aim of evaluating and structurally supporting clinic workflow, capacity building, and sustainability, and ultimately improving access to care and enhancing the patient experience. The Lean process consists of five specific activities: current state/process assessment (value stream map); development of a future state map (value stream map after waste reduction); identification, quantification, and prioritization of process improvement opportunities; implementation and evaluation of process changes; and audits to sustain the gains. Staff engagement is a critical component of the Lean process. Results: Through the implementation of the RPIW and shifting of workload among the administrative team, four hours of wasted time moving between desks and doing work was eliminated from the Administrative Clerks' role. To streamline clinic flow, the Nursing Assistants completed patient measurements and vitals for the Nurse Practitioners, reducing patient wait times and adding value to patients' visits with the Nurse Practitioners. Additionally, through the Nurse Practitioners' engagement in the Lean process, a need was recognized to articulate the clinic's vision and mission and the alignment of the NP role and scope of practice with the agency's and Ministry of Health's strategic plans. 
Conclusions: Continuous improvement work in the Pediatric Primary Care NP Clinic has provided a unique opportunity to improve the quality of care delivered and has facilitated further alignment of daily continuous improvement work with the strategic priorities of the Ministry of Health.

Keywords: ambulatory care, lean, pediatric primary care, system efficiency

Procedia PDF Downloads 304
1335 Challenging Weak Central Coherence: An Exploration of Neurological Evidence from Visual Processing and Linguistic Studies in Autism Spectrum Disorder

Authors: Jessica Scher Lisa, Eric Shyman

Abstract:

Autism spectrum disorder (ASD) is a neuro-developmental disorder characterized by persistent deficits in social communication and social interaction (i.e., deficits in social-emotional reciprocity, nonverbal communicative behaviors, and establishing/maintaining social relationships), as well as by the presence of repetitive behaviors and perseverative areas of interest (i.e., stereotyped or repetitive motor movements, use of objects, or speech; rigidity; restricted interests; and hypo- or hyperreactivity to sensory input or unusual interest in sensory aspects of the environment). Additionally, a diagnosis of ASD requires the presentation of symptoms in the early developmental period, marked impairments in adaptive functioning, and a lack of explanation by general intellectual impairment or global developmental delay (although these conditions may be co-occurring). Over the past several decades, many theories have been developed in an effort to explain the root cause of ASD in terms of atypical central cognitive processes. The field of neuroscience is increasingly finding structural and functional differences between autistic and neurotypical individuals using neuro-imaging technology. One main focus of this research has been visuospatial processing, with specific attention to the notion of ‘weak central coherence’ (WCC). This paper analyzes findings from selected studies in order to explore research that challenges the ‘deficit’ characterization of weak central coherence in favor of a ‘superiority’ characterization of strong local coherence. The weak central coherence theory has long been both supported and refuted in the ASD literature and has most recently been increasingly challenged by advances in neuroscience. The selected studies lend evidence to the notion of amplified localized perception rather than deficient global perception. 
In other words, WCC may represent superiority in ‘local processing’ rather than a deficit in global processing. Additionally, the right hemisphere, and specifically the extrastriate area, appears to be key in both visual and lexicosemantic processing. Overactivity in the striate region seems to suggest inaccuracy in semantic language, which supports the link between the striate region and the atypical organization of the lexicosemantic system in ASD.

Keywords: autism spectrum disorder, neurology, visual processing, weak coherence

Procedia PDF Downloads 130
1334 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait

Authors: Obaid AlOtaibi, Salman Hussain

Abstract:

Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography, and environmental impacts, and with several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and least expensive method of generating electricity. Wind energy generation is directly related to the characteristics of spatial wind. Therefore, a feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects away from conventional electricity services, to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied a suitability model, implemented in the ArcGIS program, to the analysis. This spatial analysis model compares multiple locations based on grading weights to choose the most suitable one. The study criteria were: average annual wind speed, land use, land topography, distance from the main road networks, and urban areas. According to these criteria, four proposed locations for establishing wind farm projects were selected based on the weights of the degree of suitability (excellent, good, average, and poor). 
The areas that represent the most suitable locations, with an excellent rank (4), cover 8% of Kuwait’s area, distributed as follows: Al-Shqaya, Al-Dabdeba, and Al-Salmi (5.22%); Al-Abdali (1.22%); Umm al-Hayman (0.70%); and North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider the proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for future development of wind farms in Kuwait, as this location is economically feasible.
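The weighted suitability ranking described above can be sketched as a raster overlay. The criterion values and weights below are assumptions for illustration, not the authors' ArcGIS model:

```python
import numpy as np

# Each criterion is reclassified to a 1-4 suitability score
# (1 = poor ... 4 = excellent); tiny 2x2 rasters stand in for the real grids.
wind_speed = np.array([[4, 3], [2, 4]])
land_use   = np.array([[3, 4], [1, 4]])
slope      = np.array([[4, 2], [3, 4]])
road_dist  = np.array([[3, 3], [2, 4]])

# Hypothetical grading weights (must sum to 1); the study's actual
# weights are not given in the abstract.
weights = {"wind": 0.40, "land": 0.25, "slope": 0.20, "road": 0.15}

suitability = (weights["wind"] * wind_speed + weights["land"] * land_use
               + weights["slope"] * slope + weights["road"] * road_dist)
excellent = suitability >= 3.5   # cells falling in the "excellent" class
print(np.round(suitability, 2))
```

The share of `excellent` cells over the whole raster corresponds to the reported 8% of Kuwait's area ranked excellent.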

Keywords: Kuwait, renewable energy, spatial analysis, wind energy

Procedia PDF Downloads 151
1333 Development of a Process Method to Manufacture Spreads from Powder Hardstock

Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien

Abstract:

It has been over 150 years since margarine was first manufactured, using liquid oil, liquified hardstock oils, and other oil-phase and aqueous-phase ingredients. Henry W. Bradley first used vegetable oils in the liquid state around 1871, and since then, spreads have traditionally been manufactured using liquified oils. The main objective of this study was to develop a process method to produce spreads using spray-dried hardstock fat powders as structuring fats in place of the current liquid structuring fats. A high-shear mixing system was used to condition the fat phase, and the aqueous phase was prepared separately. Using a single scraped-surface heat exchanger and a pin stirrer, margarine was produced. The process method was developed to produce spreads with 40%, 50%, and 60% fat. The developed method was divided into three steps. In the first step, fat powders were conditioned by melting and dissolving them into liquid oils. The liquified portion of the oils was at 65 °C, while the spray-dried fat powder was at 25 °C. The two were mixed in a mixing vessel at 900 rpm for 4 minutes. The rest of the ingredients (lecithin, colorant, vitamins, and flavours) were added at ambient conditions to complete the fat/oil phase. The water phase was prepared separately by mixing salt, water, preservative, and acidifier in a mixing tank. Milk was also prepared separately, pasteurized at 79 °C before being fed into the aqueous phase. All the water-phase contents were chilled to 8 °C. The oil phase and water phase were mixed in a tank and then fed into a single scraped-surface heat exchanger. After the scraped-surface heat exchanger, the emulsion was fed into a pin stirrer to work the formed crystals and produce margarine. The margarine produced using the developed process had fat levels of 40%, 50%, and 60%. It passed all the qualitative, stability, and taste assessments, with scores of 6/10, 7/10, and 7.5/10 for the 40%, 50%, and 60% fat spreads, respectively. 
The success of the trials brought differentiated knowledge on how to manufacture spreads using non-micronized spray-dried fat powders as hardstock. Manufacturers do not need to store structuring fats at 80-90 °C, even in winter; instead, they can adapt their processes to use fat powders, which need to be stored at only 25 °C. The developed process method used one scraped-surface heat exchanger instead of the four to five currently used in votator-based plants. The use of a single scraped-surface heat exchanger translated to about 61% energy savings, i.e., 23 kW per ton of product. Furthermore, the energy saved by implementing separate pasteurization was calculated to be 6.5 kW per ton of product produced.
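The reported figures imply a baseline energy demand that the abstract does not state explicitly; the arithmetic below derives it from the quoted numbers (the baseline is an inference, not a reported value):

```python
# Figures quoted in the abstract:
savings_kw_per_ton = 23.0   # saving from using a single scraped-surface HX
savings_fraction = 0.61     # the same saving expressed as 61%

# Implied baseline demand of the conventional 4-5 exchanger line:
baseline = savings_kw_per_ton / savings_fraction
remaining = baseline - savings_kw_per_ton      # demand of the new process
pasteurization_saving = 6.5                    # additional reported saving

print(round(baseline, 1))   # -> 37.7 (kW per ton, implied conventional line)
print(round(remaining, 1))  # -> 14.7 (kW per ton, single-exchanger line)
```

The two reported numbers are thus mutually consistent: 23 kW per ton is 61% of an implied ~37.7 kW per ton conventional demand.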

Keywords: margarine emulsion, votator technology, margarine processing, scraped surface heat exchanger, fat powders

Procedia PDF Downloads 91
1332 CO₂ Conversion by Low-Temperature Fischer-Tropsch

Authors: Pauline Bredy, Yves Schuurman, David Farrusseng

Abstract:

To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept combines CO₂ with hydrogen supplied by water electrolysis powered by renewable sources; it is currently gaining interest because it allows the production of sustainable, fossil-free liquid fuels. The process proposed here is an upgrading of the well-known Fischer-Tropsch synthesis. The concept involves two cascade reactions in one pot: first, the conversion of CO₂ into CO via the reverse water-gas shift (RWGS) reaction, followed by the Fischer-Tropsch synthesis (FTS). Instead of using an Fe-based catalyst, which can carry out both reactions, we have chosen the strategy of decoupling the two functions (RWGS and FT) on two different catalysts within the same reactor. The FTS shifts the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250 °C) by converting the CO into hydrocarbons. This strategy enables optimization of the catalyst pair and thus lowers the reaction temperature, thanks to the equilibrium shift, to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst and in making the FT catalyst highly selective. Methane production is the main concern, as the energy barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report a study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behavior under mild process conditions using high-throughput experimentation. Our results show that at 250 °C and 20 bar, the cobalt catalysts mainly act as methanation catalysts. 
Indeed, CH₄ selectivity never drops below 80%, despite the addition of various promoters (Nb, K, Pt, Cu) to the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to longer-hydrocarbon-chain selectivities (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system; similar selectivity and conversion were obtained at reduction temperatures between 250 and 400 °C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Coupling with a more selective FT catalyst is thus expected to give better results. This was achieved using a cobalt/iron FTS catalyst: the CH₄ selectivity dropped to 62% at 265 °C, 20 bar, and a GHSV of 2500 ml/h/gcat. We propose that the conditions used for the cobalt catalysts could have favored this methanation, because these catalysts are known to perform best around 210 °C in classical FTS, whereas iron catalysts are more flexible and are also known to have RWGS activity.

Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process

Procedia PDF Downloads 60
1331 Optimizing Hydrogen Production from Biomass Pyro-Gasification in a Multi-Staged Fluidized Bed Reactor

Authors: Chetna Mohabeer, Luis Reyes, Lokmane Abdelouahed, Bechara Taouk

Abstract:

In the transition to sustainability and the increasing use of renewable energy, hydrogen will play a key role as an energy carrier. Biomass has the potential to accelerate the realization of hydrogen as a major fuel of the future. Pyro-gasification converts organic matter mainly into synthesis gas, or “syngas”, composed chiefly of CO, H2, CH4, and CO2. A second, condensable fraction of the biomass pyro-gasification products is “tars”. Under certain conditions, tars may decompose into hydrogen and other light hydrocarbons. These conditions include two types of cracking: homogeneous cracking, where tars decompose under the effect of temperature (> 1000 °C), and heterogeneous cracking, where catalysts such as olivine, dolomite, or biochar are used. The latter process favors the cracking of tars at temperatures close to pyro-gasification temperatures (~850 °C). Pyro-gasification of biomass coupled with the water-gas shift is the most widely practiced process route from biomass to hydrogen today. In this work, an innovative solution is proposed for this conversion route, in that all the pyro-gasification products, not only methane, undergo processes that aim to optimize hydrogen production. First, a heterogeneous cracking step was included in the reaction scheme, using biochar (the solid remaining from the pyro-gasification reaction) as catalyst and CO2 and H2O as gasifying agents. This process was followed by a catalytic steam methane reforming (SMR) step, for which a Ni-based catalyst was tested under different reaction conditions to optimize the H2 yield. Finally, a water-gas shift (WGS) reaction step with an Fe-based catalyst was added to optimize the H2 yield from CO. The reactor used for cracking was a fluidized bed reactor, and the one used for SMR and WGS was a fixed bed reactor. The gaseous products were analyzed continuously using a µ-GC (Fusion PN 074-594-P1F). 
With biochar as bed material, more H2 was obtained with steam as the gasifying agent (32 mol% vs. 15 mol% with CO2 at 900 °C). CO and CH4 production was also higher with steam than with CO2. Steam as the gasifying agent and biochar as the bed material were hence deemed efficient parameters for the first step. Among all the parameters tested, CH4 conversions approaching 100% were obtained in SMR reactions using Ni/γ-Al2O3 as the catalyst at 800 °C and a steam/methane ratio of 5, giving rise to about 45 mol% H2. Experiments on the WGS reaction are currently being conducted. At the end of this phase, the four reactions will be performed consecutively and the results analyzed. The final aim is the development of a global kinetic model of the whole system in a multi-staged fluidized bed reactor that can be transferred to ASPEN Plus™.
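As a stoichiometric reference point for the SMR step, the mole balance for CH4 + H2O → CO + 3H2 can be sketched as below. This is an idealized balance that ignores the WGS and side reactions, so it will not reproduce the reported ~45 mol% H2, which reflects real reactor behavior:

```python
def smr_outlet(ch4_in=1.0, steam_to_methane=5.0, conversion=1.0):
    """Wet-basis outlet mole fractions for CH4 + H2O -> CO + 3 H2.

    Idealized stoichiometry only (no WGS, no side reactions); illustrative,
    not the authors' measured composition.
    """
    reacted = ch4_in * conversion
    out = {
        "CH4": ch4_in - reacted,
        "H2O": ch4_in * steam_to_methane - reacted,  # one H2O per CH4 reacted
        "CO":  reacted,
        "H2":  3 * reacted,
    }
    total = sum(out.values())
    return {k: v / total for k, v in out.items()}

frac = smr_outlet()                 # S/C = 5, full conversion
print(round(frac["H2"], 3))         # -> 0.375 (3 of 8 wet-basis moles)
```

The downstream WGS step raises the H2 fraction further by converting the CO with the excess steam, which is the motivation for adding it after the SMR stage.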

Keywords: multi-staged fluidized bed reactor, pyro-gasification, steam methane reforming, water-gas shift

Procedia PDF Downloads 140
1330 The Regulation of Alternative Dispute Resolution Institutions in Consumer Redress and Enforcement: A South African Perspective

Authors: Jacolien Barnard, Corlia Van Heerden

Abstract:

Effective and accessible consensual dispute resolution, and in particular alternative dispute resolution, are central to consumer protection legislation. In this regard, the Consumer Protection Act 68 of 2008 (CPA) of South Africa is no exception. Given the nature of consumer disputes, alternative dispute resolution is (in theory) an effective vehicle for adjudicating disputes in a timely manner while avoiding overburdening of the courts. The CPA sets down as one of its core purposes the provision of ‘an accessible, consistent, harmonised, effective and efficient system of redress for consumers’ (section 3(1)(h) of the CPA). Section 69 of the Act provides for the enforcement of consumer rights and establishes the National Consumer Commission as the central authority that streamlines, adjudicates, and channels disputes to the appropriate forums, which include alternative dispute resolution agents (ADR agents). The purpose of this paper is to analyze the regulation of these enforcement and redress mechanisms, with particular focus on the central authority as well as the ADR agents and their crucial role in the successful and efficient adjudication of disputes in South Africa. The South African position is discussed comparatively with the position in the European Union (EU). In this regard, the EU Directive on Alternative Dispute Resolution for Consumer Disputes (2013/11/EU) (the ADR Directive) is discussed. The aim of the ADR Directive is to resolve contractual disputes between consumers and traders (suppliers or businesses), regardless of whether the agreement was concluded offline or online, and whether or not the trader is situated in another member state (Recitals 4-6). The ADR Directive provides a set of quality requirements, including regulatory mechanisms for control, that an ADR body or entity tasked with resolving consumer disputes in a member state should adhere to. 
Transparency, effectiveness, fairness, liberty, and legality are all requirements for a successful ADR body and are discussed in Chapter II of the Directive. Chapters III and IV govern the importance of information and co-operation. This includes information exchange between ADR bodies and the European Commission (EC), but also between ADR bodies or entities and the national authorities enforcing legal acts on consumer protection, and traders. (In South Africa, the National Consumer Tribunal, Provincial Consumer Protectors, and industry ombuds come to mind.) All of these have a responsibility to keep consumers informed. Ultimately, the paper aims to provide recommendations on the success of the current South African position in light of the comparative position in Europe, and to highlight the importance of proper regulation of these redress and enforcement institutions.

Keywords: alternative dispute resolution, consumer protection law, enforcement, redress

Procedia PDF Downloads 242
1329 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate calculations. Further checks were created within the automated system to ensure the validity of replicate analysis in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets.
Human errors were minimized in calculations when procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
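The two automated calculation steps lend themselves to simple formula cells; a minimal sketch of the underlying arithmetic is shown below. It assumes the standard 1:1 Zn-EDTA complexometric stoichiometry, and all constants, weights and titre volumes are illustrative rather than taken from the USP monograph or the study's spreadsheets:

```python
# Illustrative sketch of the two spreadsheet steps. The 1:1 Zn:EDTA
# stoichiometry is standard chemistry; the numeric inputs are invented.
ZN_ATOMIC_MASS = 65.38      # g/mol, elemental zinc

def edta_molarity(zinc_mass_g, titre_ml):
    """Step 1: standardize EDTA against a weighed zinc reference (1:1 complex)."""
    moles_zn = zinc_mass_g / ZN_ATOMIC_MASS
    return moles_zn / (titre_ml / 1000.0)

def assay_percent_label_claim(titre_ml, molarity, tablets_in_sample, label_mg_zn):
    """Step 2: percent of labelled elemental zinc recovered per tablet."""
    mg_zn_found = (titre_ml / 1000.0) * molarity * ZN_ATOMIC_MASS * 1000.0
    return 100.0 * (mg_zn_found / tablets_in_sample) / label_mg_zn

molarity = edta_molarity(zinc_mass_g=0.6538, titre_ml=100.0)  # 0.1 M by construction
print(round(molarity, 4))
print(round(assay_percent_label_claim(30.6, molarity, 10, 20.0), 1))
```

A replicate-validity check of the kind described in the abstract could then be a simple relative-standard-deviation gate applied over repeated titre volumes.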

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 171
1328 Tuning the Surface Roughness of Patterned Nanocellulose Films: An Alternative to Plastic-Based Substrates for Circuit Printing in High-Performance Electronics

Authors: Kunal Bhardwaj, Christine Browne

Abstract:

With the increase in global awareness of the environmental impacts of plastic-based products, there has been a massive drive to reduce our use of these products. The use of plastic-based substrates in electronic circuits has been a matter of concern recently. Plastics provide a very smooth and cheap surface for printing high-performance electronics due to their non-permeability to ink and easy mouldability. In this research, we explore the use of nanocellulose (NC) films in electronics, as they provide the advantage of being 100% recyclable and eco-friendly. The main hindrance to the mass adoption of NC film as a substitute for plastic is its higher surface roughness, which leads to ink penetration and dispersion in the channels on the film. This research was conducted to tune the RMS roughness of NC films to a range where they can replace plastics in electronics (310-470 nm). We studied the dependence of the surface roughness of the NC film on the following tunable aspects: 1) the composition by weight of the NC suspension that is sprayed on a silicon wafer, and 2) the width and depth of the channels on the silicon wafer used as a base. Various silicon wafers with channel depths ranging from 6 to 18 µm and channel widths ranging from 5 to 500 µm were used as a base. The spray coating method for NC film production was used, and two suspensions, namely 1.5 wt% NC and a 50-50 NC-CNC (cellulose nanocrystal) mixture in distilled water, were sprayed through a Wagner sprayer system model 117 at an angle of 90 degrees. The silicon wafer was kept on a conveyor moving at a velocity of 1.3±0.1 cm/s. Once the suspension was uniformly sprayed, the mould was left to dry in an oven at 50 °C overnight. Images of the films were taken with an optical profilometer, Olympus OLS 5000. These images were converted into the '.lext' format and analyzed using Gwyddion, a data and image analysis software package.
The lowest measured RMS roughness, 291 nm, was obtained with the 50-50 CNC-NC mixture sprayed on a silicon wafer with a channel width of 5 µm and a channel depth of 12 µm. Surface roughness values of 320±17 nm were achieved at the lower channel widths (5 to 10 µm) on a silicon wafer. This research opened the possibility of using 100% recyclable NC films with an additive (50% CNC) in high-performance electronics. The possibility of using additives such as carboxymethyl cellulose (CMC) is also being explored, based on the hypothesis that CMC would reduce friction amongst fibers, which in turn would lead to better conformations among the NC fibers. CMC addition would thus help tune the surface roughness of the NC film to an even greater extent in the future.
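The RMS roughness values quoted above are the root-mean-square deviation of the measured surface heights; a minimal sketch of the computation (the toy height map is invented, not profilometer output):

```python
import math

def rms_roughness(height_map):
    """RMS (Sq) roughness of a 2-D height map, in the input units (nm here)."""
    values = [z for row in height_map for z in row]
    mean = sum(values) / len(values)
    return math.sqrt(sum((z - mean) ** 2 for z in values) / len(values))

# Toy 2x3 height map in nm, centred on zero for readability
print(round(rms_roughness([[0, 100, 0], [0, -100, 0]]), 1))
```

In practice an analysis package such as Gwyddion computes this (and related parameters) directly from the profilometer scan.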

Keywords: nanocellulose films, electronic circuits, nanocrystals, surface roughness

Procedia PDF Downloads 126
1327 Network Analysis to Reveal Microbial Community Dynamics in the Coral Reef Ocean

Authors: Keigo Ide, Toru Maruyama, Michihiro Ito, Hiroyuki Fujimura, Yoshikatu Nakano, Shoichiro Suda, Sachiyo Aburatani, Haruko Takeyama

Abstract:

Understanding environmental systems is an important task. In recent years, the conservation of coral environments has been a focus of biodiversity research. Damage to coral reefs under environmental impacts has been observed worldwide. However, the causal relationship between coral damage and environmental impacts has not been clearly understood. On the other hand, the structure and diversity of the marine bacterial community may be relatively robust under a certain strength of environmental impact. To evaluate coral environment conditions, it is necessary to investigate the relationship between the marine bacterial composition in coral reefs and environmental factors. In this study, a time-scale network analysis was developed and applied to marine environmental data to investigate the relationships among coral, bacterial community compositions and environmental factors. Seawater samples were collected fifteen times from November 2014 to May 2016 at two locations, Ishikawabaru and South of Sesoko, on Sesoko Island, Okinawa. Physicochemical factors such as temperature, photosynthetically active radiation, dissolved oxygen, turbidity, pH, salinity, chlorophyll, dissolved organic matter and depth were measured in the coral reef area. The metagenome and metatranscriptome in the seawater of the coral reef were analyzed as the biological factors. Metagenome data were used to clarify the marine bacterial community composition. In addition, functional gene composition was estimated from the metatranscriptome. To infer the relationships between physicochemical and biological factors, cross-correlation analysis was applied to the time-scale data. Even though cross-correlation coefficients usually include time-precedence information, they also include indirect interactions between the variables. To elucidate the direct regulations between both sets of factors, partial correlation coefficients were combined with cross-correlation.
This analysis was performed on all parameters: the bacterial composition, the functional gene composition and the physicochemical factors. As a result, the time-scale network analysis revealed direct regulation of seawater temperature by photosynthetically active radiation. In addition, the concentration of dissolved oxygen regulated the chlorophyll value. Such reasonable regulatory relationships between environmental factors reveal part of the mechanisms at work in the coral reef area.
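The combination of lagged cross-correlation and partial correlation described above can be sketched as follows. This is a generic illustration of the statistics, not the study's implementation, and the two short series are synthetic:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cross_corr(x, y, lag):
    """corr(x[t], y[t+lag]); a positive lag means x precedes y."""
    if lag < 0:
        return cross_corr(y, x, -lag)
    return pearson(x[:len(x) - lag], y[lag:])

def partial_corr(x, y, z):
    """Correlation of x and y with the linear effect of z removed."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Synthetic example: a "temperature" series follows a "radiation" series one step later
par = [1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3, 4]
temp = [0] + par[:-1]
print(round(cross_corr(par, temp, 1), 3))
```

A strong correlation at a positive lag suggests precedence, and the partial correlation then screens out variables whose apparent link is mediated by a third factor.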

Keywords: coral environment, marine microbiology, network analysis, omics data analysis

Procedia PDF Downloads 255
1326 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. 
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
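The cost argument above (one adjoint solve serving every parameter, versus one extra forward solve per derivative for finite differences) can be illustrated on a toy parameterized linear system. This is a generic sketch of the adjoint-state idea, not the paper's 1.5D formulation; the matrix, source and measurement functional are invented:

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def A_of(p):
    """Hypothetical parameter-dependent system matrix (stand-in for a
    discretized potential equation; the dependence on p is invented)."""
    return [[2.0 + p, -1.0], [-1.0, 2.0]]

dA_dp = [[1.0, 0.0], [0.0, 0.0]]   # exact derivative of A_of w.r.t. p
b = [1.0, 0.0]                     # source term
c = [0.0, 1.0]                     # measurement picks the second unknown
p = 0.5

A = A_of(p)
u = solve2(A, b)                                     # one forward solve
At = [[A[j][i] for j in range(2)] for i in range(2)]
lam = solve2(At, c)                                  # one adjoint solve

# dm/dp = -lambda^T (dA/dp) u: reusable for every parameter at no extra solve
dAu = [sum(dA_dp[i][j] * u[j] for j in range(2)) for i in range(2)]
adjoint_grad = -sum(lam[i] * dAu[i] for i in range(2))

# Finite-difference check: needs one additional forward solve per parameter
h = 1e-6
fd_grad = (solve2(A_of(p + h), b)[1] - u[1]) / h
print(adjoint_grad, fd_grad)
```

The two gradients agree to finite-difference accuracy, while the adjoint value is exact and its cost does not grow with the number of parameters.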

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 179
1325 The Good Form of a Sustainable Creative Learning City Based on “The Theory of a Good City Form” by Kevin Lynch

Authors: Fatemeh Moosavi, Tumelo Franck Nkoshwane

Abstract:

Peter Drucker, the renowned management guru, once said, “The best way to predict the future is to create it.” Mr. Drucker is also the man who placed human capital as the most vital resource of any institution. As such, any institution bent on creating a better future requires competent human capital, one that is able to execute with efficiency and effectiveness the objectives a society aspires to. Technology today is accelerating the rate at which many societies transition to knowledge-based societies. In this accelerated paradigm, it is imperative that those in leadership establish a platform capable of sustaining the planned future: intellectual capital. The capitalist economy of the future will be sustained not just by dollars and cents, but by individuals who possess the creativity to enterprise, innovate and create wealth from ideas. This calls for cities of the future to have this premise at the heart of their future plans if the objective of designing sustainable and liveable future cities is to be realised. The knowledge economy, now transitioning to the creative economy, requires cities of the future to be ‘gardens’ of inspiration, places where knowledge, creativity, and innovation can thrive, as these instruments are becoming critical assets for creating wealth in the new economic system. Developing nations must accept that learning is a lifelong process that requires keeping abreast of change, and should invest in teaching people how to keep learning. The need to continuously update one’s knowledge turns these cities into vibrant societies, where new ideas create knowledge that in turn enriches the quality of life of the residents. Cities of the future must have as one of their objectives the ability to motivate their citizens to learn, share knowledge, evaluate the knowledge and use it to create wealth for a just society.
The five functional factors suggested by Kevin Lynch (vitality, meaning/sense, adaptability, access, and control and monitoring) should form the basis on which policy makers and urban designers base their plans for future cities. The authors of this paper believe that developing nations should establish “creative economy clusters”: cities where creative industries drive the need for constant new knowledge, creating sustainable creative learning cities. Obviously, the form, shape and size of these districts should be cognisant of the environmental, cultural and economic characteristics of each locale. Gaborone city in the Republic of Botswana is presented as the case study for this paper.

Keywords: learning city, sustainable creative city, creative industry, good city form

Procedia PDF Downloads 312
1324 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms have shown high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries related to microorganisms. The massive information, both stated and hidden, in the biofilm literature is growing exponentially; therefore, it is not possible for researchers and practitioners to manually extract and relate information from different written resources. The current work therefore proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e. free text. We therefore considered an unsupervised approach, where no annotated training data are necessary, and using this approach we developed a system that classifies the text on the basis of growth and development, drug effects, radiation effects, and the classification and physiology of biofilms. For this, a two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as RapidMiner v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. We applied this unsupervised approach, the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, to the extracted datasets to develop classifiers, using WinPython 64-bit v3.5.4.0Qt5 and RStudio v0.99.467, which automatically classify the text into the mentioned sets.
The developed classifiers were tested on a large data set of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited to semi-automatic labeling of the extracted relations. All information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
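As a rough illustration of the label-free, keyword-driven classification step: the category term lists below are invented stand-ins for the metathesaurus vocabulary, and this pipeline is far simpler than the RapidMiner/pubmed.mineR workflow used in the study:

```python
# Hypothetical keyword buckets for four of the topic sets named above;
# real metathesaurus term lists would be much larger.
CATEGORY_TERMS = {
    "growth and development": {"growth", "development", "formation", "matrix"},
    "drug effects": {"antibiotic", "drug", "resistance", "treatment"},
    "radiation effects": {"radiation", "uv", "irradiation"},
    "physiology": {"metabolism", "physiology", "quorum", "signaling"},
}

def classify(text):
    """Assign a document to the category whose keyword set it overlaps most."""
    tokens = set(text.lower().split())
    scores = {cat: len(tokens & terms) for cat, terms in CATEGORY_TERMS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("biofilm antibiotic resistance after treatment"))
```

Bucketing documents this way yields the kind of weak labels that can then seed semi-automatic curation of the extracted relations.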

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 174
1323 Job Resource, Personal Resource, Engagement and Performance with Balanced Score Card in the Integrated Textile Companies in Indonesia

Authors: Nurlaila Effendy

Abstract:

Companies in Asia face a number of constraints from the tight competition of the ASEAN Economic Community 2015 and globalization. An economic capitalist system, as an integral part of the globalization process, brings broad impacts. Companies need to improve business performance under globalization and the ASEAN Economic Community. Organizational development has quite clearly demonstrated that aligning individuals’ personal goals with the goals of the organization translates into measurable and sustained performance improvement. Human capital is key to achieving company performance. Through employee engagement (EE), employees create and express themselves physically, cognitively and emotionally to achieve company goals and individual goals. One experiences total involvement when undertaking one’s job and feeling integrated into the job and organization. A leader plays a key role in attaining the goals and objectives of a company or organization. Any manager in a company needs leadership competence and a global mindset. As one of the positive organizational behavior developments, psychological capital (PsyCap) is assumed to be one of the most important capitals for the global mindset, in addition to intellectual capital and social capital. Textile companies also face a number of constraints from tight regional and global competition. This research involved 42 managers in two textile companies and a spinning company in a group in Central Java, Indonesia. It is a quantitative study with Partial Least Squares (PLS), examining job resources (social support and organizational climate) and personal resources (the four dimensions of psychological capital and leadership competence) as predictors of employee engagement, and employee engagement and leadership competence as predictors of the leader’s performance. The performance of a leader is measured by achievement of objective strategies across four perspectives (financial and non-financial) in a Balanced Scorecard (BSC).
The study took one year, from January to December 2014, during the business plan of 2014. The results show a correlation between job resources (coefficient of social support 0.036; of organizational climate 0.220) and personal resources (coefficient of PsyCap 0.513; of leadership competence 0.249) with employee engagement, and a correlation between employee engagement (coefficient 0.279) and leadership competence (coefficient 0.581) with performance.

Keywords: organizational climate, social support, psychological capital, leadership competence, employee engagement, performance, integrated textile companies

Procedia PDF Downloads 437
1322 Microglia Activation in Animal Model of Schizophrenia

Authors: Esshili Awatef, Manitz Marie-Pierre, Eßlinger Manuela, Gerhardt Alexandra, Plümper Jennifer, Wachholz Simone, Friebe Astrid, Juckel Georg

Abstract:

Maternal immune activation (MIA) resulting from maternal viral infection during pregnancy is a known risk factor for schizophrenia. The neural mechanisms by which maternal infections increase the risk for schizophrenia remain unknown, although the prevailing hypothesis argues that activation of the maternal immune system induces changes in the maternal-fetal environment that might interact with fetal brain development. This may lead to activation of fetal microglia, inducing long-lasting functional changes in these cells. Based on post-mortem analyses showing an increased number of activated microglial cells in patients with schizophrenia, it can be hypothesized that these cells contribute to disease pathogenesis and may be actively involved in the gray matter loss observed in such patients. In the present study, we hypothesize that prenatal treatment with the inflammatory agent Poly(I:C) during embryogenesis contributes to microglial activation in the offspring, which may therefore represent a contributing factor to the pathogenesis of schizophrenia and underlines the need for new pharmacological treatment options. Pregnant rats were treated with a single intraperitoneal injection of Poly(I:C) or saline on gestation day 17. Brains of control and Poly(I:C) offspring were removed and cut into 20-μm-thick coronal sections using a cryostat. Brain slices were fixed and immunostained with an Iba1 antibody. Subsequently, Iba1 immunoreactivity was detected using a goat anti-rabbit secondary antibody. The sections were viewed and photographed under a microscope. The immunohistochemical analysis revealed an increase in microglia cell number in the prefrontal cortex of offspring of Poly(I:C)-treated rats as compared to controls injected with NaCl. However, no significant differences in microglia activation were observed in the cerebellum among the groups.
Prenatal immune challenge with Poly(I:C) was thus able to induce long-lasting changes in the offspring brains, leading to higher activation of microglia cells in the prefrontal cortex, a brain region critical for many higher brain functions, including working memory and cognitive flexibility, and one which might be implicated in possible changes in cortical neuropil architecture in schizophrenia. Further studies will be needed to clarify the association between microglial cell activation and schizophrenia-related behavioral alterations.

Keywords: microglia, neuroinflammation, Poly(I:C), schizophrenia

Procedia PDF Downloads 420
1321 Effects of Ubiquitous 360° Learning Environment on Clinical Histotechnology Competence

Authors: Mari A. Virtanen, Elina Haavisto, Eeva Liikanen, Maria Kääriäinen

Abstract:

Rapid technological development and digitalization have also affected higher education. During the last twenty years, multiple electronic and mobile learning (e-learning, m-learning) platforms have been developed and have become prevalent in many universities and in all fields of education. Ubiquitous learning (u-learning) is not as widely known or used. Ubiquitous learning environments (ULE) are the new era of computer-assisted learning. They are based on ubiquitous technology and computing that fuse the learner seamlessly into the learning process by using sensing technology such as tags, badges or barcodes and smart devices like smartphones and tablets. A ULE combines real-life learning situations with virtual aspects and can be used flexibly anytime and anyplace. The aim of this study was to assess the effects of a ubiquitous 360° learning environment on higher education students’ clinical histotechnology competence. A quasi-experimental study design was used. 57 students in a biomedical laboratory science degree program were assigned voluntarily to an experimental (n=29) and a control group (n=28). The experimental group studied via the ubiquitous 360° learning environment and the control group via a traditional web-based learning environment (WLE) in an 8-week educational intervention. The ubiquitous 360° learning environment combined an authentic learning environment (a histotechnology laboratory), a digital environment (a virtual laboratory), a virtual microscope, multimedia learning content, interactive communication tools, an electronic library and quick response barcodes placed in the authentic laboratory. The web-based learning environment contained equal content and components, with the exception of the mobile device, the interactive communication tools and the quick response barcodes. Competence in clinical histotechnology was assessed using a knowledge test and a self-report developed for this study.
Data were collected electronically before and after the clinical histotechnology course and analysed using descriptive statistics. Differences within groups were identified using the Wilcoxon test and differences between groups using the Mann-Whitney U-test. Statistically significant differences were identified within both groups (p<0.001): competence scores in the post-test were higher than in the pre-test in both groups. Differences between groups were very small and not statistically significant. In this study, the learning environment was developed based on 360° technology and successfully implemented in a higher education context, and students’ competence increased when the ubiquitous learning environment was used. In the future, ULE can be used as a learning management system for any learning situation in the health sciences. More studies are needed to show differences between ULE and WLE.
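The Mann-Whitney U statistic used for the between-group comparisons reduces to a rank-sum computation; a minimal sketch with tie handling via average ranks (the score lists are invented, not the study's data):

```python
def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """Smaller of the two U statistics for independent samples a and b."""
    r = average_ranks(list(a) + list(b))
    r1 = sum(r[:len(a)])
    u1 = r1 - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)

# Hypothetical post-test scores for the two groups
print(mann_whitney_u([18, 21, 19, 24, 22], [17, 20, 18, 19, 21]))
```

A statistics package would additionally convert U to a p-value; the statistic itself is just this rank arithmetic.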

Keywords: competence, higher education, histotechnology, ubiquitous learning, u-learning, 360°

Procedia PDF Downloads 287
1320 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process also described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated onto the tape, through the application of heat. The stitching method is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters; these were then tested in tension using a tensile testing machine.
Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique. It analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; it was found that there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 157
1319 Urban Waste Water Governance in South Africa: A Case Study of Stellenbosch

Authors: R. Malisa, E. Schwella, K. I. Theletsane

Abstract:

Due to climate change, population growth and rapid urbanization, the demand for water in South Africa is inevitably surpassing supply. To address similar challenges globally, there has been a paradigm shift from a conventional urban waste water management “government” paradigm to a “governance” paradigm. From the governance paradigm, the Integrated Urban Water Management (IUWM) principle emerged. This principle emphasizes efficient urban waste water treatment and the production of high-quality recyclable effluent, thereby mimicking natural water systems in their processes of recycling water efficiently and averting the depletion of natural water resources. The objective of this study was to investigate the drivers of shifting the current urban waste water management approach from a “government” paradigm towards “governance”. The study was conducted through the Interactive Management soft systems research methodology, which follows a qualitative research design. A case study methodology was employed, guided by a realism research philosophy. The qualitative data gathered were analyzed through interpretative structural modelling using Concept Star for Professionals Decision-Making tools (CSPDM) version 3.64. The constructed model deduced that the main drivers in shifting Stellenbosch municipal urban waste water management towards IUWM “governance” principles are mainly social elements, characterized by overambitious public expectations of municipal water service delivery, misinterpretation of the constitutional right of access to adequate clean water and sanitation, and the perceptions of different communities on recycled water. Inadequate public participation also emerged as a strong driver. However, disruptive events such as drought may play a positive role in raising awareness of the value of water, resulting in a shift in perceptions of recycled water.
Once the social elements are addressed, the alignment of governance and administration elements towards IUWM is achievable. Hence, the point of departure for the desired paradigm shift is a change in the perceptions and behaviors of water service authorities and serviced communities towards shifting urban waste water management approaches from the “government” to the “governance” paradigm.

Keywords: integrated urban water management, urban water system, wastewater governance, wastewater treatment works

Procedia PDF Downloads 159
1318 Transnationalism and the Temporality of Naturalized Citizenship

Authors: Edward Shizha

Abstract:

Citizenship is not only political; it is also a socio-cultural status that naturalized immigrants desire. However, the outcomes of this desire are determined by forces outside the individual’s control, based on legislation and laws designed at the macro- and exosystemic levels by politicians and policy makers. These laws are then applied to determine the status (permanency or temporariness) of citizenship for immigrants and refugees, but the same laws do not apply to non-immigrant citizens, who attain citizenship by birth. While citizenship has theoretically been considered an irrevocable legal status, and the highest and most secure legal status one can hold in a state, it is not inviolate for immigrants. While Article 8 of the United Nations Convention on the Reduction of Statelessness provides grounds for revocation of citizenship obtained by immigrants and refugees in host countries, nation-states have their own laws, tied to the convention, that provide grounds for revocation. Ever since the 9/11 attacks in the USA, there has been a rise in conditional citizenship and in the state’s withdrawal of citizenship through revocation laws that denaturalize citizens, who end up not merely losing their citizenship but also the right to reside in the country of immigration. Because immigrants can be perceived as a security threat, citizenship has been securitized, and legislative changes have been adopted specifically to allow greater discretionary power in stripping people of their citizenship. The paper ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship examines literature on the temporality of naturalized citizenship and questions whether citizenship, for newcomers (immigrants and refugees), is a protected human right or a privilege. The paper argues that citizenship in a host country is a well sought-after status among newcomers.
The question is whether their citizenship, if granted, has a permanent or temporary status and whether it is treated in the same way as that of non-immigrant citizens. The paper further argues that, although citizenship has generally been considered an irrevocable status in most Western countries, in practice, if not in law, citizenship for immigrants and refugees comes with strings attached because of the policies and laws that control naturalized citizenship. These laws can be used to denationalize naturalized citizens through revocations aimed at those stigmatized as ‘undesirables’, who are threatened with deportation. Whereas non-immigrant citizens (those who attain citizenship by birth) have an absolute right to their citizenship, this is seldom the case for immigrants. This paper takes a multidisciplinary approach, using the macrosystem and exosystem of Urie Bronfenbrenner’s ecological systems theory, to examine and review literature on the temporality of naturalized citizenship and to question whether citizenship is a protected right or a privilege for immigrants. The paper challenges citizenship revocation as a human rights violation and argues for equal treatment of all citizens regardless of how they acquired their citizenship. The fragility of naturalized citizenship undermines the basic rights and securities that citizenship status can provide to the person as an inclusive practice in a diverse society.

Keywords: citizenship, citizenship revocation, dual citizenship, human rights, naturalization, naturalized citizenship

Procedia PDF Downloads 79
1317 Changes in Cognition of Elderly People: A Longitudinal Study in Kanchanaburi Province, Thailand

Authors: Natchaphon Auampradit, Patama Vapattanawong, Sureeporn Punpuing, Malee Sunpuwan, Tawanchai Jirapramukpitak

Abstract:

Longitudinal studies of cognitive impairment in the elderly are necessary for health promotion and development. The purposes of this study were (1) to examine changes in the cognition of the elderly over time and (2) to examine the impacts of changes in social determinants of health (SDH) on changes in cognition, using secondary data from the Kanchanaburi Demographic Surveillance System (KDSS) of the Institute for Population and Social Research (IPSR), which contains longitudinal data on individuals, households, and villages. Two projects were selected: the Health and Social Support for Elderly in KDSS in 2007 and the Population, Economic, Social, Cultural, and Long-term Care Surveillance for Thai Elderly People’s Health Promotion in 2011. The sample was 586 elderly people who participated in both projects. SDH included living arrangement; social relationships with children, relatives, and friends; household asset-based wealth index; household monthly income; loans for living; loans for investment; and working status. Cognitive impairment was measured by category fluency and delayed recall. This study employed a Generalized Estimating Equation (GEE) model to investigate changes in cognition, taking SDH and other variables such as age, gender, marital status, education, and depression into the model. An unstructured correlation structure was selected for the analysis. The results revealed that 24 percent of the elderly had cognitive impairment at baseline. About 13 percent still had cognitive impairment from 2007 through 2011. About 21 percent and 11 percent showed cognitive decline and cognitive improvement, respectively. The cross-sectional analysis showed that household asset-based wealth index, social relationship with friends, working status, age, marital status, education, and depression were significantly associated with cognitive impairment. 
The GEE model revealed longitudinal effects of the household asset-based wealth index and working status on cognition from 2007 through 2011. There was no longitudinal effect of social conditions on cognition. Elderly people living in households with a higher asset-based wealth index, those still employed, and younger respondents were less likely to have cognitive impairment. The results strongly suggest that a poorer household asset-based wealth index and unemployment serve as risk factors for cognitive impairment over time. Increasing age also remained the major risk factor for cognitive impairment.
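The four two-wave trajectories reported above (stable normal, stable impaired, decline, improvement) can be tabulated directly from binary impairment indicators. The sketch below assumes a simple boolean encoding of the two survey waves and is illustrative only, not code from the study:

```python
def classify_cognition_change(baseline_impaired, followup_impaired):
    """Classify each respondent's two-wave cognition trajectory.

    Inputs are parallel lists of booleans (True = cognitively impaired,
    e.g. below a cut-off on category fluency and delayed recall).
    Returns counts of the four trajectories described in the abstract.
    """
    counts = {"stable_normal": 0, "stable_impaired": 0,
              "decline": 0, "improvement": 0}
    for t0, t1 in zip(baseline_impaired, followup_impaired):
        if t0 and t1:
            counts["stable_impaired"] += 1   # impaired at both waves
        elif not t0 and t1:
            counts["decline"] += 1           # became impaired by follow-up
        elif t0 and not t1:
            counts["improvement"] += 1       # recovered by follow-up
        else:
            counts["stable_normal"] += 1     # unimpaired at both waves
    return counts
```

Percentages such as the 13% stable-impaired and 21% decline figures follow by dividing each count by the sample size.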

Keywords: changes in cognition, cognitive impairment, elderly, KDSS, longitudinal study

Procedia PDF Downloads 145
1316 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices

Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto

Abstract:

The paper aims to identify and discuss the impact and results of ex-ante, mid-term, and ex-post evaluation initiatives in Brazilian Social Enterprises from the point of view of entrepreneurs and investors, highlighting the processes involved in these activities and their aftereffects. The study used a descriptive, primarily qualitative methodology. A multiple-case study design was adopted, and semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship, and (v) green tech fields, as well as three representatives of impact investments in the (i) venture capital, (ii) loan, and (iii) equity interest areas. Convenience (non-probabilistic) sampling was adopted to select both the businesses and the investors, who voluntarily contributed to the research. Evaluation is still incipient in most of the studied businesses. Some stand out by adopting well-known methodologies such as the Global Impact Investing Rating System (GIIRS) but still have much to improve in several respects. Most of these enterprises rely on non-experimental research conducted by their own employees, which some authors in the area do not regard as the 'gold standard'. Nevertheless, from the entrepreneurs' point of view, most of them include these routines to some extent in their day-to-day activities, despite the general difficulties of running the business. The investors, in turn, have no overall guidelines for establishing evaluation initiatives in the enterprises they fund; there is a mechanism of trust, and this is usually enough to demonstrate impact to all stakeholders. The work concludes that there is a large gap between what the literature states as best practice in these businesses and what the enterprises really do. 
Evaluation initiatives must be included, to some extent, in all enterprises in order to confirm the social impact they claim to realize. The development and adoption of more flexible evaluation mechanisms, which take into account the complexity of these businesses’ routines, are recommended here. The findings also suggest important implications for the field of Social Enterprises, whose practices are far from what the theory preaches. They highlight a risk to the legitimacy of enterprises that identify themselves as having 'social impact', sometimes without proper proof based on causality data. This makes the field of social entrepreneurship fragile and susceptible to questioning, weakening the ecosystem as a whole. The top priorities of these enterprises must therefore be handled together with results and impact measurement activities. Further investigation is likewise recommended into the trade-offs between impact and profit, as well as into gender, entrepreneurs’ motivations for identifying as Social Enterprises, and the possible unintended consequences of these businesses.

Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem

Procedia PDF Downloads 122
1315 Effect of Laser Ablation OTR Films and High Concentration Carbon Dioxide for Maintaining the Freshness of Strawberry ‘Maehyang’ for Export in Modified Atmosphere Condition

Authors: Hyuk Sung Yoon, In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang

Abstract:

This study was conducted to improve the storability of strawberry 'Maehyang' for export by using suitable laser-ablated oxygen transmission rate (OTR) films and high-concentration carbon dioxide. The strawberries were grown hydroponically in Gyeongsangnam-do Province and packed in laser-ablated OTR films (Daeryung Co., Ltd.) with different transmission rates: 1,300 cc, 20,000 cc, 40,000 cc, 80,000 cc, and 100,000 cc•m-2•day•atm. A CO2 injection (30%) treatment used the 20,000 cc•m-2•day•atm OTR film, and a perforated film served as the control. Temperature conditions simulated shipping and distribution from Korea to Singapore: storage at 3 ℃ (13 days), 10 ℃ (one hour), and 8 ℃ (7 days), for 20 days in total. The fresh weight loss rate remained under the 1% maximum permissible weight loss in all OTR film treatments except the perforated-film control. The carbon dioxide concentration within the packages stayed below the maximum tolerated level (15%) throughout storage; in the higher-OTR treatments (20,000 cc to 100,000 cc), it remained below 3%. The 1,300 cc film maintained a suitable carbon dioxide range (over 5% and under 15%) from 5 days after storage until the end of the experiment, while the CO2 injection treatment dropped quickly toward 15% after 1 day of storage and then held around 15% for the rest of storage. The oxygen concentration stayed between 10 and 15% in the 1,300 cc and CO2 injection treatments but between 19 and 21% in the other treatments. Ethylene concentration was much higher in the CO2 injection treatment than in the OTR treatments; among the OTR treatments, the 1,300 cc film showed the highest ethylene concentration and the 20,000 cc film the lowest. Firmness was maintained best in the 1,300 cc treatment, with no significant differences among the other OTR treatments. Visual quality was best in the 20,000 cc treatment, which remained marketable up to 20 days after storage. 
The 20,000 cc and perforated films performed better than the other treatments in off-odor, whereas the 1,300 cc and CO2 injection treatments produced a strong off-odor that persisted even after 10 minutes. Judging by the difference between the Hunter 'L' and 'a' values measured with a chroma meter, color development was delayed in the 1,300 cc and CO2 injection treatments, while the other treatments showed no significant differences. The results indicate that freshness was best maintained at 20,000 cc•m-2•day•atm. Although the 1,300 cc and CO2 injection treatments produced an appropriate MA condition, they showed darkening of the strawberry calyx and excessive suppression of coloring due to the high carbon dioxide concentration during storage. The 1,300 cc and CO2 injection treatments had been considered appropriate for export to Singapore, but the results showed otherwise. These results reflect the cultivar characteristics of strawberry 'Maehyang'.

Keywords: carbon dioxide, firmness, shelf-life, visual quality

Procedia PDF Downloads 400
1314 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization

Authors: Wenqi Liu, Reginald Bailey

Abstract:

This study proposes a comprehensive and effective approach to business-to-business (B2B) sales forecasting by integrating advanced machine learning models with a rule-based decision-making framework. The methodology addresses the critical challenge of optimizing sales pipeline performance and improving conversion rates through predictive analytics and actionable insights. The first component involves developing a classification model to predict the likelihood of conversion, aiming to outperform traditional methods such as logistic regression in terms of accuracy, precision, recall, and F1 score. Feature importance analysis highlights key predictive factors, such as client revenue size and sales velocity, providing valuable insights into conversion dynamics. The second component focuses on forecasting sales value using a regression model, designed to achieve superior performance compared to linear regression by minimizing mean absolute error (MAE), mean squared error (MSE), and maximizing R-squared metrics. The regression analysis identifies primary drivers of sales value, further informing data-driven strategies. To bridge the gap between predictive modeling and actionable outcomes, a rule-based decision framework is introduced. This model categorizes leads into high, medium, and low priorities based on thresholds for conversion probability and predicted sales value. By combining classification and regression outputs, this framework enables sales teams to allocate resources effectively, focus on high-value opportunities, and streamline lead management processes. The integrated approach significantly enhances lead prioritization, increases conversion rates, and drives revenue generation, offering a robust solution to the declining pipeline conversion rates faced by many B2B organizations. 
Our findings demonstrate the practical benefits of blending machine learning with decision-making frameworks, providing a scalable, data-driven solution for strategic sales optimization. This study underscores the potential of predictive analytics to transform B2B sales operations, enabling more informed decision-making and improved organizational outcomes in competitive markets.
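The triage step that combines the two model outputs can be sketched as a small rule-based function like the one below. The probability and value thresholds, and the function name, are illustrative assumptions, not the thresholds used in the study:

```python
def prioritize_lead(p_convert, predicted_value,
                    p_hi=0.7, p_lo=0.4, value_hi=50_000.0):
    """Rule-based lead triage combining the two model outputs.

    p_convert: conversion probability from the classification model.
    predicted_value: sales value from the regression model.
    Returns "high", "medium", or "low" priority for the sales team.
    """
    # High priority: likely to convert AND worth a large amount.
    if p_convert >= p_hi and predicted_value >= value_hi:
        return "high"
    # Medium priority: a reasonable conversion chance, any value.
    if p_convert >= p_lo:
        return "medium"
    # Low priority: unlikely to convert, regardless of predicted value.
    return "low"
```

In practice the classifier and regressor (e.g. XGBoost models, per the keywords) would be scored over the whole pipeline and this function applied per lead to drive resource allocation.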

Keywords: machine learning, XGBoost, regression, decision making framework, system engineering

Procedia PDF Downloads 27
1313 Wave Powered Airlift Pump for Primarily Artificial Upwelling

Authors: Bruno Cossu, Elio Carlo

Abstract:

The invention (patent pending) relates to the field of devices that harness wave energy (WEC), especially for artificial upwelling, forced downwelling, and the production of compressed air. In its basic form, the pump consists of a hydro-pneumatic machine, driven by wave energy, that has no moving mechanical parts and is made up of only two structural components: a hollow body, open at the bottom to the sea and partially immersed in seawater, and a tube, joined to it to form a single body. The hollow body is shaped like a mushroom whose cap and stem are hollow; the stem is open at both ends, and the lower part of its surface is crossed by holes; the tube is external and coaxial to the stem and is joined to it so as to form a single body. This shape of the hollow body and this type of connection to the tube allow the pump to operate simultaneously as an air compressor (OWC) on the cap side and as an airlift on the stem side. The pump can be implemented in four versions, each with different variants and methods of implementation: 1) for the artificial upwelling of cold, deep ocean water; 2) for lifting and transferring these waters to the place of use (above all, fish farming plants), even kilometres away; 3) for the forced downwelling of surface seawater; 4) for the forced downwelling of surface water, its oxygenation, and the simultaneous production of compressed air. 
The transfer of deep water, or the downwelling of raised surface water (pump versions 2 and 3 above), is obtained by making the water raised by the airlift flow into the upper inlet of another pipe, internal or adjoined to the airlift. The downwelling of raised surface water with oxygenation and simultaneous production of compressed air (pump version 4) is obtained by installing a venturi tube on the upper end of the pipe, with its restricted section connected to the external atmosphere, so that it also operates as a hydraulic air compressor (trompe). Furthermore, by combining one or more pumps for the upwelling of cold, deep water with one or more pumps for the downwelling of warm surface water, the system can supply the cold and warm water required for the operation of an Ocean Thermal Energy Conversion plant. This allows the thermal energy of the seawater treated in the process to be used for the purposes indicated in points 1 to 4, in addition to the mechanical energy of the waves, without increased costs.

Keywords: air lifted upwelling, fish farming plant, hydraulic air compressor, wave energy converter

Procedia PDF Downloads 148
1312 21st Century Business Dynamics: Acting Local and Thinking Global through eXtensible Business Reporting Language (XBRL)

Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro

Abstract:

In the present dynamic business environment of corporate governance and regulation, financial reporting is an inevitable and extremely significant process for every business enterprise. Financial documents such as annual reports, quarterly reports, ad-hoc filings, and other statutory/regulatory reports provide vital information to investors and regulators and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, which generate their external financial documents manually, resulting in a high degree of errors and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, filing reports late, and failing to comply with regulatory acts. There is a lack of a common platform to manage the sensitive information, both internal and external, in financial reports. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) continues to develop in the face of challenges and has now reached the point where many of its promised benefits are available. This paper looks at the emergence of this revolutionary twenty-first-century language of digital reporting. It posits that the world is on the brink of an Internet revolution that will redefine the ‘business reporting’ paradigm. The new Internet technology, XBRL, is already being deployed and used across the world. 
It finds that XBRL is an eXtensible Markup Language (XML)-based information format that places self-describing tags around discrete pieces of business information. Once tags are assigned, it is possible to extract only the desired information, rather than having to download or print an entire document. XBRL is platform-independent: it will work on any current or recent operating system, on any computer, and with virtually any software. The paper concludes that corporate stakeholders and governments cannot afford to ignore XBRL. It therefore recommends that all act locally and think globally now by adopting XBRL, which is changing the face of worldwide business reporting.
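A minimal sketch of the self-describing-tag idea, using a hypothetical taxonomy namespace and Python's standard XML parser (real XBRL instances carry far richer context, unit, and taxonomy metadata than shown here):

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical XBRL-style instance document: each business fact
# is wrapped in a self-describing tag (names are illustrative only).
INSTANCE = """
<xbrl xmlns:acme="http://example.com/taxonomy">
  <acme:Revenue contextRef="FY2023" unitRef="USD">1200000</acme:Revenue>
  <acme:NetIncome contextRef="FY2023" unitRef="USD">150000</acme:NetIncome>
</xbrl>
"""

def extract_fact(instance, local_name):
    """Pull a single tagged fact out of the document — the selective
    extraction the abstract describes, instead of reading a whole report."""
    root = ET.fromstring(instance)
    for elem in root:
        # ElementTree expands tags to "{namespace-uri}LocalName".
        if elem.tag.endswith("}" + local_name):
            return int(elem.text)
    return None
```

Because the tags describe the data they wrap, a consumer can ask for just `Revenue` without any knowledge of the document's layout.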

Keywords: XBRL, financial reporting, internet, internal and external reports

Procedia PDF Downloads 290
1311 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

While performing verification analyses under the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies around the world perform these tests in accordance with aviation standards, but the test costs are very high. In addition, because coupons must be produced, coupon materials are expensive, and test times are long, it is necessary to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes integrated into the aircraft. Glass-fiber-reinforced and quartz prepregs were used in the production of the coupons. Simulations of the tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were carried out on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain an exact bearing strength value. The finite element analysis was carried out with the Analysis System (ANSYS). Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which the literature reports to give the most realistic approach. There are various theories within this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. 
When the experimental and numerical results are compared, the initial damage points, the resulting force drop points, the maximum damage load values, and the bearing strength values are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters on the bearing strength of the composite structure were investigated, such as pre-stress, the use of bushings, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions.
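As a rough illustration of the progressive-failure loop described above, the sketch below applies a simplified maximum-stress check (standing in for the Puck criterion, which is considerably more involved) and knocks a failed ply's modulus down to 10% of its value, the reduction coefficient cited in the abstract. The ply data, uniform-strain assumption, and function name are illustrative, not taken from the study:

```python
def progressive_failure(plies, strain_step=1e-4, max_strain=0.05, knockdown=0.10):
    """Progressive failure of a laminate under uniaxial load.

    plies: list of dicts with elastic modulus "E" and strength "S"
    (consistent units). Assumes uniform strain across plies; a ply whose
    stress exceeds its strength "fails", and its modulus is reduced to
    10% (knockdown), mirroring the property-reduction approach above.
    Returns (peak load per unit area, history of (strain, load) points).
    """
    E = [p["E"] for p in plies]
    S = [p["S"] for p in plies]
    failed = [False] * len(plies)
    history, peak, strain = [], 0.0, 0.0
    while strain < max_strain and not all(failed):
        strain += strain_step
        load = 0.0
        for i in range(len(plies)):
            if not failed[i] and E[i] * strain >= S[i]:
                failed[i] = True      # simplified max-stress check
                E[i] *= knockdown     # degrade stiffness, do not delete the ply
            load += E[i] * strain     # failed plies carry a reduced share
        history.append((strain, load))
        peak = max(peak, load)
    return peak, history
```

The force-drop points seen in the test curves correspond to the load falling below its running peak after a ply fails; the peak itself plays the role of the bearing strength limit.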

Keywords: puck, finite element, bolted joint, composite

Procedia PDF Downloads 104