Search results for: pedagogical approaches
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4317

417 “Uninformed” Religious Orientation Can Lead to Violence in Any Given Community: The Case of African Independence Churches in South Africa

Authors: Ngwako Daniel Sebola

Abstract:

Introductory Statement: Religions are necessary as they offer and teach something to their adherents. People in one religion may not have a complete understanding of the Supreme Being (Deity) of a religion other than their own. South Africa, like other countries in the world, is home to various religions, including Christianity. Almost 80% of the South African population adheres to the Christian faith, though across different denominations and sects. Each church fulfils spiritual needs that perhaps others cannot fill. The African Independent Churches form one of the denominational families in the country. These churches arose as a protest against Western forms and expressions of Christianity; their major concern was to develop an indigenous expression of Christianity. The relevance of the African Independent Churches includes addressing the needs of the people holistically. Controlling diseases was an important aspect of change in different historical periods. Through healing services, leaders of African churches are able to attract many followers. The healing power associated with the founders of many African Initiated Churches leads people to follow and respect them as true leaders within many African communities. Despite their strong points, the African Independent Churches, like many others, face a variety of challenges, especially conflicts. Ironically, destructive conflicts have resulted in violence. Such violence demonstrates a lack of informed religious orientation among those concerned. This paper investigates and analyses the causes of conflict and violence in the African Independent Churches. The researcher used the Shembe and International Pentecostal Holiness Churches in South Africa as a point of departure. As a solution to curb violence, the researcher suggests useful strategies for handling conflicts. Methodology: Comparative and qualitative approaches were used as methods of collecting data in this research. The intention is to analyse the similarities and differences of violence among members of the Shembe and International Pentecostal Holiness Churches. Equally important, the researcher aims to obtain data through interviews, questionnaires and focus groups, among others, and to interview fifteen individuals from both churches. Findings: Leadership squabbles and power struggles appear to be the main contributing factors to violence in many Independent Churches. Tragically, violence has resulted in the loss of life and destruction of property, as in the case of the Shembe and International Pentecostal Holiness Churches. Violence is an indication that congregations and some leaders have not been properly equipped to deal with conflict. Concluding Statement: Conflict is a common part of human existence in any given community. The concern is that when such conflict becomes contagious, it leads to violence. There is a need to understand conflict consciously and objectively in order to devise appropriate measures to handle it. Conflict management calls for emotional maturity, self-control, empathy, patience, tolerance and an informed religious orientation.

Keywords: African, church, religion, violence

Procedia PDF Downloads 99
416 Pre-Cooling Strategies for the Refueling of Hydrogen Cylinders in Vehicular Transport

Authors: C. Hall, J. Ramos, V. Ramasamy

Abstract:

Hydrocarbon-fuelled vehicles are a major contributor to air pollution due to the harmful emissions they produce, leading to a demand for cleaner fuel types. A leader in this pursuit is hydrogen: its application in vehicles produces zero harmful emissions, the only by-product being water. To compete with the performance of conventional vehicles, hydrogen gas must be stored on board in cylinders at high pressure (35–70 MPa) and have a short refueling duration (approximately 3 minutes). However, the fast filling of hydrogen cylinders causes a significant rise in temperature due to the combination of the negative Joule-Thomson effect and the compression of the gas. This can lead to structural failure, and therefore a maximum allowable internal temperature of 85°C has been imposed by the International Organization for Standardization. The technological solution to the rapid temperature rise during refueling is to decrease the temperature of the gas entering the cylinder. Pre-cooling of the gas uses a heat exchanger and requires energy for its operation; it is therefore imperative to determine the least energy input required to lower the gas temperature, for cost savings. A validated universal thermodynamic model is used to identify an energy-efficient pre-cooling strategy. The model requires negligible computational time and is applied to previously validated experimental cases to optimize pre-cooling requirements. The pre-cooling characteristics studied are its location within the refueling timeline and its duration. A constant pressure-ramp rate is imposed to eliminate the effects of rapid changes in mass flow rate. A pre-cooled gas temperature of -40°C, the lowest allowable temperature, is applied, and the heat exchanger is assumed to be ideal, with no energy losses. The refueling of the cylinders is modeled with the pre-cooling split into ten percent time intervals, and varying burst durations are applied in both the early and late stages of the refueling procedure. The model shows that pre-cooling in the later stages of the refueling process is more energy-efficient than early pre-cooling. In addition, the efficiency of pre-cooling towards the end of the refueling process is independent of the pressure profile at the inlet. This leads to the hypothesis that pre-cooled gas should be applied as late as possible in the refueling timeline and at very low temperatures. The model showed a 31% reduction in energy demand, whilst achieving the same final gas temperature, for a refueling scenario in which pre-cooling was applied towards the end of the process. Identifying the most energy-efficient refueling approaches whilst adhering to the safety guidelines is imperative to reducing the operating cost of hydrogen refueling stations. Heat exchangers are energy-intensive, so reducing the energy requirement would lead to cost reduction. This investigation shows that pre-cooling should be applied as late as possible and for short durations.
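
For readers who want to experiment with the placement effect described here, the following minimal lumped-parameter sketch is our illustration, not the authors' validated thermodynamic model: it assumes ideal-gas hydrogen, a constant mass flow rate instead of the paper's constant pressure-ramp rate, a single wall node for heat exchange, and invented parameter values. It compares an early and a late pre-cooling window of equal duration, hence equal heat-exchanger duty.

```python
import numpy as np

# Assumed parameters (illustrative only, not from the paper)
V = 0.104                  # tank volume, m^3
R, cp = 4124.0, 14300.0    # hydrogen, ideal-gas approximation, J/(kg K)
cv = cp - R
T_amb, T_cool = 300.0, 233.15          # ambient vs -40 C pre-cooled inlet, K
m0, m_end, t_fill = 0.4, 4.0, 180.0    # start/end mass (kg), fill time (s)
mdot = (m_end - m0) / t_fill
hA, C_wall = 40.0, 2.0e4   # gas-wall conductance (W/K), wall capacity (J/K)

def fill(cool_start, cool_end, dt=0.01):
    """March the lumped energy balance
        m*cv*dT/dt = mdot*(cp*T_in - cv*T) - hA*(T - Tw)
    with one wall node; the inlet is pre-cooled to -40 C only while the
    fill fraction lies in [cool_start, cool_end). Returns the final gas
    temperature (C) and the ideal heat-exchanger duty (MJ)."""
    m, T, Tw, E_cool = m0, T_amb, T_amb, 0.0
    for t in np.arange(0.0, t_fill, dt):
        frac = t / t_fill
        T_in = T_cool if cool_start <= frac < cool_end else T_amb
        if T_in < T_amb:
            E_cool += mdot * cp * (T_amb - T_in) * dt
        q = hA * (T - Tw)
        T += (mdot * (cp * T_in - cv * T) - q) / (m * cv) * dt
        Tw += q / C_wall * dt
        m += mdot * dt
    return T - 273.15, E_cool / 1e6

# Same cooling duration (hence equal cooling energy), different placement.
for win in [(0.0, 0.3), (0.7, 1.0)]:
    Tf, E = fill(*win)
    print(f"cooling window {win}: T_final = {Tf:5.1f} C, duty = {E:.2f} MJ")
```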

Keywords: cylinder, hydrogen, pre-cooling, refueling, thermodynamic model

Procedia PDF Downloads 74
415 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons, like antimicrobial peptides, to strengthen the existing tuberculosis arsenal is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) along with isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb-infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. The dormant M. tb models were prepared by following two approaches and treated with different concentrations of HBD-1 and Pep B. Firstly, 20-22 day old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1 and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Secondly, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1 and Pep B individually. The difference in bacillary load was determined by CFU enumeration. The minimum inhibitory concentrations of HBD-1 and Pep B restricting the growth of mycobacteria in vitro were 2 μg/ml and 20 μg/ml, respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml, respectively. The Nile red positive bacterial population, high MPN/low CFU counts and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration, respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid-fast bacilli surrounded by cellular aggregates and rifampicin resistance indicated the formation of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40 μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicates that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb, and they should be further explored to tap their potential in designing a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 243
414 Coupling Strategy for Multi-Scale Simulations in Micro-Channels

Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier

Abstract:

With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. When the characteristic length scale of the flow narrows to around ten times the mean free path of the gas molecules, the classical fluid mechanics and energy equations remain valid in the bulk flow, but particular attention must be paid to the boundary conditions at the gas/solid interface. Indeed, in the vicinity of the wall, over a thickness of about one mean free path, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Macroscopic models based on velocity slip and temperature jump conditions must therefore be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to dispense with these assumptions, simulations at the molecular scale are carried out to study how the interaction of molecules with walls can change the fluid flow and heat transfer in the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between the molecular and continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfer in the micro-domains, while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in the bulk, to obtain the thermophysical properties (viscosity, conductivity, ...) and the equation of state of the fluid; ii) close to the walls, to identify the relationships between the slip velocity and the shear stress, or between the temperature jump and the normal temperature gradient. The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws giving the behavior of the physical properties, the equation of state and the slip relationships, together with their uncertainties. The latter make it possible to set up a learning strategy that optimizes the number of micro-simulations. In the present contribution, the first results regarding this coupling, associated with the learning strategy, are illustrated through parametric studies of convergence criteria, the choice of basis functions and the noise in the input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
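
As an illustration of the surrogate-plus-learning idea (a sketch under our own assumptions, not the authors' code), the snippet below fits a Bayesian linear regression to slip-velocity data returned by a stand-in for a costly molecular dynamics run, and places each new micro-simulation where the predictive variance is largest. The function md_slip_measurement, the quadratic slip law it encodes, and the basis and noise levels are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def md_slip_measurement(shear_rate):
    """Stand-in for an expensive MD micro-simulation: returns a noisy
    slip velocity for a given wall shear rate (hypothetical law)."""
    return 0.8 * shear_rate + 0.05 * shear_rate**2 + rng.normal(0, 0.05)

def posterior(X, y, alpha=1e-2, beta=400.0):
    """Bayesian linear regression with basis phi(s) = [1, s, s^2]."""
    Phi = np.column_stack([np.ones_like(X), X, X**2])
    S = np.linalg.inv(alpha * np.eye(3) + beta * Phi.T @ Phi)
    w = beta * S @ Phi.T @ y
    return w, S

def predict(s_grid, w, S, beta=400.0):
    Phi = np.column_stack([np.ones_like(s_grid), s_grid, s_grid**2])
    mean = Phi @ w
    var = 1.0 / beta + np.sum(Phi @ S * Phi, axis=1)   # predictive variance
    return mean, var

# Active-learning loop: start from two MD runs, add runs at max variance.
shear = list(rng.uniform(0.0, 2.0, size=2))
slip = [md_slip_measurement(s) for s in shear]
grid = np.linspace(0.0, 2.0, 101)
for _ in range(6):
    w, S = posterior(np.array(shear), np.array(slip))
    _, var = predict(grid, w, S)
    s_next = grid[np.argmax(var)]          # most uncertain shear rate
    shear.append(s_next)
    slip.append(md_slip_measurement(s_next))
print("sampled shear rates:", np.round(shear, 2))
```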

Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling

Procedia PDF Downloads 148
413 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa

Abstract:

Emamectin benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%) and loading capacity (LC%). It is also important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestion. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG), using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions, at pH 3.35 and 7.8 respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer and feed flow). For each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a low EB release under gastric conditions while permitting a major release at intestinal digestion. Two approaches were used for this: the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques preserved the integrity of EB at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed a greater EB release at intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, under these conditions it is possible to reduce microparticle costs, owing to a 60% reduction in EB relative to the optimal amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations: applying DA, costs can be reduced by 21%, although Y, GD and ID show values 9.5%, 84.8% and 2.6% lower than the best condition; in turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
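
A compact sketch of the two optimization approaches named above, with invented response values rather than the paper's data: a Derringer-type desirability score (DA) and a Pareto non-dominance filter (MOO) over candidate formulations described by [Y%, EE%, GD%, ID%].

```python
import numpy as np

# Hypothetical candidate formulations: columns = [Y%, EE%, GD%, ID%].
cands = np.array([[72, 85,  8, 60],
                  [65, 90,  5, 45],
                  [80, 78, 15, 70],
                  [70, 88,  6, 55],
                  [60, 80, 12, 40]], dtype=float)   # last row is dominated

def desirability(row, lo=(50, 60, 2, 30), hi=(95, 98, 25, 90)):
    """Maximize Y, EE, ID; minimize GD; geometric mean of individual d_i.
    The bounds lo/hi are assumed acceptability limits."""
    y, ee, gd, idig = row
    d = [np.clip((y - lo[0]) / (hi[0] - lo[0]), 0, 1),
         np.clip((ee - lo[1]) / (hi[1] - lo[1]), 0, 1),
         np.clip((hi[2] - gd) / (hi[2] - lo[2]), 0, 1),
         np.clip((idig - lo[3]) / (hi[3] - lo[3]), 0, 1)]
    return float(np.prod(d) ** 0.25)

def pareto_mask(points):
    """Non-dominated set, with all objectives converted to 'larger is better'."""
    obj = points * np.array([1, 1, -1, 1])   # flip GD so higher = better
    keep = np.ones(len(obj), dtype=bool)
    for i in range(len(obj)):
        for j in range(len(obj)):
            if i != j and np.all(obj[j] >= obj[i]) and np.any(obj[j] > obj[i]):
                keep[i] = False
                break
    return keep

print("desirability:", [round(desirability(r), 2) for r in cands])
print("Pareto-optimal candidates:", np.where(pareto_mask(cands))[0])
```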

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 100
412 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice computed tomography (CT) scanners are built with detectors composed of scintillator–photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required by front-illuminated diodes. In backlit diodes, the electronic noise is improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation used to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed and crosstalk, that matter for spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 175
411 Combating Corruption to Enhance Learner Academic Achievement: A Qualitative Study of Zimbabwean Public Secondary Schools

Authors: Onesmus Nyaude

Abstract:

The aim of the study was to investigate participants' views on how corruption can be combated to enhance learner academic achievement. The study was undertaken at three selected public secondary institutions in Zimbabwe. It focuses on exploring the views of educators, parents and learners on the role played by corruption in perpetuating the seemingly existing disparities in learner academic achievement across educational institutions. The study further interrogates and examines the nexus between the prevalence of corruption in schools and its subsequent influence on the academic achievement of learners. Corruption is considered a form of social injustice; hence, in Zimbabwe, the general consensus is that it is perceived as so rife that it is overtaking the traditional factors that contribute to the poor academic achievement of learners. Coupled with this have been the issues of gross abuse of power and malpractices emanating from the concealment of essential and official transactions in the conduct of business. By proposing robust anti-corruption mechanisms, the teaching and learning resources poured into schools would be put to good use, preventing the unlawful diversion and misappropriation of the resources in question, which has long been the culture. This study is of paramount significance to curriculum planners, teachers, parents, and learners. The study was informed by the interpretive paradigm; thus, qualitative research approaches were used. Both probability and non-probability sampling techniques were adopted in site and participant selection. A representative sample of 150 participants was used. The study found that the majority of the participants perceived corruption as a social problem and a human rights threat affecting the quality of teaching and learning processes in the education sector. It was established that the prevalence of corruption within institutions results from the perpetual weakening of ethical values and other variables linked to the upholding of 'Ubuntu' among the general citizenry. It was further established that greed and weak systems are major causes of rampant corruption within institutions of learning, manifesting through abuse of power, bribery, misappropriation and embezzlement of material and financial resources. Therefore, there is a great need to collectively address the problem of corruption in educational institutions and society at large. The study additionally concludes that successfully combating corruption will promote the successful moral development of students as well as safeguard their human rights entitlements. The study recommends the adoption of principles of good corporate governance within educational institutions in order to successfully curb corruption. It further recommends the intensification of interventionist strategies, the strengthening of systems in educational institutions, and regular audits to overcome the problems associated with rampant corruption.

Keywords: academic achievement, combating, corruption, good corporate governance, qualitative study

Procedia PDF Downloads 218
410 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The correct allocation of improvement programs has attracted growing interest in recent years. Because of their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be effective and survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources; this research gap is studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, given their importance for lead time reduction, were the mean time between failures and the mean time to repair, with lead time reduction as the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) a time to repair improvement strategy directed at the two capacity constrained resources, and (x) a time between failures improvement strategy directed at the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed and hybrid. Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results; when a large investment in the capacity constrained resources is not possible, companies should use hybrid approaches. An important contribution to the literature is the hybrid approach, which proposes a new way to direct improvement efforts, as is the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization). Another important contribution is the treatment of the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different allocation strategies for improvement programs and different positions of the capacity constrained resources. Finally, both the hybrid time to repair improvement and the hybrid time between failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the flow shop analyzed; future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
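
To make the mechanics concrete, the sketch below uses standard Factory Physics approximations (availability-inflated effective process times and Kingman's VUT queue-time equation, in the Hopp and Spearman form) rather than the authors' System Dynamics-Factory Physics model. It compares a focused MTTR improvement at two assumed CCR positions against a distributed improvement of equal total MTTR-hours; all numbers are invented.

```python
def station_time(t0, mtbf, mttr, ra, ca2=1.0):
    """Cycle time (queue + process) for one station. Availability inflates
    the effective process time; Kingman's VUT equation approximates the
    queue time. Departure-variability propagation is ignored for brevity."""
    A = mtbf / (mtbf + mttr)                   # availability
    te = t0 / A                                # effective process time, h
    ce2 = 1.0 + 2.0 * A * (1 - A) * mttr / t0  # effective SCV (c0^2 = cr^2 = 1)
    u = ra * te                                # utilization
    ct_q = ((ca2 + ce2) / 2.0) * (u / (1.0 - u)) * te
    return ct_q + te

CCR = (3, 5)                                   # assumed CCR positions
ra = 0.85                                      # arrival rate, jobs per hour
# CCRs get a longer natural process time, so they run near 94% utilization.
base = [(1.0, 40.0, 4.0) if i in CCR else (0.9, 40.0, 4.0) for i in range(7)]

# focused: halve MTTR at the two CCRs; distributed: spread the same total
# MTTR-hours of improvement evenly over all seven stations.
focused = [(t0, mtbf, mttr / 2 if i in CCR else mttr)
           for i, (t0, mtbf, mttr) in enumerate(base)]
distributed = [(t0, mtbf, mttr - 4.0 / 7.0) for (t0, mtbf, mttr) in base]

for name, line in [("baseline", base), ("focused", focused),
                   ("distributed", distributed)]:
    lead = sum(station_time(t0, mtbf, mttr, ra) for t0, mtbf, mttr in line)
    print(f"{name:12s} lead time = {lead:6.1f} h")
```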

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 97
409 Teaching Timber: The Role of the Architectural Student and Studio Course within an Interdisciplinary Research Project

Authors: Catherine Sunter, Marius Nygaard, Lars Hamran, Børre Skodvin, Ute Groba

Abstract:

Globally, the construction and operation of buildings contribute up to 30% of annual greenhouse gas emissions. In addition, the building sector is responsible for approximately a third of global waste. In this context, the utilization of renewable resources in buildings, especially materials that store carbon, will play a significant role in the growing city. These are two reasons for introducing wood as a building material of growing relevance; a third is the potential economic value in countries with a forest industry that is not currently used to capacity. In 2013, a four-year interdisciplinary research project titled "Wood Be Better" was created, with the principal goal of producing and publicising knowledge that would facilitate increased use of wood in buildings in urban areas. The research team consisted of architects, engineers, wood technologists and mycologists, from both research institutions and industrial organisations. Five structured work packages were included in the initial research proposal. Work package 2, titled "Design-based research", proposed using architecture master courses as laboratories for systematic architectural exploration. The aim was twofold: to provide students with an interdisciplinary team of experts from consultancies and producers, as well as teachers and researchers, who could offer the latest information on wood technologies; and at the same time to have the studio course test the effects of the use of wood on the functional, technical and tectonic quality of different architectural projects on an urban scale, providing results that could be fed back into the research material. The aim of this article is to examine the successes and failures of this pedagogical approach in an architecture school, as well as the opportunities for greater integration between academic research projects, industry experts and studio courses in the future. This will be done through a set of qualitative interviews with researchers, teaching staff and students of the studio courses held each semester since spring 2013. These will investigate the value of the various experts involved in the course; the different themes of each course; the responses to the urban scale, architectural form and construction detail; the effect of working towards the goals of a research project; and the value of the studio projects to the research. In addition, six sample projects will be presented as case studies. These will show how the projects related to the research and could be collected and further analysed, the innovative solutions that were developed during the course, the different architectural expressions that were enabled by timber, and how projects were used as an interdisciplinary testing ground for integrated architectural and engineering solutions between the participating institutions. The conclusion will reflect on the original intentions of the studio courses, the opportunities and challenges faced by students, researchers and teachers, the educational implications, and the transparent and inclusive discourse between the architectural researcher, the architecture student and the interdisciplinary experts.

Keywords: architecture, interdisciplinary, research, studio, students, wood

Procedia PDF Downloads 285
408 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device Anaconda™ (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The Anaconda™ device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; this choice of building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set, a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model ran in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure combining thin scaffolding and fabric has been demonstrated to be feasible, and the capability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
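
The validation metric described above reduces to a simple post-processing computation. The sketch below uses made-up ring centroid coordinates, not the study's measurements, and evaluates per-ring longitudinal and transverse distances against the 5 mm bound.

```python
import numpy as np

# Hypothetical ring centroids (mm): rows are rings, columns are x, y, z,
# with z taken along the vessel axis.
exp_rings = np.array([[0.0, 0.0, 10.0],
                      [0.5, 0.2, 25.0],
                      [1.1, 0.4, 40.0]])   # experiment (3D-printed aneurysm)
sim_rings = np.array([[0.3, 0.1, 10.8],
                      [0.9, 0.1, 24.1],
                      [1.8, 0.9, 41.2]])   # FE model

diff = sim_rings - exp_rings
longitudinal = np.abs(diff[:, 2])                  # along the vessel axis
transverse = np.linalg.norm(diff[:, :2], axis=1)   # in-plane offset

print("longitudinal: mean %.2f mm, max %.2f mm" %
      (longitudinal.mean(), longitudinal.max()))
print("transverse:   mean %.2f mm, max %.2f mm" %
      (transverse.mean(), transverse.max()))
print("within 5 mm bound:", bool((longitudinal < 5).all() and
                                 (transverse < 5).all()))
```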

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 170
407 ATR-IR Study of the Mechanism of Aluminum Chloride Induced Alzheimer Disease - Curative and Protective Effect of Lepidium sativum Water Extract on Hippocampus Rats Brain Tissue

Authors: Maha J. Balgoon, Gehan A. Raouf, Safaa Y. Qusti, Soad S. Ali

Abstract:

The main cause of Alzheimer disease (AD) is believed to be the accumulation of free radicals owing to oxidative stress (OS) in brain tissue. The mechanism of neurotoxicity of aluminum chloride (AlCl3)-induced AD in the hippocampus of albino Wistar rat brain tissue, together with the curative and protective effects of a Lepidium sativum (LS) water extract, was assessed after 8 weeks by attenuated total reflection infrared (ATR-IR) spectroscopy and histologically by light microscopy. The ATR-IR results revealed that the membrane phospholipids undergo free radical attack, mediated by AlCl3, primarily affecting the polyunsaturated fatty acids, as indicated by the increase in the olefinic -C=CH sub-band area around 3012 cm-1 obtained from curve fitting analysis. The narrowing of the half band width (HBW) of the symmetric νCH2 sub-band around 2852 cm-1 upon Al intoxication indicates the presence of trans-form fatty acids rather than the gauche rotamer. The degradation of hydrocarbon chains to shorter chain lengths, the increase in membrane fluidity and disorder, and the decrease in lipid polarity in the AlCl3 group were indicated by the changes detected in certain calculated area ratios compared to the control. Administration of LS greatly improved these parameters compared to the AlCl3 group. Al influences Aβ aggregation and plaque formation, which in turn interferes with and disrupts the membrane structure. The results also showed a marked increase in the parallel and antiparallel β-sheet structures that characterize Aβ formation in the Al-induced AD hippocampal brain tissue, indicated by the detected increase in both amide I sub-bands around 1674 and 1692 cm-1. This drastic increase in Aβ formation was greatly reduced in the curative and protective groups compared to the AlCl3 group and approached the control values. These results were also supported by light microscopy: the AlCl3 group showed significant, marked degenerative changes in hippocampal neurons, with most cells appearing small, shrunken and deformed. Interestingly, the administration of LS in the curative and protective groups markedly decreased the number of degenerated cells compared to the non-treated group, the intensity of Congo red-stained cells was decreased, and the hippocampal neurons looked more or less similar to those of the control. This study showed a promising therapeutic effect of the Lepidium sativum (LS) extract on the AD rat model, substantially counteracting the signs of oxidative stress on membrane lipids and reversing the protein misfolding.

Keywords: aluminum chloride, Alzheimer disease, ATR-IR, Lepidium sativum

Procedia PDF Downloads 341
406 Gender and Asylum: A Critical Reassessment of the Case Law of the European Court of Human Rights and of United States Courts Concerning Gender-Based Asylum Claims

Authors: Athanasia Petropoulou

Abstract:

While there is a common understanding that a person's sex, gender, gender identity, and sexual orientation shape every stage of the migration experience, theories of international migration had until recently not explored or incorporated a gender perspective in their analysis. In a similar vein, refugee law has long been the object of criticism for failing to recognize and respond appropriately to women's and sexual minorities' experiences of persecution. The present analysis attempts to depict the challenges faced by the European Court of Human Rights (ECtHR) and U.S. courts when adjudicating cases involving asylum claims with a gendered dimension. By comparing the adjudicating strategies of international and national jurisdictions, the article aims to identify common and distinctive approaches to addressing gender-based claims. The paper argues that, despite the different nature of the judicial bodies and the different legal instruments they respectively apply, judges face similar challenges in this context and often fail to qualify and address the gendered dimensions of asylum claims properly. The ECtHR plays a fundamental role in safeguarding human rights protection in Europe, not only for European citizens but also for people fleeing violence, war, and dire living conditions. However, this role becomes more difficult to fulfill, not only because of the obvious institutional constraints but also because cases related to the claims of asylum seekers concern a domain closely linked to State sovereignty. Amid the current "refugee crisis," risk assessment performed by national authorities, as in the process of asylum determination, is shaped by wider geopolitical and economic considerations. The failure to recognize and duly address the gendered dimension of non-refoulement claims, one of the many shortcomings of these processes, is reflected in the decisions of the ECtHR. As regards U.S. case law, the study argues that U.S. courts either fail to draw any connection between asylum claims and their gendered dimension or tend to approach gender-based claims through the lens of the "political opinion" or "membership of a particular social group" grounds of fear of persecution. This exercise becomes even more difficult given that U.S. asylum law qualifies gender-based claims inadequately. The paper calls for more sociologically informed decision-making practices and for a more contextualized and relational approach to the assessment of the risk of ill-treatment and persecution. Such an approach is essential for unearthing the gendered patterns of persecution and addressing related claims effectively, thus securing the human rights of asylum seekers.

Keywords: asylum, European court of human rights, gender, human rights, U.S. courts

Procedia PDF Downloads 90
405 Catalytic Alkylation of C2-C4 Hydrocarbons

Authors: Bolysbek Utelbayev, Tasmagambetova Aigerim, Toktasyn Raila, Markayev Yergali, Myrzakhanov Maxat

Abstract:

The intensive development of secondary processes for the destructive processing of crude oil has left oil refineries with surplus resources of C2-C4 hydrocarbons. Refinery off-gases likewise consist largely of C2-C4 hydrocarbon gases, part of which is simply burned. These facts have motivated interest in studying the production of alkylate, a component of motor fuels, from C2-C4 hydrocarbons. The purpose of this work was to study the transformation of propane-propene and butane-butene fractions in the presence of a supported ruthenium-chromium catalyst whose carrier is the pillar-structured montmorillonite contained in native bentonite clay. The work also examines the condition and structure of bentonite clay from the South Kazakhstan region of the Republic of Kazakhstan. For the preparation of the supported rhodium catalyst (0.5-1.0 mass % Rh), rhodium chloride (RhCl3·3H2O) was used, with modified bentonite clay as the carrier. To modify the natural clay into its pillar-structured form, polyhydroxy complexes of chromium were used: a sodium hydroxide solution was gradually added, with continuous stirring, to an aqueous solution of chromium chloride up to pH ≈ 3-4. The concentration of chromium chloride was calculated on the basis of 5-30 mmol of Cr3+ per gram of clay. A bentonite suspension (~1.0 mass %) was obtained by intensive washing in water for 4 h; the pH of the aqueous clay extract was 8-9. The acidity of the medium was monitored by means of a digital pH meter (OP-208/1). In order to prevent coagulation of the solution of chromium polyhydroxy complexes, it was added slowly to the clay suspension; a "reserve of basicity" Cr3+/OH- of 1/3 made it possible to prevent coagulation. After aging the treated clay suspension for 24 h, the deposit was washed with water and thickened. After separation from the liquid phase, the sample was dried, first at room temperature and then at 110°C (2 h), with a subsequent rise in temperature to 180°C (4 h). After cooling, the solid mass was ground to a powder and sieved into fractions of defined particle sizes. The fractions of modified clay were subsequently impregnated with an aqueous solution of rhodium chloride (RhCl3·3H2O, 0.5-1.0 mass % Rh). The obtained pillar-structured bentonite retains its heat resistance and porous structure above 773 K. The pillar-structured bentonite was used to prepare a 1.0% Ru/carrier (modified bentonite) supported catalyst, over which the alkylation of C2-C4 hydrocarbons was carried out at a partial pressure of hydrogen of 0.5-1.0 MPa. The combined yield of 2,2,4-trimethylpentane and 2,2,3-trimethylpentane reached 40%, and in the alkylation of the butane-butene mixture the isooctane yield reached 60%. Under the conditions studied, ethene did not undergo alkylation.

Keywords: alkylation, butene, pillar structure, ruthenium catalyst

Procedia PDF Downloads 375
404 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico's population has seen a rise in various negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, generating an important impact on a person's physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person's emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive physiological sensors, such as EEG, luminosity and face recognition, we gather information on the subject's current emotional state. In a controlled environment, a subject is shown a series of images selected from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the specific groups of information that relate to the subject's emotions and to the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural network based control algorithm is fed the data obtained, in order to close the feedback loop and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject's emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject's emotional state: red illumination increased the impact of violent images, while green illumination, along with relaxing images, decreased the subject's levels of anxiety. Specific differences between men and women were found as to which types of images generated a greater impact in either gender. The population sample was mainly constituted by college students, whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us better insight into the possibilities of emotional domotics and the applications that can be created to improve performance in people's lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since the same technology could also be applied to control a person's emotions and shift their decision making.
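
As a toy version of the control idea (ours, not the authors' algorithm), the snippet below trains a tiny two-layer network to map normalized sensor features to environment set-points. The feature and output layouts, and the training pairs, are invented to encode the finding reported above that higher anxiety should drive greener, cooler settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Features: [arousal, valence, luminosity]; outputs: [R, G, B, temp], all in
# [0, 1]. Targets are invented: higher arousal (anxiety) -> less red, more
# green, and a lower normalized temperature set-point.
X = rng.uniform(0, 1, (200, 3))
Y = np.column_stack([1 - X[:, 0],           # red
                     X[:, 0],               # green
                     0.3 * np.ones(200),    # blue, kept constant
                     0.8 - 0.4 * X[:, 0]])  # temperature set-point

W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 4)); b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid keeps outputs in (0,1)

for _ in range(2000):                  # plain batch gradient descent on MSE
    h, out = forward(X)
    g2 = (out - Y) * out * (1 - out) / len(X)
    g1 = (g2 @ W2.T) * (1 - h * h)
    W2 -= 0.5 * (h.T @ g2); b2 -= 0.5 * g2.sum(0)
    W1 -= 0.5 * (X.T @ g1); b1 -= 0.5 * g1.sum(0)

_, setpoint = forward(np.array([[0.9, 0.2, 0.6]]))  # an anxious subject
print("R, G, B, temp set-points:", np.round(setpoint[0], 2))
```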

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 118
403 Generating Ideas to Improve Road Intersections Using Design with Intent Approach

Authors: Omar Faruqe Hamim, M. Shamsul Hoque, Rich C. McIlroy, Katherine L. Plant, Neville A. Stanton

Abstract:

Road safety has become an alarming issue, especially in low- and middle-income developing countries. Traditional approaches lack out-of-the-box thinking, leaving engineers confined to applying the usual techniques for making roads safer. A socio-technical approach has recently been introduced to improving road intersections through designing with intent. The Design with Intent (DWI) approach aims to give practitioners a more nuanced approach to design and behavior, working with people, people's understanding, and the complexities of everyday human experience. It is a collection of design patterns, and a design and research approach, for exploring the interactions between design and people's behavior across products, services, and environments, both digital and physical. Through this approach, design for behavior change can be applied to social and environmental problems, as well as commercially. It comprises a total of 101 cards across eight different lenses, namely architectural, error-proofing, interaction, ludic, perceptual, cognitive, Machiavellian, and security, each with its own distinct way of extracting ideas from the participants. For this research, a three-legged accident-blackspot intersection on a national highway was chosen for the DWI workshop. Participants from varying fields, such as civil engineering, naval architecture and marine engineering, urban and regional planning, and sociology, actively participated in a day-long workshop. The participants were given a preamble on the accident scenario and a brief overview of the DWI approach. Design cards from the various lenses were distributed among the 10 participants, who were given an hour and a half to brainstorm and generate ideas for improving the safety of the selected intersection. After the brainstorming session, the participants went through roundtable discussions of the ideas they had come up with, and ideas were accepted or rejected by consensus of the forum. The generated ideas were then synthesized and agglomerated into an improvement scheme for the selected intersection. The most significant improvement ideas from the DWI approach were color coding of traffic lanes for separate vehicle types, channelizing the existing bare intersection, providing advance warning signs, cautionary signs and educational signs motivating road users to drive safely, and using textured surfaces with rumble strips on the approach to the intersection. The motive of this approach is to draw new ideas from the road users themselves, rather than depending solely on traditional schemes, to increase the efficiency and safety of roads and to ensure the compliance of road users, since these features are generated from the minds of the users themselves.

Keywords: design with intent, road safety, human experience, behavior

Procedia PDF Downloads 111
402 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features of interest, and any higher resolution is lost in this resampling. When the topographic features are instead computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point; the number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of the regression parameters and the variance: any doubling of the window size in each direction takes only a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected so as to minimize variance. The approach thereby adjusts the effective window size to the landform features characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the slope results. The relevant length scale is taken to be half the size of the window over which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost compared with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap at each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
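
The additive-aggregation scheme can be sketched in a few lines. The snippet below is a simplification of the method described above: it uses non-overlapping 2x2 aggregation only, normalizes residual variance per degree of freedom, and selects the minimum-variance scale per pixel. The exact bookkeeping for overlapping windows and the variance normalization used by the authors are not specified here, so those details are our assumptions.

```python
import numpy as np

def pyramid_slopes(dem, cell=1.0, levels=4):
    """Plane-fit slope from additively aggregated regression statistics.
    Aggregates 2x2 non-overlapping windows level by level and returns, for
    each pixel, the slope from the window size with the smallest residual
    variance per degree of freedom."""
    h, w = dem.shape
    assert h % 2**levels == 0 and w % 2**levels == 0
    ys, xs = np.mgrid[0:h, 0:w] * cell
    z = dem.astype(float)
    # per-cell statistics: n, Sx, Sy, Sz, Sxx, Syy, Sxy, Sxz, Syz, Szz
    S = np.stack([np.ones_like(z), xs, ys, z, xs*xs, ys*ys, xs*ys,
                  xs*z, ys*z, z*z])
    best_slope = np.zeros((h, w))
    best_var = np.full((h, w), np.inf)
    best_scale = np.zeros((h, w), dtype=int)
    for lvl in range(1, levels + 1):
        # additive 2x2 aggregation: one cheap pass per doubling of window size
        S = S[:, 0::2, :] + S[:, 1::2, :]
        S = S[:, :, 0::2] + S[:, :, 1::2]
        n, Sx, Sy, Sz, Sxx, Syy, Sxy, Sxz, Syz, Szz = S
        cxx, cyy, cxy = Sxx - Sx*Sx/n, Syy - Sy*Sy/n, Sxy - Sx*Sy/n
        cxz, cyz = Sxz - Sx*Sz/n, Syz - Sy*Sz/n
        det = cxx*cyy - cxy*cxy
        b = (cyy*cxz - cxy*cyz) / det              # dz/dx of fitted plane
        c = (cxx*cyz - cxy*cxz) / det              # dz/dy of fitted plane
        resid = (Szz - Sz*Sz/n - b*cxz - c*cyz) / np.maximum(n - 3, 1)
        slope = np.sqrt(b*b + c*c)
        rep = 2 ** lvl                             # window edge length
        resid_full = np.kron(resid, np.ones((rep, rep)))
        slope_full = np.kron(slope, np.ones((rep, rep)))
        up = resid_full < best_var
        best_var[up], best_slope[up], best_scale[up] = \
            resid_full[up], slope_full[up], rep
    return best_slope, best_scale, best_var

# Noisy tilted plane: the true planar slope is about 0.054.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64].astype(float)
dem = 0.05 * xx + 0.02 * yy + rng.normal(0, 0.01, (64, 64))
slope, scale, _ = pyramid_slopes(dem)
print("median slope:", round(float(np.median(slope)), 3))
```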

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 104
401 Applying a SWOT Analysis to Inform the Educational Provision of Learners with Autism Spectrum Disorders

Authors: Claire Sciberras

Abstract:

Introduction: Autism Spectrum Disorder (ASD) has become recognized as the most common childhood neurological condition. Indeed, numerous studies demonstrate an increase in the prevalence rate of children diagnosed with ASD. Concurrent with these findings, the European Agency for Special Needs and Inclusive Education reported a similar escalating tendency in prevalence in Malta. Such an increase within the educational context in Malta has led the European Agency to call for increased support within Maltese educational settings. However, although research has addressed the positive impact of mainstream education on learners with ASD, empirical studies of the internal and external strengths and weaknesses present within the support provided in mainstream settings in Malta are distinctly limited. In light of the aforementioned argument, Malta would benefit from research that analyses the strengths, weaknesses, opportunities, and threats (SWOTs) present within the support provision for learners with ASD in mainstream primary schools. Such a SWOT analysis is crucial, as a lack of appropriate opportunities might jeopardize the educational and social experiences of persons with ASD throughout their schooling. Methodology: A mixed-methods approach is well suited to examining the provision of support for learners with ASD, as the combination of qualitative and quantitative approaches allows researchers to collect a comprehensive range of data and validate their results. Hence, it is intended that questionnaires will be distributed to all the stakeholders involved, so as to acquire a broad perspective from the wider group who provide support to students with ASD across schools in Malta. Moreover, a qualitative approach in the form of interviews with a sample group will be implemented, as it would potentially allow the researcher to gather an in-depth perspective on the nature of the services currently provided to learners with ASD. Intentions of the study: Through the analysis of the data collected on the SWOTs within the provision of support for learners with ASD, it is intended that: i) a description of the educational provision for learners with ASD within mainstream primary schools in Malta will be acquired, in light of the experiences and perceptions of the stakeholders involved; ii) an analysis of the SWOTs that exist within the services for learners with ASD in primary state schools in Malta will be carried out; and iii) based on the SWOT analysis, recommendations will be provided that can lead to improvements in practice in the field of ASD in Malta and beyond. Conclusion: Owing to the heterogeneity of individuals with ASD, which spans several deficits related to the social communication and interaction domain as well as areas linked to restricted, repetitive behavioural patterns, educational settings need to adapt their standards to the needs of their students. The standards established by schools in earlier phases do not remain applicable forever, and they therefore need to be reviewed periodically in accordance with the diversity and the necessities of their learners.

Keywords: autism spectrum disorders, mainstream educational settings, provision of support, SWOT analysis

Procedia PDF Downloads 158
400 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. "GOOD" or "BAD") stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority of records classified as "GOOD" clients (clients that respect the loan return calendar) alongside a small percentage of "BAD" clients. But it is the "BAD" clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism, LINQ expression trees, readily available in the C# programming language, which enables us to construct executable pieces of code at runtime. As the name implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among them the well-known Australian and German credit data sets, and the performance indicators are: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
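
The paper evolves C# LINQ expression trees; as a language-agnostic illustration of the same genetic-programming loop (random expression trees, protected operators, subtree crossover and mutation, accuracy-based selection), here is a minimal Python sketch with a synthetic data set. The threshold rule "score > 0 means BAD", the operator set, all constants, and the absence of bloat control are our simplifying assumptions.

```python
import math
import operator
import random

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0,  # protected division
       'sin': math.sin, 'abs': abs}
BINARY, UNARY = ['+', '-', '*', '/'], ['sin', 'abs']

def random_tree(n_vars, depth=3):
    """Grow a random expression tree over variables x0..x{n_vars-1}."""
    if depth == 0 or random.random() < 0.3:
        return ('x', random.randrange(n_vars)) if random.random() < 0.5 \
            else ('c', random.uniform(-2, 2))
    if random.random() < 0.7:
        op = random.choice(BINARY)
        return (op, random_tree(n_vars, depth - 1), random_tree(n_vars, depth - 1))
    return (random.choice(UNARY), random_tree(n_vars, depth - 1))

def evaluate(t, x):
    if t[0] == 'x':
        return x[t[1]]
    if t[0] == 'c':
        return t[1]
    if t[0] in UNARY:
        return OPS[t[0]](evaluate(t[1], x))
    v = OPS[t[0]](evaluate(t[1], x), evaluate(t[2], x))
    return max(-1e6, min(1e6, v))        # clamp so sin() etc. stay finite

def paths(t, p=()):
    """Pre-order traversal of subtree positions (the 'flattened' view)."""
    yield p
    for i in range(1, len(t)):
        if isinstance(t[i], tuple):
            yield from paths(t[i], p + (i,))

def get(t, p):
    for i in p:
        t = t[i]
    return t

def put(t, p, sub):
    if not p:
        return sub
    lst = list(t)
    lst[p[0]] = put(t[p[0]], p[1:], sub)
    return tuple(lst)

def crossover(a, b):
    return put(a, random.choice(list(paths(a))),
               get(b, random.choice(list(paths(b)))))

def mutate(t, n_vars):
    return put(t, random.choice(list(paths(t))), random_tree(n_vars, depth=2))

def accuracy(t, X, y):
    """Score > 0 is read as 'BAD'; returns the fraction of correct labels."""
    return sum((evaluate(t, x) > 0) == lbl for x, lbl in zip(X, y)) / len(y)

def evolve(X, y, pop=60, gens=40):
    n_vars = len(X[0])
    P = [random_tree(n_vars) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda t: -accuracy(t, X, y))
        elite = P[:pop // 4]                 # simple truncation selection
        P = elite + [mutate(crossover(random.choice(elite),
                                      random.choice(elite)), n_vars)
                     for _ in range(pop - len(elite))]
    return max(P, key=lambda t: accuracy(t, X, y))

random.seed(0)
X = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(200)]
y = [x[0] + 2 * x[1] - 0.5 > 0 for x in X]   # synthetic 'BAD' rule
best = evolve(X, y)
print("training accuracy:", accuracy(best, X, y))
```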

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 95
399 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure

Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati

Abstract:

The effective shear and bulk viscosity, as well as the dynamic viscosity, describe the rheological properties of a ceramic body during the liquid phase sintering process. The rheological parameters depend on the physical and thermomechanical characteristics of the material such as relative density, temperature, grain size, diffusion coefficient, and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during the liquid phase sintering process, such as the stress-strain relations, the sintering and hydrostatic stress, and the prediction of anisotropic shrinkage and heterogeneous densification as a function of sintering time, including the simultaneous influence of the gravity field and frictional force. After raw materials analysis, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three different experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during the liquid phase sintering process is implemented in the CREEP user subroutine in ABAQUS. The numerical-experimental procedure shows the anisotropic behavior, the complete difference in spatial displacement along the three directions, and the incompressibility of the ceramic samples during the sintering process. An anisotropic shrinkage factor has been proposed to investigate the shrinkage anisotropy. It has been shown that the shrinkage along the normal axis of the casting sample is about 1.5 times larger than that along the casting direction, and that the gravitational force in pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint distorted sample, respectively. In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in von Mises, pressure, and principal stresses intensifies the relative density non-uniformity in all samples, except in the free sintering one. The symmetrical distribution of stress around the center of the free sintering sample acts to hinder pyroplastic deformation. Densification results confirmed that the effective bulk viscosity was well defined by the relative density values. The stress analysis confirmed that the sintering stress exceeds the hydrostatic stress from the start to the end of the sintering time, so that, from both the theoretical and experimental points of view, the sintering process runs to completion.
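
The proposed anisotropic shrinkage factor is, in essence, a ratio of linear shrinkages in two directions. The following is a minimal sketch of that bookkeeping in Python, with hypothetical green/fired dimensions chosen to reproduce the reported ~1.5 ratio; the authors' exact factor definition is not reproduced here.

    def linear_shrinkage(l0, l):
        """Linear shrinkage (L0 - L) / L0 along one direction."""
        return (l0 - l) / l0

    # Hypothetical green/fired dimensions (mm) of a slip-cast bar.
    normal0, normal = 10.0, 8.95   # thickness, normal to the casting plane
    cast0, cast = 100.0, 93.0      # length, along the casting direction

    # Anisotropic shrinkage factor: ratio of the two linear shrinkages.
    k = linear_shrinkage(normal0, normal) / linear_shrinkage(cast0, cast)
    print(f"anisotropic shrinkage factor k = {k:.2f}")  # 1.50 for these values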

Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure

Procedia PDF Downloads 319
398 Smart Architecture and Sustainability in the Built Environment for the Hatay Refugee Camp

Authors: Ali Mohammed Ali Lmbash

Abstract:

The global refugee crisis points to the vital need for sustainable and resilient solutions to the many problems faced by displaced persons all over the world. Sustainability concerns are diverse, spanning energy consumption, waste management, water access, and the resilience of structures. Our research aims to develop distinct ideas for sustainable architecture given the exigent problems in disaster-threatened areas, starting with the Hatay Refugee Camp in Turkey, where the majority of the camp dwellers are Syrian refugees. Commencing with community-based participatory research focused on the socio-environmental issues of displaced populations, this study applies two approaches with a specific focus on the Hatay region. The initial experiment uses Richter's predictive model and simulations to forecast earthquake outcomes in refugee camps. The result could be useful in implementing architectural design tactics that enhance structural reliability and ensure the security and safety of shelters through earthquakes. In the second experiment, a model is generated to help predict the quality of the existing water sources, given how vital water is to human well-being. This research aims to enable camp administrators to employ forward-looking practices while managing water resources, thus minimizing health risks as well as building the resilience of the refugees in the Hatay area. Beyond these experiments, this research assesses other sustainability problems of the Hatay Refugee Camp as well. As energy consumption becomes a major issue, housing developers are required to consider energy-efficient designs and the feasible integration of renewable energy technologies to minimize the environmental impact and improve the long-term sustainability of housing projects. Waste management is given special attention, with recycling initiatives and waste reduction measures imposed to slow the pace of environmental degradation in the camp's land area. The study also gives insight into the social and economic reality of the camp, investigating the contribution of initiatives such as urban agriculture and vocational training to the enhancement of livelihoods and community empowerment. In this fashion, the study combines the latest research with practical experience in order to contribute to the continuing discussion on sustainable architecture in disaster relief, providing recommendations and information that can be adapted at every scale worldwide. Through collaborative efforts and a dedicated sustainability approach, we can jointly get to the root cause and work towards a more robust and equitable society.
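
As a hedged illustration of the second experiment, the sketch below trains a classifier on placeholder water-source measurements. The feature set, the toy labelling rule, and the model choice are assumptions made for illustration, not the study's actual model.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Placeholder features: turbidity (NTU), pH, conductivity (uS/cm).
    X = rng.normal(loc=[5.0, 7.2, 500.0], scale=[2.0, 0.5, 150.0], size=(300, 3))
    # Toy labelling rule standing in for lab-confirmed quality: 1 = unsafe.
    y = ((X[:, 0] > 6) | (X[:, 1] < 6.8)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))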

Keywords: smart architecture, Hatay Camp, sustainability, machine learning

Procedia PDF Downloads 19
397 Mobulid Ray Fishery Characteristics and Trends in East Java to Inform Management Decisions

Authors: Muhammad G. Salim, Betty J.L. Laglbauer, Sila K. Sari, Irianes C. Gozali, Fahmi, Didik Rudianto, Selvia Oktaviyani, Isabel Ender

Abstract:

Muncar, East Java, is one of the largest artisanal fisheries in Indonesia. Sharks and rays are caught as both target and bycatch, for local meat consumption and with some derived products exported. Of the seven mobulid ray species occurring in Indonesia, five have been recorded as retained bycatch at Muncar fishing port: the spinetail devil ray (Mobula mobular), the bentfin devil ray (Mobula thurstoni), the sicklefin devil ray (Mobula tarapacana), the oceanic manta ray (Mobula birostris) and the reef manta ray (Mobula alfredi). Both manta ray species are listed as Vulnerable by the International Union for Conservation of Nature and are protected in Indonesia, although they are still captured as bycatch, while all three devil ray species are listed as Endangered and do not currently benefit from any protection in Indonesian waters. Mobulid landings in East Java come primarily from small-scale drift gillnets, but they also occasionally occur on longlines and in purse-seines operating off the coast of East Java and in fishing grounds located as far away as the Makassar and Sumba Straits. Landing trends from 2015-2019 (non-continuous surveys) revealed that the highest abundance of mobulid rays at Muncar fishing port occurs during the upwelling season from June-October. During El Niño or above-average temperature years, this may extend until November (as in 2015 and 2019). The strong seasonal upwelling along the East Java coast is linked to higher zooplankton abundance (inferred from chlorophyll-a sea-surface concentrations), on which mobulids forage, along with the teleost fishes constituting the primary target of gillnet fisheries in the Bali Strait. Mobulid ray landings in Muncar were dominated by Mobula mobular, followed by M. thurstoni, M. tarapacana, M. birostris and M. alfredi; however, the catch varied across years and seasons. A majority of immature individuals were recorded in M. mobular and M. thurstoni, and slight decreases in landings, despite no known changes in fishing effort, were observed across the upwelling seasons of 2015-2018 for M. mobular. While all mobulids are listed on Appendix II of the Convention on International Trade in Endangered Species, which regulates international trade in the gill plates sought after in the Chinese medicine trade, local and national-level management measures are required to sustain mobulid populations. The findings presented here provide important baseline data from which potential management approaches can be identified.
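
The seasonal pattern described here is the kind of signal a simple aggregation of landing records exposes. The sketch below uses a handful of toy records, not the survey data, to show the month-by-month tabulation.

    import pandas as pd

    # Toy landing records (date of landing, species recorded at port).
    records = pd.DataFrame({
        "date": pd.to_datetime(["2015-07-03", "2015-07-18", "2016-02-11",
                                "2016-09-05", "2018-06-22", "2019-11-30"]),
        "species": ["M. mobular", "M. thurstoni", "M. mobular",
                    "M. mobular", "M. tarapacana", "M. mobular"],
    })
    records["month"] = records["date"].dt.month
    # Flag the June-October upwelling season and count landings per group.
    records["upwelling"] = records["month"].between(6, 10)
    print(records.groupby("upwelling")["species"].count())
    print(records.groupby(["month", "species"]).size())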

Keywords: devil ray, mobulid, manta ray, Indonesia

Procedia PDF Downloads 149
396 Sensory Interventions for Dementia: A Review

Authors: Leigh G. Hayden, Susan E. Shepley, Cristina Passarelli, William Tingo

Abstract:

Introduction: Sensory interventions are popular therapeutic and recreational approaches for people living with all stages of dementia. However, it is unknown which sensory interventions are used to achieve which outcomes across all subtypes of dementia. Methods: To address this gap, we conducted a scoping review of sensory interventions for people living with dementia. We searched the literature for any article published in English from 1 January 1990 to 1 June 2019 on any sensory or multisensory intervention targeted at people living with any kind of dementia which reported on patient health outcomes. We did not include complex interventions where only a small aspect was related to sensory stimulation. We searched the databases Medline, CINAHL, and Psych Articles using our institutional discovery layer. We conducted all screening in duplicate to reduce Type 1 and Type 2 errors. The data from all included papers were extracted by one team member and audited by another to ensure consistency of extraction and completeness of data. Results: Our initial search captured 7654 articles; the removal of duplicates (n=5329), of articles that did not pass title and abstract screening (n=1840), and of those that did not pass full-text screening (n=281) resulted in 174 included articles. The countries with the highest publication counts in this area were the United States (n=59), the United Kingdom (n=26) and Australia (n=15). The most common types of intervention were music therapy (n=36), multisensory rooms (n=27) and multisensory therapies (n=25). Seven articles were published in the 1990s, 55 in the 2000s, and the remainder since 2010 (n=112). Discussion: Multisensory rooms have been present in the literature since the early 1990s. More recently, however, nature/garden therapy, art therapy, and light therapy have emerged in the literature since 2008, an indication of the increasingly diverse scholarship in the area. The least popular type of intervention is a traditional food intervention. Taste as a sensory intervention is generally avoided for safety reasons; however, it shows potential for increasing quality of life. Agitation, behavior, and mood are common outcomes for all sensory interventions, whereas light therapy commonly targets sleep. The majority (n=110) of studies have very small sample sizes (n=20 or less), an indicator of the lack of robust data in the field. Additional small-scale studies of the known sensory interventions will likely do little to advance the field. However, there is a need for multi-armed studies which directly compare sensory interventions, and for more studies which investigate the layering of sensory interventions (for example, adding an aromatherapy component to a lighting intervention). In addition, large-scale studies which enroll people at early stages of dementia will help us better understand the potential of sensory and multisensory interventions to slow the progression of the disease.

Keywords: sensory interventions, dementia, scoping review

Procedia PDF Downloads 105
395 Genetic Diversity of Termite (Isoptera) Fauna of Western Ghats of India

Authors: A. S. Vidyashree, C. M. Kalleshwaraswamy, R. Asokan, H. M. Mahadevaswamy

Abstract:

Termites are vital ecological players in tropical ecosystems, having been designated 'ecosystem engineers' due to their significant role in providing soil ecosystem services. Despite their importance, our understanding of a number of basic biological processes in termites is extremely limited. Developing a better understanding of termite biology depends closely on consistent species identification. At present, identification of termites relies on the soldier caste, but for many species the soldier caste has not been described, which creates confusion in identification. The use of molecular markers may be helpful in estimating phylogenetic relatedness between termite species and in estimating genetic differentiation among local populations within each species. To understand this, termite samples were collected during 2013-15 from various places in the Western Ghats covering four states, namely Karnataka, Kerala, Tamil Nadu and Maharashtra. Termite samples were identified based on their morphological characteristics, molecular characteristics, or both. The survey of the termite fauna in Karnataka, Kerala, Maharashtra and Tamil Nadu indicated the presence of 16 species belonging to four subfamilies under two families, viz., Rhinotermitidae and Termitidae. Termitidae was the dominant family, represented by four subfamilies, viz., Macrotermitinae, Amitermitinae, Nasutitermitinae and Termitinae. Amitermitinae had three species, namely Microcerotermes fletcheri, M. pakistanicus and Speculitermes sinhalensis. Macrotermitinae had the highest number of species, belonging to two genera, namely Microtermes and Odontotermes. The genus Microtermes was represented by only one species, Microtermes obesi. The genus Odontotermes was represented by the highest number of species (7): O. obesus was the dominant (41 per cent) and most widely distributed species in Karnataka, Kerala, Maharashtra and Tamil Nadu, followed by O. feae (19 per cent), O. assmuthi (11 per cent) and others such as O. bellahunisensis, O. horni, O. redemanni and O. yadevi. Nasutitermitinae was represented by two genera with one species each, Nasutitermes anamalaiensis and Trinervitermes biformis. Termitinae was represented by Labiocapritermes distortus. Rhinotermitidae was represented by a single subfamily, Heterotermitinae, in which two species, namely Heterotermes balwanthi and H. malabaricus, were recorded. Genetic relationships among termites collected from various locations in the Western Ghats of India were characterized based on mitochondrial DNA sequences (12S, 16S, and COII). Sequence analysis was performed and divergence among the species was assessed. These results suggest that the use of both molecular and morphological approaches is crucial in ensuring accurate species identification. Efforts were made to understand termite evolution and to address the ambiguities in morphological taxonomy. The implications of the study for revising the taxonomy of Indian termites, their characterization and molecular comparisons between the sequences are discussed.
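
Divergence between aligned mitochondrial sequences can be quantified with an uncorrected p-distance, sketched below on toy fragments; the study's actual sequences and analysis software are not reproduced here.

    from itertools import combinations

    # Toy aligned fragments (e.g. a COII stretch); not the study's data.
    aligned = {
        "O_obesus": "ATGGCATTAGCC",
        "O_feae":   "ATGGCGTTAGCT",
        "M_obesi":  "ATAGCATTGGCC",
    }

    def p_distance(s1, s2):
        """Proportion of differing ungapped sites between aligned sequences."""
        pairs = [(a, b) for a, b in zip(s1, s2) if a != "-" and b != "-"]
        return sum(a != b for a, b in pairs) / len(pairs)

    for a, b in combinations(aligned, 2):
        print(f"{a} vs {b}: p = {p_distance(aligned[a], aligned[b]):.3f}")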

Keywords: Isoptera, mitochondrial DNA sequences, Rhinotermitidae, Termitidae, Western Ghats

Procedia PDF Downloads 247
394 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies

Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe

Abstract:

The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide range of knowledge-driven domains such as science, education and policy making. Nowadays, we are fed daily by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered an unsteady hyper-textual environment where websites emerge and expand every day. But there are structures inside knowledge: a given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings are calling out to each other: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads of the social and computational sciences. In particular, complex systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual content by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of the online political discussions related to the French presidential and legislative elections of 2017. We aim to build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles. Our goal is to reconstruct the temporal evolution of the online debates fueled by each political community during the elections. To that end, we introduce an iterative data exploration methodology implemented and tested within the free software Gargantext, where we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
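
The inter-temporal matching at the heart of phylomemy reconstruction can be caricatured as follows: term clusters extracted for consecutive periods are linked into lineages when their overlap exceeds a threshold. The sketch below uses Jaccard similarity on toy clusters; it is a deliberate simplification, not Gargantext's actual algorithm.

    # Link term clusters across consecutive time slices by Jaccard overlap.
    def jaccard(a, b):
        return len(a & b) / len(a | b)

    periods = [  # toy clusters for two consecutive periods of a debate
        {"c1": {"election", "debate", "macron"}, "c2": {"labour", "reform"}},
        {"c1": {"election", "macron", "vote"},
         "c2": {"labour", "reform", "strike"}},
    ]

    THRESHOLD = 0.4
    links = []
    for t in range(len(periods) - 1):
        for name_a, terms_a in periods[t].items():
            for name_b, terms_b in periods[t + 1].items():
                sim = jaccard(terms_a, terms_b)
                if sim >= THRESHOLD:  # conceptual lineage between t and t+1
                    links.append(((t, name_a), (t + 1, name_b), round(sim, 2)))
    print(links)

Chaining such links over many periods yields the branching, merging lineages that a phylomemy visualizes.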

Keywords: online political debate, French election, hyper-text, phylomemy

Procedia PDF Downloads 165
393 Islam and Democracy: A Paradoxical Study of Syed Maududi and Javed Ghamidi

Authors: Waseem Makai

Abstract:

The term 'political Islam' now seems to have gained centre stage in every discourse pertaining to Islamic legitimacy and compatibility in modern civilisations. A never-ceasing tradition of the philosophy of the caliphate, which keeps overriding any alternative political institution in the Muslim world, still permeates a huge faction of believers. Fully accustomed to the proliferation of changes and developments in the individual, social and natural dispositions of the world, Islamic theologians responded to this flux through both conventional and modernist approaches. The so-called conventional approach found its quintessential expression in the interpretations of Syed Maududi, who brought to it a comprehensive, academic and powerful vigour never seen before. He generated avant-garde scholarship which would bear testimony to his statements, made to uphold the political institution of Islam as supreme and noble. However, it was not his trait to challenge established views but to codify them in such a frame as a man of the 20th century would find captivating to his heart and satisfactory to his rationale. The delicate questions, such as the selection of a caliph, the implementation of Islamic commandments (Sharia), interest-free banking, imposing tax (Jazyah) on non-believers, waging holy war (Jihad) for the expansion of Islamic boundaries, stoning for adultery and capital punishment for apostates, were all there in his scholarship, which he spent the whole of his life defending in the best possible manner. What and where he went wrong with all this was to be pointed out later by his one-time disciple, Javed Ahmad Ghamidi. Ghamidi has been accused of steering between Scylla and Charybdis as he tries to remain steadfast to his basic Islamic tenets while modernising their interpretations to bring them into harmony with the Western ideals of democracy and liberty. His blatant acknowledgement of placing democracy on a high pedestal, his calling the implementation of Sharia a non-mandatory task, and his refusal to bracket people into the categories of Zimmi and Kaafir fully vindicate his stance against conventional narratives like that of Syed Maududi. Ghamidi goes to the extent of attributing current forms of radicalism and extremism, as exemplified in the operations of organisations like ISIS in Iraq and Syria and Tehreek-e-Taliban in Pakistan, to the version of political Islam upheld not only by Syed Maududi but also by other prominent theologians like Ibn-Timyah, Syed Qutub and Dr. Israr Ahmad. Ghamidi's stance has cost him dearly: his allegedly insubstantial claims earned him enough hostility that he left his homeland after two of his close allies were brutally murdered. Syed Maududi and Javed Ghamidi stand poles apart in their understanding of Islam and its political domain. Determining who has the appropriate methodology, scholarship and execution in his mode of comprehension is an intriguing task, worth carrying out in detail.

Keywords: caliphate, democracy, Ghamidi, Maududi

Procedia PDF Downloads 173
392 Valorisation of Food Waste Residue into Sustainable Bioproducts

Authors: Krishmali N. Ekanayake, Brendan J. Holland, Colin J. Barrow, Rick Wood

Abstract:

Globally, more than one-third of all food produced is lost or wasted, equating to 1.3 billion tonnes per year. Around 31.2 million tonnes of food waste are generated across the production, supply, and consumption chain in Australia. Generally, food waste management processes adopt environmentally friendly and more sustainable approaches such as composting, anaerobic digestion and energy recovery technologies. However, unavoidable and non-recyclable food waste ends up in landfill or incineration, which involve many undesirable environmental impacts and challenges. A biorefinery approach contributes to a waste-minimising circular economy by converting food and other organic biomass waste into valuable outputs, including feeds, nutrition, fertilisers, and biomaterials. As a solution, Green Eco Technologies has developed a food waste treatment process using the WasteMaster system. The system uses charged oxygen and moderate temperatures to convert food waste, without bacteria, additives, or water, into a virtually odour-free, much reduced quantity of reusable residual material. In the context of a biorefinery, the WasteMaster dries and mills food waste into a form suitable for storage or for downstream extraction/separation/concentration to create products. The focus of this study is to determine the nutritional composition of WasteMaster-processed residue for the potential development of aquafeed ingredients. The global aquafeed industry is projected to become a high-value market with strong demand for aquafeed products. Food waste can therefore be utilized for aquaculture feed development while reducing landfill. This framework would lessen the requirement for raw crop cultivation for aquafeed development and reduce the aquaculture footprint. In the present study, the nutritional elements of the processed residue are consistent with the input food waste type, showing that the WasteMaster does not affect the expected nutritional distribution. The macronutrient retention values of protein, lipid, and nitrogen free extract (NFE) were >85%, >80%, and >95%, respectively. Sensitive food components including omega-3 and omega-6 fatty acids, amino acids, and phenolic compounds were found intact in each residue material. Preliminary analysis suggests price comparability with current aquafeed ingredient costs, supporting economic feasibility. The results suggest strong potential for the residue to provide 5 to 10% of aquafeed ingredients, replacing or partially substituting other less sustainable ingredients in a biorefinery setting. Our aim is to improve the sustainability of aquaculture and reduce the environmental impacts of food waste.

Keywords: biorefinery, food waste residue, input, WasteMaster

Procedia PDF Downloads 35
391 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe like Delhi, Dhaka, or Manila have shown that development often struggles to bridge the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of an ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automated responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application with its prototypical nature and an inclusive approach with mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region's cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature. This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales in design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open-sourcing are identified and validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Bernard Tschumi's Space, Event, and Movement and Kevin Lynch's Five-Point Mental Map, among others, are deeply rooted in the research process. Beyond the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 120
390 STEM (Science–Technology–Engineering–Mathematics) Based Entrepreneurship Training Within a Learning Company

Authors: Diana Mitova, Krassimir Mitrev

Abstract:

To prepare the current generation for the future, education systems need to change. This implies a way of learning that meets the demands of the times and the environment in which we live. Productive interaction in the educational process implies an interactive learning environment and the possibility of personal development of learners based on communication and mutual dialogue, cooperation and good partnership in decision-making. Students need not only theoretical knowledge but transferable skills that will help them become inventors and entrepreneurs and implement ideas. STEM education is now a real necessity for the modern school. Through learning in a "learning company", students master examples from classroom practice, simulate real-life situations and group activities, and apply basic interactive learning strategies and techniques. The learning company is the subject of this study, focused on entrepreneurship training through STEM technologies that encourage students to think outside the traditional box. STEM learning focuses the teacher's efforts on modeling entrepreneurial thinking and behavior in students and helping them solve problems in the world of business and entrepreneurship. Learning based on the implementation of various STEM projects in extracurricular activities, experiential learning, and an interdisciplinary approach are means by which educators better connect the local community and private businesses. Learners learn to be creative, to experiment and take risks, and to work in teams – the leading characteristics of any innovator and future entrepreneur. This article presents some European policies on STEM and entrepreneurship education. It also shares best practices for training within a learning company, with STEM integrated into the learning company's training environment. The main results boil down to identifying some advantages and problems in STEM entrepreneurship education. The benefits of using integrative approaches to teach STEM within a training company are identified, as well as the positive effects of project-based learning in a training company using STEM. Best practices for teaching entrepreneurship through extracurricular activities using STEM within a training company are shared. The following research methods are applied in this paper: a theoretical and comparative analysis of the principles and policies of European Union countries and Bulgaria in the field of entrepreneurship education through a training company; the sharing of experiences in entrepreneurship education through extracurricular activities with STEM application within a training company; and a questionnaire survey to investigate the motivation of secondary vocational school students to learn entrepreneurship through a training company and their readiness to start their own business after completing their education. Within the framework of learning through a "learning company" with the integration of STEM, the activity of the teacher-facilitator includes counseling, supervising and advising students during their work. The expectation is that students acquire the key competence "initiative and entrepreneurship" and that cooperation between the vocational education system and business in Bulgaria becomes more effective.

Keywords: STEM, entrepreneurship, training company, extracurricular activities

Procedia PDF Downloads 77
389 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than current state-of-the-art results. The prediction quality is expected to increase for larger systems with respective historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
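
A minimal sketch of such a pipeline: test-step texts are vectorized, component sets are binarized, and a one-vs-rest classifier predicts the label set, scored with subset accuracy (exact-match ratio). The step texts and component names below are invented for illustration, not taken from the evaluated systems.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score   # subset accuracy for multi-label
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.preprocessing import MultiLabelBinarizer

    steps = ["open ignition and check dashboard",
             "send CAN message and log response",
             "open ignition and log response",
             "check dashboard after CAN message"]
    components = [{"IgnitionCtrl", "DashboardCheck"}, {"CanBus", "Logger"},
                  {"IgnitionCtrl", "Logger"}, {"DashboardCheck", "CanBus"}]

    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(components)           # one indicator column per component
    X = TfidfVectorizer().fit_transform(steps)  # text features from specifications

    clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
    print("subset accuracy:", accuracy_score(Y, clf.predict(X)))
    print("components for first step:", mlb.inverse_transform(clf.predict(X[:1])))

With multi-label indicator arrays, scikit-learn's accuracy_score computes exactly the exact-match ratio, i.e. the 'Subset Accuracy' criterion named in the abstract.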

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 97
388 Valorization of Banana Peels for Mercury Removal in Environmental Realist Conditions

Authors: E. Fabre, C. Vale, E. Pereira, C. M. Silva

Abstract:

Introduction: Mercury is one of the most troublesome toxic metals responsible for the contamination of aquatic systems, due to its accumulation and bioamplification along the food chain. The 2030 Agenda for Sustainable Development of the United Nations promotes the improvement of water quality by reducing water pollution and calls for enhanced wastewater treatment, encouraging recycling and safe water reuse globally. Sorption processes are widely used in wastewater treatment due to their many advantages, such as high efficiency and low operational costs. In these processes the target contaminant is removed from the solution by a solid sorbent; the more selective and low-cost the biosorbent, the more attractive the process becomes. Agricultural wastes are especially attractive candidates for sorption: they are largely available, have no commercial value and require little or no processing. In this work, banana peels were tested for mercury removal from low-concentration solutions. In order to investigate the applicability of this solid, six water matrices were used, increasing in complexity from natural waters to a real wastewater. Studies of kinetics and equilibrium were also performed, using the best-known models to evaluate the viability of the process. In line with the concept of circular economy, this study adds value to this by-product as well as contributing to liquid waste management. Experimental: The solutions were prepared with a Hg(II) initial concentration of 50 µg L-1 in natural waters, at 22 ± 1 ºC, pH 6, under magnetic stirring at 650 rpm and with a biosorbent mass of 0.5 g L-1. NaCl was added to obtain the salt solutions, seawater was collected from the Portuguese coast, and the real wastewater was kindly provided by ISQ - Instituto de Soldadura e Qualidade (Welding and Quality Institute) and diluted to the same concentration of 50 µg L-1. Banana peels were previously freeze-dried, milled and sieved, and the particles < 1 mm were used. Results: Banana peels removed more than 90% of Hg(II) from all the synthetic solutions studied; in these cases, an increase in the complexity of the water type promoted higher mercury removal. In salt waters, the biosorbent showed removals of 96%, 95% and 98% for 3, 15 and 30 g L-1 of NaCl, respectively, and the residual concentration of Hg(II) in solution achieved the level of the drinking water regulation (1 µg L-1). For real matrices, the lower Hg(II) elimination (93% for seawater and 81% for the real wastewater) can be explained by the competition between the Hg(II) ions and the other elements present in these solutions for the sorption sites. Regarding the equilibrium study, the experimental data are better described by the Freundlich isotherm (R² = 0.991). The Elovich equation provided the best fit to the kinetic points. Conclusions: The results demonstrate the great ability of banana peels to remove mercury. The environmentally realistic conditions studied in this work highlight their potential use as biosorbents in water remediation processes.
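
For reference, fitting the Freundlich isotherm qe = KF·Ce^(1/n) to equilibrium points is a short exercise; the sketch below uses toy data, not the paper's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def freundlich(ce, kf, n):
        """Freundlich isotherm: sorbed amount vs equilibrium concentration."""
        return kf * ce ** (1.0 / n)

    # Hypothetical equilibrium points for illustration only.
    ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0])      # ug L-1 at equilibrium
    qe = np.array([12.0, 17.5, 25.0, 41.0, 58.0])  # ug g-1 sorbed

    (kf, n), _ = curve_fit(freundlich, ce, qe, p0=(10.0, 1.5))
    residuals = qe - freundlich(ce, kf, n)
    r2 = 1 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
    print(f"K_F = {kf:.2f}, n = {n:.2f}, R^2 = {r2:.3f}")

The empirical, multi-parameter form of the Freundlich model is typically preferred over Langmuir when sorption sites are heterogeneous, which is consistent with a raw agricultural sorbent such as banana peel.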

Keywords: banana peels, mercury removal, sorption, water treatment

Procedia PDF Downloads 132