698 STEM (Science–Technology–Engineering–Mathematics) Based Entrepreneurship Training, Within a Learning Company
Authors: Diana Mitova, Krassimir Mitrev
Abstract:
To prepare the current generation for the future, education systems need to change. This implies a way of learning that meets the demands of the times and the environment in which we live. Productive interaction in the educational process implies an interactive learning environment and the possibility of personal development of learners based on communication and mutual dialogue, cooperation and good partnership in decision-making. Students need not only theoretical knowledge, but transferable skills that will help them to become inventors and entrepreneurs and to implement ideas. STEM education is now a real necessity for the modern school. Through learning in a "learning company", students work through examples from classroom practice, simulate real-life situations and group activities, and apply basic interactive learning strategies and techniques. The learning company is the subject of this study, reduced to entrepreneurship training in STEM technologies that encourage students to think outside the traditional box. STEM learning focuses the teacher's efforts on modeling entrepreneurial thinking and behavior in students and helping them solve problems in the world of business and entrepreneurship. Learning based on the implementation of various STEM projects in extracurricular activities, experiential learning, and an interdisciplinary approach are means by which educators better connect the local community and private businesses. Learners learn to be creative, to experiment and take risks, and to work in teams - the leading characteristics of any innovator and future entrepreneur. This article presents some European policies on STEM and entrepreneurship education. It also shares best practices for learning-company training, with the integration of STEM into the learning-company training environment. The main results boil down to identifying some advantages and problems in STEM entrepreneurship education.
The benefits of using integrative approaches to teach STEM within a training company are identified, as well as the positive effects of project-based learning in a training company using STEM. Best practices for teaching entrepreneurship through extracurricular activities using STEM within a training company are shared. The following research methods are applied in this paper: a theoretical and comparative analysis of the principles and policies of European Union countries and Bulgaria in the field of entrepreneurship education through a training company; a sharing of experiences in entrepreneurship education through extracurricular activities with STEM application within a training company; and a questionnaire survey to investigate the motivation of secondary vocational school students to learn entrepreneurship through a training company and their readiness to start their own business after completing their education. Within the framework of learning through a "learning company" with the integration of STEM, the activity of the teacher-facilitator includes counseling, supervising and advising students during their work. The expectation is that students acquire the key competence of "initiative and entrepreneurship" and that cooperation between the vocational education system and business in Bulgaria becomes more effective. Keywords: STEM, entrepreneurship, training company, extracurricular activities
Procedia PDF Downloads 96
697 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period
Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer
Abstract:
Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes in pregnancy cause an increased load on a normal lymphatic system which can result in a transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy until they have recovered postnatally. This study explored the experiences of Australian women with pre-existing lymphoedema during recent pregnancy and the early postnatal period to determine how their usual lymphoedema management strategies were adapted and what were their additional or unmet needs. Interactions with their obstetric care providers, the hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling is appropriate to explore this topic in a small target population as lymphoedema in women of childbearing age is uncommon, with prevalence data unavailable. Inclusion criteria were aged over 18 years, diagnosed with primary or secondary lymphoedema of the arm or leg, pregnant within the preceding ten years (since 2012), and had their pregnancy and postnatal care in Australia. Exclusion criteria were a diagnosis of lipedema and if unable to read or understand a reasonable level of English. A mixed-method qualitative design was used in two phases. 
This involved an online survey (REDCap platform) of the participants, followed by online semi-structured interviews or focus groups to provide the transcript data for inductive thematic analysis to gain an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were found to be significantly impacted by pregnancy, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying their pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not happen despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are considering pregnancy or are pregnant, and their supporters, including health care providers. Keywords: lymphoedema, management strategies, pregnancy, qualitative
Procedia PDF Downloads 85
696 Isolation, Selection and Identification of Bacteria for Bioaugmentation of Paper Mills White Water
Authors: Nada Verdel, Tomaz Rijavec, Albin Pintar, Ales Lapanje
Abstract:
Objectives: White water circuits of woodfree paper mills contain suspended, dissolved, and colloidal particles, such as cellulose, starch, paper sizings, and dyes. By closing the white water circuits, these particles start to accumulate and affect production. Due to the high amount of organic matter, which scavenges radicals and adsorbs onto catalyst surfaces, treatment of white water with photocatalysis is inappropriate. The most suitable approach should be bioaugmentation-assisted bioremediation. Accordingly, the objectives were: - to isolate bacteria capable of degrading organic compounds used in the papermaking process - to select the most active bacteria for bioaugmentation. Status: The state of the art of bioaugmentation of pulp and paper mill effluents is mostly based on the biodegradation of lignin, whereas in white water circuits of woodfree paper mills only papermaking compounds are present. As far as one can tell from the literature, a study of the degradation activities of bacteria for all possible compounds of the papermaking process is a novelty. Methodology: The main parameters of the selected white water were systematically analyzed over a period of two months. Bacteria were isolated on selective media with a particular carbon source. The organic substances used as carbon sources enter white water circuits either as base paper or as recycled broke. The screening of bacterial activities for starch, cellulose, latex, polyvinyl alcohol, alkyl ketene dimers, and resin acids was followed by the addition of Lugol's solution. Degraders of polycyclic aromatic dyes were selected by cometabolism tests; cometabolism is the simultaneous biodegradation of two compounds, in which the degradation of the second compound depends on the presence of the first. The obtained strains were identified by 16S rRNA sequencing. Findings: 335 autochthonous strains were isolated on plates with a selected carbon source. The isolated strains were selected according to their degradation of the particular carbon source.
The ultimate degraders of cationic starch, cellulose, and sizings are Pseudomonas sp. NV-CE12-CF and Aeromonas sp. NV-RES19-BTP. The most active strains capable of degrading azo dyes are Aeromonas sp. NV-RES19-BTP and Sphingomonas sp. NV-B14-CF. Klebsiella sp. NV-Y14A-BTP degrades the polycyclic aromatic dye direct blue 15 as well as a yellow dye, Agromyces sp. NV-RED15A-BF and Cellulosimicrobium sp. NV-A4-BF are specialists for whitener, and Aeromonas sp. NV-RES19-BTP is a general degrader of all compounds. Bacteria adapted to the white water were isolated and selected according to their degradation activities for particular organic substances. Most of the isolated bacteria are specialized, which lowers competition in the microbial community. Degraders of readily biodegradable compounds do not degrade recalcitrant polycyclic aromatic dyes and vice versa. General degraders are rare. Keywords: bioaugmentation, biodegradation of azo dyes, cometabolism, smart wastewater treatment technologies
Procedia PDF Downloads 203
695 Revolutionizing Oil Palm Replanting: Geospatial Terrace Design for High-precision Ground Implementation Compared to Conventional Methods
Authors: Nursuhaili Najwa Masrol, Nur Hafizah Mohammed, Nur Nadhirah Rusyda Rosnan, Vijaya Subramaniam, Sim Choon Cheak
Abstract:
Replanting in oil palm cultivation is vital to enable the introduction of new planting materials and provides an opportunity to improve the road, drainage, terrace design, and planting density. Oil palm replanting is fundamentally necessary every 25 years. The adoption of a digital replanting blueprint is imperative, as it can assist the Malaysian oil palm industry in addressing challenges such as labour shortages and limited expertise related to replanting tasks. Effective replanting planning should commence at least 6 months prior to the actual replanting process. Therefore, this study helps to plan and design a replanting blueprint with high-precision translation on the ground. With the advancement of geospatial technology, it is now feasible to engage in thoroughly researched planning, which can help maximize the potential yield. A blueprint designed before replanting enhances management's ability to optimize the planting program, address manpower issues, and even increase productivity. In terrace planting blueprints, geographic tools have been utilized to design the roads, drainages, terraces, and planting points based on the ARM standards. These designs are mapped with location information and undergo statistical analysis. The geospatial approach is essential in precision agriculture, ensuring an accurate translation of the design to the ground by implementing high-accuracy technologies. In this study, geospatial and remote sensing technologies played a vital role. LiDAR data was employed to derive a Digital Elevation Model (DEM), enabling the precise selection of terraces, while ortho imagery was used for validation purposes. Throughout the design process, Geographical Information System (GIS) tools were extensively utilized.
To assess the design's reliability on the ground compared with the current conventional method, high-precision GPS instruments like the EOS Arrow Gold and HIPER VR GNSS were used, both offering accuracy levels between 0.3 cm and 0.5 cm. A nearest-distance analysis was generated to compare the design with actual planting on the ground. The analysis could not be applied to the roads due to discrepancies between the actual roads and the blueprint design, which resulted in minimal variance. In contrast, the terraces closely adhered to the GPS markings, with the largest deviation being less than 0.5 m compared to the actual terraces constructed. Considering the required slope for terrace planting, which must be greater than 6 degrees, the study found that approximately 65% of the terracing was constructed on a 12-degree slope, while over 50% of the terracing was constructed on slopes exceeding the minimum. Blueprint-based replanting offers promising strategies for optimizing land utilization in agriculture. This approach harnesses technology and meticulous planning to yield advantages including increased efficiency, enhanced sustainability, and cost reduction. Practical implementation of this technique can lead to tangible and significant improvements in the agricultural sector. To boost efficiency further, future initiatives will require more sophisticated techniques and the incorporation of precision GPS devices for upcoming blueprint replanting projects; this strategic progression aims to guarantee the precision of both the blueprint design stages and the subsequent implementation in the field. Looking ahead, automating digital blueprints is necessary to reduce time, workforce, and costs in commercial production. Keywords: replanting, geospatial, precision agriculture, blueprint
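The nearest-distance analysis described above pairs each as-built point with its closest blueprint point. A minimal sketch of that computation follows; the coordinates, the ~10 cm scatter, and the scipy-based approach are illustrative assumptions, not the study's data or tooling:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical blueprint terrace planting points (metres, local grid).
design = np.array([[0.0, 0.0], [9.0, 0.0], [18.0, 0.0],
                   [0.0, 8.0], [9.0, 8.0]])

# Hypothetical as-built points surveyed with an RTK GNSS receiver,
# simulated here as the design plus ~10 cm of random scatter.
rng = np.random.default_rng(1)
actual = design + rng.normal(scale=0.1, size=design.shape)

# Nearest-distance analysis: distance from each as-built point to the
# closest blueprint point.
dist, _ = cKDTree(design).query(actual)
print(f"max deviation: {dist.max():.3f} m")
```

With real survey data, `design` and `actual` would come from the GIS layer and the GNSS log respectively; the maximum of `dist` corresponds to the "largest deviation" statistic reported above.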
Procedia PDF Downloads 82
694 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal
Authors: A. D. Rao, Sachiko Mohanty
Abstract:
Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features like the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important in submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013, and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons respectively, during which high-temporal-resolution in-situ data sets are available. The model is initially validated through spectral estimates of density and the baroclinic velocities. From the estimates, it is inferred that internal tides of semi-diurnal frequency are dominant in both observations and model simulations for November-December and March-April. However, in August, the spectral estimate peaks at the near-inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitude are found to be well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semi-diurnal internal tides, as they represent 90-95% of the total variance for all seasons.
The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared to the other two seasons. The model simulation suggests that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The model-simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence local mixing due to the internal tide is greatest at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results analysed. Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal
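The EOF decomposition used above to attribute 70-80% of the variance to Mode-1 can be illustrated with a toy calculation; the synthetic two-mode velocity field below is an assumption for demonstration, not the MITgcm output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic baroclinic velocity record: 200 time steps x 15 depth levels,
# built from two prescribed vertical modes plus weak noise.
t = np.linspace(0.0, 20.0, 200)[:, None]
z = np.linspace(0.0, 1.0, 15)[None, :]
mode1 = np.sin(np.pi * z)            # first vertical mode shape
mode2 = np.sin(2.0 * np.pi * z)      # second vertical mode shape
u = (3.0 * np.cos(2.0 * np.pi * t) * mode1
     + 1.0 * np.cos(4.0 * np.pi * t) * mode2
     + 0.1 * rng.standard_normal((200, 15)))

# EOF analysis: remove the time mean, then take the SVD of the anomalies.
anom = u - u.mean(axis=0)
_, s, _ = np.linalg.svd(anom, full_matrices=False)
var_frac = s**2 / np.sum(s**2)       # variance fraction per EOF mode

print(f"Mode-1 variance fraction: {var_frac[0]:.0%}")
print(f"First three modes:       {var_frac[:3].sum():.0%}")
```

Because the first prescribed mode carries most of the energy, the leading EOF captures most of the variance, mirroring the Mode-1 dominance reported in the abstract.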
Procedia PDF Downloads 170
693 Use of Cassava Waste and Its Energy Potential
Authors: I. Inuaeyen, L. Phil, O. Eni
Abstract:
Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy needs. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for stimulating climate change. As a result, there has been a shift towards cleaner and renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, and bio-heat using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels. The bio-refinery concept promotes efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials. This will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for bio-refinery processes. Cassava is a staple food in Nigeria and one of the foodstuffs most widely cultivated by farmers across Nigeria. As a result, there is an abundant supply of cassava waste in Nigeria. The aim of this study is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol, and furfural using a combination of biochemical, thermochemical and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using the Aspen Plus process simulation software.
Process economic analysis will be done using the Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted. This includes studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key publications with useful parameters for developing the different cassava waste conversion processes have been identified. In future work, detailed modelling of the different process scenarios will be carried out and the models validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will be carried out to identify the best scenario, using process economics, life cycle analysis, energy efficiency and social impact as the performance indexes. Keywords: bio-refinery, cassava waste, energy, process modelling
Procedia PDF Downloads 373
692 Vertical Village Buildings as Sustainable Strategy to Re-Attract Mega-Cities in Developing Countries
Authors: M. J. Eichner, Y. S. Sarhan
Abstract:
The overall purpose of this study has been the evaluation of ‘Vertical Villages’ as a new sustainable building typology that significantly reduces the negative impacts of rapid urbanization processes in third-world capital cities. Commonly in fast-growing cities, housing and job supply, educational and recreational opportunities, as well as public transportation infrastructure, do not accommodate rapid population growth, exposing people to living environments with high noise and emission pollution, low-quality neighborhoods and a lack of recreational areas. Like many others, Egypt’s capital city Cairo, which according to the UN faces annual population growth of up to 428,000 people, is struggling to address the general deterioration of urban living conditions. New settlement typologies and urban reconstruction approaches hardly follow sustainable urbanization principles or socio-ecologic urbanization models, with severe effects not only for inhabitants but also for the local environment and global climate. The authors show that ‘Vertical Village’ buildings can offer a sustainable solution for increasing urban density while at the same time significantly improving living quality and the urban environment. By inserting them within high-density urban fabrics, the ecological and socio-cultural conditions of low-quality neighborhoods can be transformed into districts that consider all needs of sustainable and social urban life. This study analyzes existing building typologies in Cairo’s «low quality - high density» districts Ard el Lewa, Dokki and Mohandesen according to benchmarks for sustainable residential buildings, identifying major problems and deficits. In three case study design projects, the sustainable transformation potential of ‘Vertical Village’ buildings is laid out, and comparative studies show the improvement of the urban microclimate, safety, social diversity, sense of community, aesthetics, privacy, efficiency, healthiness and accessibility.
The main result of the paper is that the disadvantages of density and overpopulation in developing countries can be converted into advantages with ‘Vertical Village’ buildings, achieving attractive and environmentally friendly living environments with multiple synergies. The paper documents, based on scientific criteria, that mixed-use vertical building structures, designed according to the sustainable principles of low-rise housing, can serve as an alternative for converting «low quality - high density» districts in megacities, opening a pathway for governments to achieve sustainable urban transformation goals. Neglected informal urban districts, home to millions of the poorer population groups, can be converted into healthier living and working environments. Keywords: sustainable, architecture, urbanization, urban transformation, vertical village
Procedia PDF Downloads 124
691 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms
Authors: Dimitrios Kafetzopoulos
Abstract:
Nowadays, companies are increasingly concerned with adopting their own strategies for increased efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that fosters changes in companies' operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization's competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in terms of products and processes helps companies to gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency and sustainability).
The constructs of radical and incremental operational changes, each treated as one variable, have been subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normal distribution and outliers were checked. Moreover, the unidimensionality, reliability and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms' efficiency. From the above, it is apparent that embodying the concept of changes in a firm's product and process operational practices has direct and positive consequences for what it achieves from an efficiency and sustainability perspective. Keywords: incremental operational changes, radical operational changes, efficiency, sustainability
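One concrete piece of the reliability assessment mentioned above is Cronbach's alpha for a multi-item construct. A minimal sketch follows; the simulated correlated responses are an illustrative assumption, not the study's survey data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one construct.
    items: array of shape (n_respondents, k_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1.0 - item_var / total_var)

# Simulated construct: four items driven by one latent factor plus noise,
# mimicking correlated Likert-type responses from 100 respondents.
rng = np.random.default_rng(42)
latent = rng.normal(size=(100, 1))
responses = latent + 0.5 * rng.normal(size=(100, 4))

alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")
```

Values above roughly 0.7 are conventionally taken as acceptable reliability before proceeding to factor analysis and SEM.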
Procedia PDF Downloads 136
690 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria
Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale
Abstract:
This empirical study examined the allocative efficiency of smallholder tuber crop farmers in North-Central Nigeria. Data used for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers from the study area. Descriptive statistics, data envelopment analysis and a Tobit regression model were used to analyze the data. The DEA result on the classification of the farmers into efficient and inefficient showed that 17.67% of the sampled tuber crop farmers in the study area were operating at the frontier and optimum level of production, with a mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve on their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency in the study area showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size increased, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as farmers acquired more farming experience. The allocative efficiency index of farmers who belonged to a cooperative society was 0.23, while their counterparts without cooperative membership had an index value of 0.21. The results also showed that the allocative efficiency index was 0.43 for farmers with higher formal education and decreased to 0.16 for farmers with non-formal education.
The efficiency level in the allocation of resources increased with more contact with extension services, as the allocative efficiency index increased from 0.16 to 0.31 as the frequency of extension contact rose from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, but this reduces to an average of 15% as the farmer grows old. It is therefore recommended that enhanced research, extension delivery and farm advisory services be put in place for farmers who did not attain the optimum frontier level, so that they can learn how to attain the remaining 74.39% of allocative efficiency through better production practices from the robustly efficient farms. This will go a long way towards increasing the efficiency level of the farmers in the study area. Keywords: allocative efficiency, DEA, Tobit regression, tuber crop
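The DEA classification above (efficiency 1.00 on the frontier, below 1.00 otherwise) can be sketched as an input-oriented CCR model solved by linear programming; the four-farm dataset and function name below are invented for illustration, not the study's data:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA: radial efficiency score (theta) per DMU.
    X: (n_dmus, n_inputs) inputs; Y: (n_dmus, n_outputs) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]                      # minimize theta
        # Input constraints:  sum_j lambda_j * x_j <= theta * x_k
        A_in = np.hstack([-X[k][:, None], X.T])
        # Output constraints: sum_j lambda_j * y_j >= y_k
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[k]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

# Hypothetical farms: inputs = (land, labour), output = tuber yield.
X = np.array([[2.0, 3.0], [4.0, 6.0], [4.0, 4.0], [6.0, 4.0]])
Y = np.array([[1.0], [2.0], [1.0], [1.0]])
print(dea_ccr_input(X, Y))
```

Farms whose input-output mix lies on the efficient frontier score 1.0; the others receive the proportional input contraction needed to reach it, which is the sense in which 82.33% of the sampled farmers were found to be below the frontier.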
Procedia PDF Downloads 289
689 Evaluation of Role of Surgery in Management of Pediatric Germ Cell Tumors According to Risk Adapted Therapy Protocols
Authors: Ahmed Abdallatif
Abstract:
Background: Patients with malignant germ cell tumors have an age distribution with two peaks, the first during infancy and the second after the onset of puberty. Gonadal germ cell tumors are the most common malignant ovarian tumor in females aged below twenty years. Sacrococcygeal and retroperitoneal abdominal tumors usually reach a large size before the onset of symptoms. Methods: Patients with pediatric germ cell tumors presenting to the Children’s Cancer Hospital Egypt and the National Cancer Institute Egypt from January 2008 to June 2011 were included. Patients underwent stratification into low-, intermediate- and high-risk groups according to the Children's Oncology Group classification. Objectives: Assessment of the clinicopathologic features of all cases of pediatric germ cell tumors and classification of malignant cases according to their stage and primary site into low-, intermediate- and high-risk patients. Evaluation of surgical management in each group of patients, focusing on the surgical approach, the extent of surgical resection according to each site, the ability to achieve complete surgical resection, and perioperative complications. Finally, determination of the three-year overall and disease-free survival in the different groups and the relation to different prognostic factors, including the extent of surgical resection. Results: Out of 131 cases surgically explored, only 26 cases had re-exploration, with 8 cases explored for residual disease, 9 cases for remote recurrence or metastatic disease, and the other 9 cases for other complications. Patients with low risk were kept under follow-up after surgery; of the low-risk group (48 patients), only 8 patients (16.5%) shifted to intermediate risk. There were 20 patients (14.6%) diagnosed as intermediate risk who received 3 cycles of compressed chemotherapy (Cisplatin, Etoposide and Bleomycin), and all high-risk group patients, 69 patients (50.4%), received chemotherapy.
The stage of disease was strongly and significantly related to overall survival, with poorer survival in late stages (stage IV) compared to earlier stages. Conclusion: The overall survival rate at 3 years was 76.7% ± 5.4 and the 3-year EFS was 77.8% ± 4.0, while the 3-year DFS was much better (89.8% ± 3.4) in the whole study group, with ovarian tumors having a significantly higher overall survival (90% ± 5.1). Event-free survival analysis showed that males were 3 times more likely to have adverse events than females. Patients who underwent incomplete resection were 4 times more likely than patients with complete resection to have adverse events. Disease-free survival analysis showed that patients who underwent incomplete surgery were 18.8 times more liable to recurrence compared to those who underwent complete surgery, and patients who were exposed to re-excision were 21 times more prone to recurrence compared to other patients. Keywords: extragonadal, germ cell tumors, gonadal, pediatric
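The 3-year OS, EFS and DFS rates quoted above are the kind of figures produced by Kaplan-Meier estimation, which handles censored follow-up. A minimal sketch of the estimator on invented follow-up data (months; not the study's patients):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time per patient; events: 1 = event, 0 = censored.
    Returns a list of (event_time, survival_probability) steps."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        # Count events and total subjects leaving the risk set at time t.
        d = sum(1 for j in order[i:] if times[j] == t and events[j] == 1)
        removed = sum(1 for j in order[i:] if times[j] == t)
        if d > 0:
            surv *= 1 - d / at_risk   # S(t) drops only at event times
            curve.append((t, surv))
        at_risk -= removed
        i += removed
    return curve

# Hypothetical follow-up data (months); 0 marks a censored observation.
times = [6, 12, 12, 18, 24, 30, 36, 36, 40, 48]
events = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
print(kaplan_meier(times, events))
```

Reading the survival probability off such a curve at 36 months gives a "3-year survival rate"; comparing curves between groups (e.g. complete vs. incomplete resection) underlies the hazard-ratio-style statements above.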
Procedia PDF Downloads 218
688 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. ‘The Author as Producer’ is an essay that Benjamin delivered at the Communist Institute for the Study of Fascism in Paris. In it, Benjamin relates directly to the dialectic between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and thus directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production: he must prepare his readers to become writers, even making this possible for them by engineering an ‘improved apparatus’, and must work toward turning consumers into producers and collaborators. In today’s world, transforming readers, spectators, consumers or users into collaborators and co-producers has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple and Google, through platforms such as Facebook, YouTube and Amazon’s CreateSpace and Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. Under these global market monopolies, it has become increasingly difficult for a user of Facebook or Google to gain insight into how one’s writing and collaboration is used, captured, and capitalized.
Through the lens of this study, it could be argued that this criticism could well be taken up by digital producers, or even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered; are they put in a position where they can understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means used against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the ‘authors’ and the ‘prodUsers’ in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 142
687 OER on Academic English, Educational Research and ICT Literacy, Promoting International Graduate Programs in Thailand
Authors: Maturos Chongchaikit, Sitthikorn Sumalee, Nopphawan Chimroylarp, Nongluck Manowaluilou, Thapanee Thammetha
Abstract:
The 2015 Kasetsart University Research Plan, funded by the national research institutes TRF and NRCT, comprises four sub-research projects on the development of three OER websites and on their usage by students in international programs. The goal was to develop open educational resources (OER), in the form of websites, that promote three key skills for quality learning and achievement, namely Academic English, Educational Research, and ICT Literacy, among graduate students in international programs in Thailand. Statistics from the Office of Higher Education show that the number of foreign students coming to study in Thai international higher education has increased by 25 percent per year, indicating that Thailand’s international education system and institutes are already recognized regionally and globally as meeting the standards. The output of the plan, the OER websites and their materials, and the outcome, students’ learning improvement supported by lecturers’ readiness for open educational media, will ultimately build the country’s capability in international education services in the future ASEAN Community. The OER innovation is aimed at sharing quality knowledge with the world, adopting Creative Commons licenses that make free sharing possible (the 5Rs of openness), without charge, and leading to self-directed and life-long learning. Starting from the problem of low usage of existing English-language OER, the research developed OER on the three specific skills and tried them out with a sample of 100 students randomly selected from the international graduate programs of the top 10 Thai universities, according to the QS Asia University Rankings 2014. The R&D process was used for product evaluation in 2 stages: the development stage and the usage study stage.
The research tools were questionnaires for content and OER experts, questionnaires for the sample group, and open-ended interviews for the focus group discussions. The data were analyzed using frequency, percentage, mean and SD. The findings revealed that the developed websites were fully qualified as OER by the experts. The students’ opinions and satisfaction were at the highest levels for both the content and the technology used for presentation. The usage manual and self-assessment guide were finalized during the focus group discussions. Direct participation in 5Rs openness activities through the tools provided by OER models such as MERLOT and OER Commons, together with the development of the usage manual and self-assessment guide, emerged as a key approach to extending the output widely and sustainably to networks of users in various higher education institutions.
Keywords: open educational resources, international education services business, academic English, educational research, ICT literacy, international graduate program, OER
Procedia PDF Downloads 222
686 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class is absolutely rare, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class and within-class imbalance simultaneously for binary classification problems. Removing both imbalances simultaneously eliminates the classifier's bias towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing between-class and within-class imbalance simultaneously helps it give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. The proposed method can thus serve as a good alternative for problem domains that typically involve imbalanced data sets, such as credit scoring, customer churn prediction, and financial distress prediction.
Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
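The pipeline described in this abstract, model-based clustering of the minority class followed by adaptive oversampling of its sub-clusters, can be sketched roughly as follows. This is only an illustration of the general idea under simplifying assumptions (a Gaussian mixture as the model-based clusterer, SMOTE-style interpolation within sub-clusters, and synthetic examples allocated inversely to sub-cluster size); it omits the paper's complexity measure and the Lowner-John ellipsoid step, and the function name is invented for the sketch.

```python
# Sketch: cluster-aware oversampling of the minority class. A Gaussian
# mixture finds sub-clusters, then each sub-cluster is oversampled in
# inverse proportion to its size so small sub-concepts are not dominated.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_aware_oversample(X_min, n_target, n_components=2, seed=0):
    rng = np.random.default_rng(seed)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X_min)
    labels = gmm.predict(X_min)
    counts = np.bincount(labels, minlength=n_components)
    need = n_target - len(X_min)            # synthetic points to generate
    # allocate more synthetic examples to smaller sub-clusters
    weights = 1.0 / np.maximum(counts, 1)
    weights = weights / weights.sum()
    alloc = np.floor(weights * need).astype(int)
    alloc[0] += need - alloc.sum()          # absorb rounding remainder
    synth = []
    for k in range(n_components):
        members = X_min[labels == k]
        if len(members) < 2 or alloc[k] <= 0:
            continue
        # interpolate between random pairs within the sub-cluster (SMOTE-style)
        i = rng.integers(0, len(members), alloc[k])
        j = rng.integers(0, len(members), alloc[k])
        lam = rng.random((alloc[k], 1))
        synth.append(members[i] + lam * (members[j] - members[i]))
    return np.vstack([X_min] + synth) if synth else X_min
```

The paper sizes each sub-cluster's share by complexity rather than by raw count; the inverse-size rule above is just the simplest stand-in that preserves the intuition of protecting small sub-concepts.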
Procedia PDF Downloads 418
685 Survey of the Literacy by Radio Project as an Innovation in Literacy Promotion in Nigeria
Authors: Stella Chioma Nwizu
Abstract:
The National Commission for Adult and Non-Formal Education (NMEC) in Nigeria is charged with reducing the illiteracy rate through the development, monitoring, and supervision of literacy programmes in Nigeria. In spite of various efforts by NMEC, the literature shows that the illiteracy rate remains high. According to NMEC/UNICEF, about 60 million Nigerians are non-literate, and nearly two-thirds of them are women. This situation forced the government to search for innovative and better approaches to literacy promotion and delivery. The literacy by radio project was adopted as an innovative intervention in literacy delivery in Nigeria because radio is the cheapest and most easily affordable medium for non-literates. The project aimed at widening access to literacy programmes for non-literate, marginalized and disadvantaged groups in Nigeria by taking literacy programmes to their doorsteps. Literacy by radio has worked well for illiteracy reduction in Cuba. This innovative intervention is anchored in Rogers' diffusion of innovation theory. The literacy by radio project has been running for fifteen years, and the efficacy and contributions of this innovation need to be investigated. Thus, the purpose of this research is to review the contributions of literacy by radio in Nigeria. The researcher adopted a survey research design for the study. The population for the study consisted of 2,706 participants and 47 facilitators of the literacy by radio programme in the 10 pilot states in Nigeria. A sample from four states, comprising 302 participants and eight facilitators, was used for the study. Information was collected through Focus Group Discussions (FGD), interviews and content analysis of official documents. The data were analysed qualitatively to review the contributions of the literacy by radio project and determine the efficacy of this innovative approach to facilitating literacy in Nigeria.
Results from the field experience showed, among others, that more non-literates have better access to literacy programmes through this innovative approach. The pilot project was 88% successful; not less than 2,110 adults were made literate through the literacy by radio project in 2017. However, lack of enthusiasm and commitment on the part of the technical committee and facilitators due to non-payment of honoraria, poor signals from radio stations, interruption of lectures by adverts, and low community involvement in project decision-making challenge the success rate of the project. The researcher acknowledges the need to customize all materials and broadcasts in all the dialects of the participants and to include more civil rights, environmental protection and agricultural skills in the project. The study recommends, among others, improved and timely funding of the project by the Federal Government to enable NMEC to fulfill its obligations towards the greater success of the programme, the setting up of independent radio stations for airing the programmes, and proper monitoring and evaluation of the project by NMEC and State Agencies for greater effectiveness. In an era of the knowledge-driven economy, no one should be left saddled with the weight of illiteracy.
Keywords: innovative approach, literacy, project, radio, survey
Procedia PDF Downloads 65
684 Heat Stress a Risk Factor for Poor Maternal Health: Evidence from South India
Authors: Vidhya Venugopal, Rekha S.
Abstract:
Introduction: Climate change and the growing frequency of higher average temperatures and heat waves have detrimental health effects, especially for vulnerable groups with limited socioeconomic status (SES) or limited physiological capacity to adapt to or endure high temperatures. Little research has been conducted on the effects of heat stress on pregnant women and fetuses in tropical regions such as India. Very high ambient temperatures may worsen Adverse Pregnancy Outcomes (APOs) and are a major worry under climate change. The relationship between rising temperatures and APOs must be better understood in order to design more effective interventions. Methodology: We conducted an observational cohort study involving 865 pregnant women in various districts of Tamil Nadu between 2014 and 2021. Physiological Heat Strain Indicators (HSI), namely morning and evening Core Body Temperature (CBT) and Urine Specific Gravity (USG), were monitored using an infrared thermometer and a refractometer, respectively. A validated, modified version of the HOTHAPS questionnaire was utilised to collect self-reported health symptoms. A follow-up was undertaken with the mothers to collect information on birth outcomes and APOs, such as spontaneous abortions, stillbirths, Preterm Birth (PTB), birth abnormalities, and Low Birth Weight (LBW). Major findings of the study: According to our findings, ambient temperatures (mean WBGT°C) were substantially high (>28°C) for approximately 46% of women performing moderate daily life activities. Dehydration and heat-related complaints were reported by 82% of these women, versus 43% of the others. 34% of women had USG >1.020, which is indicative of dehydration. Among APOs, spontaneous abortions were prevalent at 2.2%, stillbirth/preterm birth/birth abnormalities at 2.2%, and low birth weight at 16.3%.
With exposure to WBGT >28°C, the incidence of miscarriage or spontaneous abortion rose approximately 2.7-fold (95% CI: 1.1-6.9). In addition, higher WBGT exposures were associated with a 1.4-fold increased risk of unfavorable birth outcomes (95% Confidence Interval [CI]: 1.02-1.09). The risk of spontaneous abortion was 2.8 times higher among women who conceived during the hotter months (February-September) than among women who conceived in the cooler months (October-January) (95% CI: 1.04-7.4). The positive relationships between ambient heat and APOs found in this study call for further exploration of the underlying factors through extensive cohort studies, to generate information that enables the formulation of policies that can effectively protect these women against excessive heat stress for enhanced maternal and fetal health.
Keywords: heat exposures, community, pregnant women, physiological strain, adverse outcome, interventions
Procedia PDF Downloads 84
683 Implication of Woman’s Status on Child Health in India
Authors: Rakesh Mishra
Abstract:
India’s demography has always amazed the world because of its unprecedented outcomes amid multifaceted socioeconomic and geographical characteristics. Although it was the first country to implement family planning, in 1952, India now has the world's second largest population, and a state like Uttar Pradesh alone contributes a population that would rank fifth in the world, surpassing Brazil. With such numbers, India is all the more prone to the demographic disparities persisting across its territories, brought about by inequalities in the availability, accessibility and attainability of socioeconomic and other resources. The fifth Millennium Development Goal emphasizes improving maternal and child health across the world, as children's development is very important for the overall development of society, and the best way to develop national human resources is to take care of children. The target is to reduce infant deaths by three quarters between 1990 and 2015. Child health status depends on care and delivery by trained personnel, particularly through institutional facilities, which is further associated with the status of the mother. However, delivery in institutional facilities and delivery by skilled personnel are rising only slowly in India. The main objective of the present study is to measure child health status by the educational and occupational background of women in India. The study indicates that women's education plays a crucial role in determining the health of the newborn and access to family planning, but women's autonomy shows mixed results across the states of India. It is observed that rural women are 1.61 times more likely to exclusively breastfeed their children than urban women. Relative to the Hindu category, women belonging to other religious communities were 21 percent less likely to exclusively breastfeed their child.
Taking scheduled caste as the reference category, the odds of exclusive breastfeeding decrease among the other castes, significantly so in the general category. Women of high educational status have higher odds of using family planning methods in most of the southern states of India. By and large, girls and boys are about equally undernourished. Undernutrition is generally lower for first births than for subsequent births and consistently increases with increasing birth order for all measures of nutritional status. Notably, at age 12-23 months, when many children are being weaned from breast milk, 30 percent of children are severely stunted and around 21 percent are severely underweight. This paper thus presents evidence on the patterns of prevailing child health status in India and its states with reference to mothers' socioeconomic and biological characteristics, examines trends in these, and discusses plausible explanations.
Keywords: immunization, exclusive breastfeeding, under five mortality, binary logistic regression, ordinal regression and life table
Procedia PDF Downloads 265
682 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogenous Muslim Consumers in the UK
Authors: Aisha Ijaz
Abstract:
The paper focuses on the impact of Muslim religiosity on convenience food purchases and on the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where an increasing inclination toward industrialized food items coexists with a renewed interest in natural foods. The United Kingdom serves as an appropriate setting for this study due to its growing Muslim population, paralleled by the expanding halal food market. A multi-dimensional framework is proposed, testing for five forms of involvement: Purchase Decision Involvement, Product Involvement, Behavioural Involvement, Intrinsic Risk and Extrinsic Risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey of 141 Muslims during the summer of 2020 in Liverpool, in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment took place through local mosques and community centers. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into 4 clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced variances were observed in the type of involvement experienced by religiosity group, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics.
Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing in order to make the optimal food choice, relying heavily on product label sources. Highly religious unaware Muslims are less dietarily acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third generation, is engaged with the purchase process because its members worry about making unsuitable food choices. Research implications are outlined, and potential avenues for further exploration are identified.
Keywords: consumer behaviour, consumption, convenience food, religion, Muslims, UK
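As a rough open-source analogue of the SPSS Two-Step Cluster Analysis used in this study, one can standardize the input variables, fit a mixture model for several candidate cluster counts, and keep the solution with the lowest BIC (one of the criteria SPSS offers for automatically choosing the number of clusters). The sketch below is illustrative only; the function name and candidate range are assumptions, and SPSS's actual procedure additionally pre-clusters cases before a hierarchical grouping step.

```python
# Sketch: choose the number of consumer segments by BIC over standardized
# input variables, then return the segment label for each respondent.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

def segment_consumers(X, k_range=range(2, 7), seed=0):
    Xs = StandardScaler().fit_transform(X)
    models = [GaussianMixture(n_components=k, random_state=seed).fit(Xs)
              for k in k_range]
    best = min(models, key=lambda m: m.bic(Xs))   # lowest BIC wins
    return best.n_components, best.predict(Xs)
```

In the study's terms, `X` would hold the six input variables (intrinsic religiosity plus the five involvement dimensions) for the 141 respondents, and the returned labels would correspond to the four market segments.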
Procedia PDF Downloads 56
681 ‘Only Amharic or Leave Quick!’: Linguistic Genocide in the Western Tigray Region of Ethiopia
Authors: Merih Welay Welesilassie
Abstract:
Language is a potent instrument that serves not only the purpose of communication but also plays a pivotal role in shaping our cultural practices and identities. The right to choose one's language is a fundamental human right that helps safeguard the integrity of both personal and communal identities. Language holds immense significance in Ethiopia, a nation with a diverse linguistic landscape in which language extends beyond mere communication to delineate administrative boundaries. Consequently, depriving Ethiopians of their linguistic rights represents a multifaceted punishment, more complex than a food embargo. In the aftermath of the civil war that shook Ethiopia in November 2020, displacing millions and resulting in the loss of hundreds of thousands of lives, concerns have been raised about the preservation of the indigenous Tigrayan language and culture. This is particularly true following the annexation of western Tigray into the Amhara region and the implementation of an Amharic-only language and culture education policy. This scholarly inquiry explores the intricacies surrounding the Amhara regional state's prohibition of Tigrayans' indigenous language and culture and the subsequent adoption of a monolingual and monocultural Amhara language and culture in western Tigray. The study adopts the conceptual framework of linguistic genocide as an analytical tool to gain deeper insight into the factors that contributed to and facilitated this significant linguistic and cultural shift. The research was conducted by interviewing ten teachers selected through snowball sampling. Additionally, document analysis was performed to support the findings. The findings revealed that the push for linguistic and cultural assimilation was driven by various political and economic factors and by the desire to promote a single language and culture policy. This process, often referred to as ‘Amharanization,’ aims to homogenize the culture and language of the society.
The Amhara authorities have enacted several measures in pursuit of their objectives, including outlawing the Tigrigna language, punishing those who speak Tigrigna, imposing the Amhara language and culture, mandating relocation, and even committing heinous acts that have inflicted immense physical and emotional suffering upon members of the Tigrayan community. A comprehensive analysis of the contextual factors, actions, intentions, and consequences suggests that instances of linguistic genocide may be taking place in the western Tigray region. The present study sheds light on the severe consequences that can arise from implementing monolingual and monocultural policies in multilingual areas. Through a thorough scrutiny of the implications of such policies, this study provides insightful recommendations and directions for future research in this critical area.
Keywords: linguistic genocide, linguistic human right, mother tongue, Western Tigray
Procedia PDF Downloads 65
680 Structure Conduct and Performance of Rice Milling Industry in Sri Lanka
Authors: W. A. Nalaka Wijesooriya
Abstract:
The increase in paddy production, the stabilization of domestic rice consumption, and the increasing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, the country's major rice milling hubs. Concentration indices reveal that the rice milling industry operates as a weak oligopsony in Polonnaruwa and is highly competitive in Hambanthota. By actual quantity of paddy milled per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) process more than 20 Mt/day. In Hambanthota, nearly 50% of the mills fall in the 8-20 Mt/day range. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital, and difficulty finding an output market are the major entry barriers to the industry. The major problems faced by all rice millers are the lack of a uniform electricity supply and low-quality paddy. Many of the millers emphasized that the rice ceiling price is a constraint on producing quality rice. More than 80% of the millers in Polonnaruwa, the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as color sorters and water jet polishers. The major paddy purchasing method of large-scale millers in Polonnaruwa is through brokers, while in Hambanthota the major channel is millers purchasing directly from paddy farmers. Millers in both districts sell rice mainly in Colombo and its suburbs. Huge variation can be observed in the amount of pledge loans (for paddy storage). There is a strong relationship among storage ability, credit affordability, and the scale of operation of rice millers. Inter-annual price fluctuation ranged from 30% to 35%.
Analysis of market margins using secondary data series shows that the farmers' share of the rice consumer price is stable or slightly increasing in both districts, with a greater share going to the farmer in Hambanthota. Only four mills have obtained Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution, and all of them are small-quantity rice exporters. Priority should be given to small and medium scale millers in the distribution of PMB storage paddy during the off season. The industry needs a proper rice grading system, and it is recommended to introduce a ceiling price based on rice graded according to the standards. Both husk and rice bran are underutilized; encouraging investment in establishing a rice oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured in order to ensure the sustainability of the industry.
Keywords: conduct, performance, structure (SCP), rice millers
Procedia PDF Downloads 328
679 Rediscovering English for Academic Purposes in the Context of the UN’s Sustainable Development Goals
Authors: Sally Abu Sabaa, Lindsey Gutt
Abstract:
In an attempt to use education as a way of raising socially responsible and engaged global citizens, the YU-Bridge program, the largest and fastest pathway program of its kind in North America, has embarked on the journey of integrating general themes from the UN's Sustainable Development Goals (SDGs) into its English for Academic Purposes (EAP) curriculum. The purpose of this initiative was to redefine the general philosophy of education in the middle of a pandemic and to align with York University's University Academic Plan, released in summer 2020 and framed around the SDGs. The YUB program attracts international students from all over the world, mainly from China, and its goal is to enable students to achieve the minimum language requirement to join their undergraduate courses at York University. Along with measuring outcomes, objectives, and students' GPA, instructors and academics continually seek to innovate the YUB curriculum to adapt to the ever-growing challenges of university academics, focusing more on the subject matter students will encounter in their undergraduate studies. However, with the sudden changes brought globally by the COVID-19 pandemic and by natural disasters such as the increase in forest fires and floods, rethinking the philosophy and goal of education was a must. Accordingly, the SDGs became the solid pillars upon which we, the academics and administrators of the program, could build a new curriculum and shift our perspective from simply ESL education to education with moral and ethical goals. The preliminary implementation of this initiative was supported by an institution-wide consultation with EAP instructors who have diverse experiences, disciplines, and interests.
Along with brainstorming sessions and mini pilot projects preceding the integration of the SDGs into the YUB EAP curriculum, those meetings produced a general outline of a curriculum and an assessment framework with the SDGs at its core and ESL as the medium of language instruction. Accordingly, a community of knowledge exchange was spontaneously created and facilitated by instructors. This has led to knowledge, resources, and teaching pedagogies being shared and examined further. In addition, instructors share their experiences and students' reactions, leading to constructive discussions about the opportunities and challenges of integrating the SDGs. The discussions have branched out to cultural and political barriers, along with a thirst for knowledge and engagement, resulting in increased engagement not only on the part of the students but of the instructors as well. Later in the program, two surveys will be conducted, one for students and one for instructors, to measure the level of engagement of each in this initiative and to elicit suggestions for further development. This paper will describe this fundamental step in using ESL methodology as a mode of disseminating essential ethical and socially responsible knowledge for all learners in the 21st century, the students' reactions, and the teachers' involvement and reflections.
Keywords: EAP, curriculum, education, global citizen
Procedia PDF Downloads 184
678 Listening to Voices: A Meaning-Focused Framework for Supporting People with Auditory Verbal Hallucinations
Authors: Amar Ghelani
Abstract:
People with auditory verbal hallucinations (AVH) who seek support from mental health services commonly report feeling unheard and invalidated in their interactions with social workers and psychiatric professionals. Current mental health training and clinical approaches have proven to be inadequate in addressing the complex nature of voice hearing. Childhood trauma is a key factor in the development of AVH and can render people more vulnerable to hearing both supportive and/or disturbing voices. Lived experiences of racism, poverty, and immigration are also associated with development of what is broadly classified as psychosis. Despite evidence affirming the influence of environmental factors on voice hearing, the Western biomedical system typically conceptualizes this experience as a symptom of genetically-based mental illnesses which requires diagnosis and treatment. Overemphasis on psychiatric medications, referrals, and directive approaches to people’s problems has shifted clinical interventions away from assessing and addressing problems directly related to AVH. The Maastricht approach offers voice hearers and mental health workers an alternative and respectful starting point for understanding and coping with voices. The approach was developed by voice hearers in partnership with mental health professionals and entails an innovative method to assess and create meaning from voice hearing and related life stressors. The objectives of the approach are to help people who hear voices: (1) understand the problems and/or people the voices may represent in their history, and (2) cope with distress and find solutions to related problems. The Maastricht approach has also been found to help voice hearers integrate emotional conflicts, reduce avoidance or fear associated with AVH, improve therapeutic relationships, and increase a sense of control over internal experiences. 
The proposed oral presentation will be guided by a recovery-oriented theoretical framework, which suggests that healing from psychological wounds occurs through social connections and community support systems. The presentation will start with a brainstorming exercise to identify participants' pre-existing knowledge of the subject matter. This will lead into a literature review on the relations between trauma, intersectionality, and AVH. An overview of the Maastricht approach and a review of research related to its therapeutic risks and benefits will follow. Participants will learn trauma-informed coping skills and questions that can help voice hearers make meaning from their experiences. The presentation will conclude with a review of resources and learning opportunities where participants can expand their knowledge of the Hearing Voices Movement and the Maastricht approach.
Keywords: Maastricht interview, recovery, therapeutic assessment, voice hearing
Procedia PDF Downloads 114
677 Ex-vivo Bio-distribution Studies of a Potential Lung Perfusion Agent
Authors: Shabnam Sarwar, Franck Lacoeuille, Nadia Withofs, Roland Hustinx
Abstract:
After the development of a potential surrogate of MAA and its successful application for the diagnosis of pulmonary embolism in artificially embolized rat lungs, this microparticulate system was radiolabelled with gallium-68 to synthesize 68Ga-SBMP with high radiochemical purity (>99%). As a prerequisite step for clinical trials, the 68Ga-labelled starch-based microparticles (SBMP) were analysed for their in-vivo behavior in small animals. The presented work covers the ex-vivo biodistribution of 68Ga-SBMP: activity uptake in target organs with respect to time, excretion pathways of the radiopharmaceutical, %ID/g in major organs, target-to-non-target (T/NT) ratios, and the in-vivo stability of the radiotracer and, subsequently, of the microparticles in the target organs. Radiolabelling of the starch-based microparticles was performed by incubating them with 68Ga generator eluate (430±26 MBq) at room temperature and pressure, without any harsh reaction conditions. For the ex-vivo biodistribution studies, healthy White Wistar rats weighing 345-460 g were injected intravenously with 68Ga-SBMP (20±8 MBq, containing about 200,000-600,000 SBMP particles in a volume of 700 µL). The rats were euthanized at predefined time intervals (5 min, 30 min, 60 min and 120 min); their organs were cut, washed, put into pre-weighed tubes and measured for radioactivity counts in an automatic gamma counter. The 68Ga-SBMP showed >99% radiochemical purity after just 10-20 min of incubation, through a simple and robust procedure. The biodistribution of 68Ga-SBMP showed that at 5 min post-injection the major uptake was in the lungs, followed by blood, heart, liver, kidneys, bladder, urine, spleen, stomach, small intestine, colon, skin and skeleton, and thymus, with the smallest activity found in the brain. Radioactivity counts in the lungs stayed stable, decreasing gradually with time; at 2 h post-injection, almost half of the activity was still present in the lungs.
This is sufficient time to perform PET/CT lung scanning in humans, while activity in the liver, spleen, gut and urinary system decreased with time. The results showed that the urinary system, rather than the hepatobiliary route, is the main excretion pathway. The high T/NT ratios suggest sharp images for PET/CT lung perfusion studies; hence, further pre-clinical studies and then clinical trials should be planned in order to exploit this potential lung perfusion agent.
Keywords: starch based microparticles, gallium-68, biodistribution, target organs, excretion pathways
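The %ID/g values reported in such biodistribution studies follow from a simple calculation. The sketch below shows one conventional way to compute them, assuming counts are decay-corrected to the injection time; the function names and all numbers are illustrative, not taken from this study (only the ~67.7 min physical half-life of gallium-68 is a known constant).

```python
import math

GA68_HALF_LIFE_MIN = 67.7  # approximate physical half-life of gallium-68

def decay_correct(counts, elapsed_min, half_life_min=GA68_HALF_LIFE_MIN):
    """Correct measured counts back to the time of injection."""
    return counts * math.exp(math.log(2) * elapsed_min / half_life_min)

def percent_id_per_gram(organ_counts, organ_mass_g, injected_counts, elapsed_min):
    """Percent injected dose per gram of tissue, decay-corrected."""
    corrected = decay_correct(organ_counts, elapsed_min)
    return 100.0 * corrected / injected_counts / organ_mass_g

# Illustrative numbers only (not data from the study):
# 50,000 counts in a 1.2 g lung sample, 1,000,000 injected counts, 60 min post-injection
print(round(percent_id_per_gram(50_000, 1.2, 1_000_000, 60), 2))  # → 7.7
```

T/NT ratios then follow directly as the quotient of the %ID/g of the target organ (lungs) and that of a reference tissue.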
Procedia PDF Downloads 173
676 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases
Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar
Abstract:
The farming community in India, as in other parts of the world, is highly stressed due to increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, in extreme cases leading to farmer suicides. The lack of an integrated farm advisory system in India adds to farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases from images taken by farmers with their smartphones. The research work leads to a smart assistant, built on analytics and big data, that helps farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet-style classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropout (to avoid overfitting). Models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt weights learnt on the ImageNet dataset to crop diseases, which reduces the number of epochs needed. One-shot learning is used to learn from very few images, while data augmentation techniques (rotation, zoom, shift and blurring) improve accuracy on images taken in the field. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated on the tomato crop, which in India is affected by 10 different diseases, and achieves an accuracy of more than 95% in correctly classifying the diseases.
The main contribution of our research is a personal assistant for farmers for managing plant disease; although the model was validated on the tomato crop, it can easily be extended to other crops. Advances in computing and the availability of large datasets have enabled the success of deep learning in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementation is high, resulting in timely advice to farmers, increased farm income, and reduced input costs.
Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning
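The augmentation step described above (rotation, shift, blur) is easy to make concrete. The following is a minimal NumPy-only sketch of that idea, not the authors' code: it uses 90° rotations as a cheap stand-in for arbitrary-angle rotation, and a 3x3 box blur in place of a proper Gaussian; zoom is omitted for brevity.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly augmented copy of an (H, W) grayscale image,
    mirroring the transforms mentioned above: rotation, shift, blur."""
    out = image.copy()
    # random 90-degree rotation (stand-in for arbitrary-angle rotation)
    out = np.rot90(out, k=rng.integers(0, 4))
    # random circular shift of up to 2 pixels along each axis
    out = np.roll(out, shift=(rng.integers(-2, 3), rng.integers(-2, 3)), axis=(0, 1))
    # apply a 3x3 box blur half of the time
    if rng.random() < 0.5:
        padded = np.pad(out, 1, mode="edge")
        out = sum(padded[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))                       # toy stand-in for a leaf image
batch = np.stack([augment(img, rng) for _ in range(16)])
print(batch.shape)  # → (16, 8, 8)
```

In practice one would generate such variants on the fly during training, so each epoch sees slightly different versions of the same field photographs.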
Procedia PDF Downloads 119
675 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including the recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing mixtures such as electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in simulations and the repulsion distance in experiments, confirming the validity of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined, offering valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy currents and the field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and the magnetic field gradient increase with particle size. Building on this, we further found that increasing the curvature of the magnetic field lines within particles also increases the eddy current force, providing an effective way to improve the separation efficiency of fine particles.
By combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density of the materials, as well as by exploiting the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can be fixed accordingly. In summary, these results can guide the design and optimization of ECS and expand its application areas.
Keywords: eddy current separation, particle size, numerical simulation, metal recovery
Procedia PDF Downloads 89
674 Monitoring Soil Moisture Dynamic in Root Zone System of Argania spinosa Using Electrical Resistivity Imaging
Authors: F. Ainlhout, S. Boutaleb, M. C. Diaz-Barradas, M. Zunzunegui
Abstract:
Argania spinosa is an endemic tree of southwest Morocco, occupying 828,000 ha distributed mainly between Mediterranean vegetation and the desert. It can grow in extremely arid regions where annual rainfall ranges between 100 and 300 mm and no other tree species can survive, and the area has been designated a UNESCO Biosphere Reserve since 1998. The Argania tree is of great importance for human and animal feeding of the rural population, as well as for oil production; it is considered a multi-usage tree. Admine forest, located in the suburbs of Agadir city, 5 km inland, was selected for this work. The aim of the study was to investigate the temporal variation of root-zone moisture in response to variations in climatic conditions and vegetation water uptake, using a geophysical technique called electrical resistivity imaging (ERI). This technique discriminates between resistive woody roots, dry soil and moist soil. Time-dependent measurements (from April till July) of resistivity sections were performed along a 94 m surface transect with a fixed electrode spacing of 2 m. The transect included eight Argan trees. The interactions between the trees and soil moisture were estimated by following the variations in tree water status accompanying the soil moisture deficit. For that purpose, we measured midday leaf water potential and relative water content on each sampling day for the eight trees. The first results showed that ERI can accurately quantify the spatiotemporal distribution of root-zone moisture content and woody roots. The section obtained shows three different layers: a conductive middle layer (moist soil); on top of it, a moderately resistive layer corresponding to relatively dry soil (a calcareous formation with intercalations of marly strata), interspersed with very resistive zones corresponding to woody roots; and below the conductive layer, another moderately resistive layer.
We note that throughout the experiment there was a continuous decrease in soil moisture in the different layers. With ERI, we can clearly estimate the depth of the woody roots, which does not exceed 4 meters. In previous work on the same species, analyzing δ18O in xylem water and in the range of possible water sources, we argued that rain is the main water source in winter and spring but not in summer; the trees do not exploit deep water from the aquifer, as popularly assumed, but instead use soil water at a few meters' depth. The results of the present work confirm that the roots of Argania spinosa do not grow very deep.
Keywords: Argania spinosa, electrical resistivity imaging, root system, soil moisture
Procedia PDF Downloads 328
673 Use of PACER Application as Physical Activity Assessment Tool: Results of a Reliability and Validity Study
Authors: Carine Platat, Fatima Qshadi, Ghofran Kayed, Nour Hussein, Amjad Jarrar, Habiba Ali
Abstract:
Nowadays, smartphones are very popular, offering a variety of easy-to-use and free applications, among them step counters and fitness tests. The number of users is huge, making such applications a potentially efficient new strategy to encourage people to become more active. Nonetheless, data on their reliability and validity are very scarce and, when available, are often negative and contradictory. Besides, weight status, which is likely to bias physical activity assessment, has rarely been considered. Hence, the use of these applications as motivational tools, as assessment tools, and in research is questionable. PACER is one such free step-counter application. Even though it is one of the best-rated free applications, it has never been tested for reliability and validity; prior to any use of PACER, this needs to be investigated. The objective of this work is to investigate the reliability and validity of the smartphone application PACER in measuring the number of steps and in assessing cardiorespiratory fitness via the 6-minute walking test. Twenty overweight or obese students (10 male and 10 female), aged between 18 and 25 years, were recruited at United Arab Emirates University. Reliability and validity were tested in real-life conditions and in controlled conditions on a treadmill. Test-retest experiments were done with PACER on two days separated by a week, in real-life conditions (24 hours each time) and in controlled conditions (30 minutes on a treadmill at 3 km/h). Validity was tested against the OMRON pedometer under the same conditions. During the treadmill test, video was recorded and step counts were compared between PACER, the pedometer and the video. The validity of PACER in estimating cardiorespiratory fitness (VO2max) as part of the 6-minute walking test (6MWT) was studied against the 20 m shuttle running test.
Reliability was studied by calculating intraclass correlation coefficients (ICC) with 95% confidence intervals (95% CI) and by Bland-Altman plots. Validity was studied by calculating Spearman correlation coefficients (rho) and Bland-Altman plots. PACER reliability was good in both male and female participants in real-life conditions (p≤10-3) but only in female participants in controlled conditions (p=0.01). PACER was valid against the OMRON pedometer in male and female participants in real-life conditions (rho=0.94, p≤10-3; rho=0.64, p=0.01, respectively). In controlled conditions, PACER was not valid against the pedometer, but it was valid against video in female participants (rho=0.72, p≤10-3). PACER was valid against the shuttle run test in male and female participants (rho=0.66, p=0.01; rho=0.51, p=0.04) for estimating VO2max. This study provides data on the reliability and validity of PACER in overweight or obese male and female young adults. Globally, PACER was shown to be reliable and valid in real-life conditions in overweight or obese males and females for counting steps and assessing fitness. This supports the use of PACER to assess and promote physical activity in clinical follow-up and community interventions.
Keywords: smartphone application, pacer, reliability, validity, steps, fitness, physical activity
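The Bland-Altman analysis used above reduces to the mean bias between the two methods and its 95% limits of agreement (bias ± 1.96 SD of the paired differences). A minimal sketch, with step counts that are purely illustrative (not data from the study):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Mean bias and 95% limits of agreement between two measurement
    methods (e.g., an app's step count vs. a reference pedometer)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# illustrative paired daily step counts (app vs. pedometer)
pacer_counts = [10250, 8400, 12100, 9900, 11050]
pedometer_counts = [10100, 8550, 12000, 10050, 10900]
bias, lower, upper = bland_altman_limits(pacer_counts, pedometer_counts)
print(round(bias, 1))  # → 20.0
```

In a Bland-Altman plot, each pair's difference is then plotted against its mean, with horizontal lines at the bias and at the two limits.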
Procedia PDF Downloads 452
672 Deasphalting of Crude Oil by Extraction Method
Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov
Abstract:
Asphaltenes are the heavy fraction of crude oil. In oilfields, asphaltenes are known for their ability to plug wells, surface equipment and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). Metal content was analysed by ICP-MS, and the spectral features of deasphalting were characterized by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, the high content of heteroatoms (e.g., S, N) in asphaltenes causes further problems: environmental pollution, corrosion and catalyst poisoning. The main objective of this work is to study the effect of the deasphalting process on crude oil, in order to improve its properties and the efficiency of downstream refining. Solvent extraction experiments with organic solvents were carried out on crude oil from JSC "Pavlodar Oil Chemistry Refinery". Experimental results show that the deasphalting process also decreases the Ni and V content of the oil. One approach to cleaning oils of metals, hydrogen sulfide and mercaptans is absorption with chemical reagents directly in the oil residue, since asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective refining. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-bearing asphaltene part. The oil is therefore pretreated by deasphalting, because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e., removal of V/Ni and of organic compounds with heteroatoms. Intramolecular complexes are relatively well studied, for example the porphyrin complexes of vanadyl (VO) and nickel (Ni).
ICP-MS analysis of V and Ni determined the effect of the different deasphalting solvents on metal extraction at the deasphalting stage and allowed selection of the best organic solvent. Cyclohexane (C6H12) proved the best, removing, according to ICP-MS, 51.2% of V and 66.4% of Ni. This paper also presents the results of a study of the physical and chemical properties and the FTIR spectral characteristics of the oil, with a view to establishing its hydrocarbon composition. The information obtained by IR spectroscopy gives a provisional physical and chemical characterization of the whole oil. It can be useful when considering the origin and geochemical conditions of oil accumulation, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stabilization mechanism of asphaltenes, and the role of deasphalted crude oil fractions in asphaltene stability is described.
Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy
Procedia PDF Downloads 242
671 Association between TNF-α and Its Receptor TNFRSF1B Polymorphism with Pulmonary Tuberculosis in Tomsk, Russia Federation
Authors: K. A. Gladkova, N. P. Babushkina, E. Y. Bragina
Abstract:
Purpose: Tuberculosis (TB), caused by Mycobacterium tuberculosis, is one of the major public health problems worldwide. The immune response to M. tuberculosis infection is a balance between inflammatory and anti-inflammatory responses, in which Tumour Necrosis Factor-α (TNF-α) plays a key role as a pro-inflammatory cytokine. TNF-α is involved in various cellular immune responses via binding to its two types of membrane-bound receptors, TNFRSF1A and TNFRSF1B. Importantly, some variants of the TNFRSF1B gene have been considered possible markers of host susceptibility to TB. However, data on the possible impact of these TNF-α and receptor gene polymorphisms on TB cases in Tomsk are missing. Thus, the purpose of our study was to investigate polymorphisms of the TNF-α (rs1800629) and TNFRSF1B (rs652625 and rs525891) genes in the population of Tomsk and to evaluate their possible association with the development of pulmonary TB. Materials and Methods: The population distribution of the polymorphisms was investigated in a case-control study of people from Tomsk. Blood was collected during routine patient examinations at the Tomsk Regional TB Dispensary. Altogether, 234 TB-positive patients (80 women, 154 men, average age 28 years) and 205 healthy controls (153 women, 52 men, average age 47 years) were investigated. DNA was extracted from blood plasma by the phenol-chloroform method. Genotyping was carried out by a single-nucleotide-specific real-time PCR assay. Results: First, an interpopulation comparison was carried out between healthy individuals from Tomsk and available data from the 1000 Genomes project. For the rs1800629 region, the Tomsk population differed significantly from the Japanese population (P = 0.0007) but was similar to the following European subpopulations: Italians (P = 0.052), Finns (P = 0.124) and British (P = 0.910).
For polymorphism rs525891, the Tomsk group differed significantly from the South African population (P = 0.019), while rs652625 showed significant differences from Asian populations: Chinese (P = 0.03) and Japanese (P = 0.004). Next, we compared healthy individuals with TB patients. No association was detected between the rs1800629 and rs652625 polymorphisms and TB cases. Importantly, the AT genotype of polymorphism rs525891 was significantly associated with resistance to TB (odds ratio (OR) = 0.61; 95% confidence interval (CI): 0.41-0.9; P < 0.05). Conclusion: To the best of our knowledge, the TNFRSF1B polymorphism rs525891 is associated with TB in the Tomsk population, the AT genotype being protective [OR = 0.61]. In contrast, no significant correlation was detected between the TNF-α (rs1800629) and TNFRSF1B (rs652625) polymorphisms and pulmonary TB cases in the population of Tomsk. In conclusion, our data expand the molecular particularities associated with TB. The study was supported by Russian Foundation for Basic Research grant #15-04-05852.
Keywords: polymorphism, tuberculosis, TNF-α, TNFRSF1B gene
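The odds ratio and its 95% CI reported above are the standard case-control statistics for a 2x2 genotype-by-status table. A minimal sketch of that calculation follows; the genotype counts used are purely illustrative, since the study's tables are not reproduced here.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = carrier cases,     b = carrier controls,
    c = non-carrier cases, d = non-carrier controls."""
    odds_ratio = (a * d) / (b * c)
    # standard error of ln(OR) via the Woolf (logit) method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se)
    upper = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lower, upper

# illustrative counts only, chosen to give a protective OR below 1
odds_ratio, lower, upper = odds_ratio_ci(60, 80, 174, 125)
print(round(odds_ratio, 2), round(lower, 2), round(upper, 2))
```

An OR below 1 with an upper CI bound below 1, as for the AT genotype above, is what supports calling a genotype protective.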
Procedia PDF Downloads 179
670 Peculiarities of Snow Cover in Belarus
Authors: Aleh Meshyk, Anastasiya Vouchak
Abstract:
On average, snow covers Belarus for 75 days in the south-west and 125 days in the north-east. During the cold season the snowpack is often destroyed by thaws, especially at the beginning and end of winter. Over 50% of thaw days have a positive mean daily temperature, which results in complete snow melting; for instance, in December 10% of thaws occur at a mean daily temperature of 4 °C. A stable snowpack lying for over a month forms in the north-east in the first decade of December but in the south-west only in the third decade of December. The cover disappears in March: in the north-east in the last decade, in the south-west in the first decade. This research takes into account that precipitation falling during the cold season can be not only liquid or solid but also of mixed type (about 10-15% a year). Another important feature of snow cover is its density. In Belarus, the density of freshly fallen snow ranges from 0.08-0.12 g/cm³ in the north-east to 0.12-0.17 g/cm³ in the south-west. Over time, snow settles under its own weight and after melting and refreezing. The average density of snow at the end of January is 0.23-0.28 g/cm³, in February 0.25-0.30 g/cm³, and in March 0.29-0.36 g/cm³; it can exceed 0.50 g/cm³ if the snow melts quickly, and melting snow saturated with water can reach 0.80 g/cm³. The average maximum snow depth is 15-33 cm: the minimum is in Brest, the maximum in Lyntupy. The maximum registered snow depth ranges within 40-72 cm. The water content of the snowpack, like its depth and density, reaches its maximum in the second half of February to the beginning of March. The spatial distribution of the amount of liquid water stored in snow follows the trend described above, i.e., it increases from south-west to north-east and on the highlands. The average annual maximum water content in snow ranges from 35 mm in the south-west to 80-100 mm in the north-east, and exceeds 80 mm on the central Belarusian highland.
In certain years it exceeds the average annual values by a factor of 2-3. A moderate water content in snow (80-95 mm) is characteristic of the western highlands. The maximum water content in snow varies across the country from 107 mm (Brest) to 207 mm (Novogrudok). It also varies significantly from year to year, which is confirmed by the high variation coefficient (Cv): maxima (0.62-0.69) occur in the south and south-west of Belarus, minima (0.42-0.46) in central and north-eastern Belarus, where the snow cover is more stable. Since 1987 most gauge stations in Belarus have observed a trend towards a decrease in the water content of snow, which is confirmed by this research. The deepest snow cover forms on the highlands of central and north-eastern Belarus. The Novogrudok, Minsk, Volkovysk, and Sventayny highlands form a natural orographic barrier that prevents snow-bringing air masses from penetrating into the interior of the country. The research is based on data from gauge stations in Belarus registered from 1944 to 2014.
Keywords: density, depth, snow, water content in snow
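The water content (snow water equivalent) figures above follow directly from depth and density: SWE in millimetres is depth in cm times density in g/cm³ (i.e., relative to water) times 10. A one-line sketch, using the depth and density values quoted above:

```python
def swe_mm(depth_cm, density_g_cm3):
    """Snow water equivalent in mm of liquid water:
    depth (cm) x density relative to water (g/cm3) x 10."""
    return depth_cm * density_g_cm3 * 10.0

# e.g., a 33 cm snowpack at the early-March density of 0.30 g/cm3:
print(swe_mm(33, 0.30))  # ≈ 99 mm, consistent with the 80-100 mm quoted above
```

The same formula explains why the water content peaks in late February to early March: depth is still near its maximum while density has already risen.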
Procedia PDF Downloads 161
669 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
The solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow uses the D3Q19 lattice, while the particle model employs the D3Q27 lattice. Particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, taking all external forces into account. Previous models redistributed particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory and computational time required for 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. Simulations were conducted for L/D = 2, 4 and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of the Stokes number on the particle deposition profile was studied at each L/D ratio. For comparison, an in-house serial CPU code was also developed, coupling LBM with a classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The GPU approach achieves a speedup ratio of about 350 over the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
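The core idea of the velocity-weighted CA rule described above can be illustrated in isolation. The following is a deliberately simplified 1D serial toy, not the D3Q27 GPU implementation: each particle at a node moves to a neighbor or stays, with probabilities weighted by the local fluid velocity, so that both the local velocity and the particle count at each node enter the redistribution. All names and parameters are illustrative.

```python
import numpy as np

def redistribute(particles, velocity, rng, v_max=1.0):
    """One probabilistic transport step on a 1D periodic lattice.
    Each of the `particles[i]` particles at node i moves left, stays,
    or moves right with probabilities weighted by the local velocity."""
    n = particles.size
    new = np.zeros_like(particles)
    for i in range(n):
        v = float(np.clip(velocity[i] / v_max, -1.0, 1.0))
        p_right = max(v, 0.0)          # rightward bias for positive velocity
        p_left = max(-v, 0.0)          # leftward bias for negative velocity
        p_stay = 1.0 - p_right - p_left
        moves = rng.choice([-1, 0, 1], size=int(particles[i]),
                           p=[p_left, p_stay, p_right])
        for m in moves:
            new[(i + m) % n] += 1      # periodic boundary
    return new

rng = np.random.default_rng(42)
parts = np.full(10, 100)               # 100 particles at each of 10 nodes
vel = np.full(10, 0.5)                 # uniform rightward flow
parts = redistribute(parts, vel, rng)
print(parts.sum())  # → 1000 (particle number is conserved)
```

In the full 3D model the same rule runs per lattice node in a CUDA kernel, with cuRAND supplying the per-thread random numbers and the move probabilities built from the LES-resolved velocity plus external forces.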
Procedia PDF Downloads 207