Search results for: methods comparison
17909 Distribution of Phospholipids, Cholesterol and Carotenoids in Two-Solvent System during Egg Yolk Oil Solvent Extraction
Authors: Aleksandrs Kovalcuks, Mara Duma
Abstract:
Egg yolk oil is a concentrated source of egg bioactive compounds, such as fat-soluble vitamins, phospholipids, cholesterol, carotenoids and others. To extract lipids and other fat-soluble nutrients from liquid egg yolk, a two-step extraction process involving polar (ethanol) and non-polar (hexane) solvents was used. This extraction technique is based on the polarities of egg yolk bioactive compounds: non-polar compounds are extracted into the non-polar hexane phase, while polar compounds pass into the polar alcohol/water phase. However, many egg yolk bioactive compounds are neither strongly polar nor strongly non-polar. Egg yolk phospholipids, cholesterol and pigments are amphipathic (they have both polar and non-polar regions), and their behavior in an ethanol/hexane solvent system is not clear. The aim of this study was to clarify the behavior of phospholipids, cholesterol and carotenoids during extraction of egg yolk oil with ethanol and hexane and to determine the loss of these compounds from egg yolk oil. Egg yolks and egg yolk oil were analyzed for phospholipid (phosphatidylcholine (PC) and phosphatidylethanolamine (PE)), cholesterol and carotenoid (lutein, zeaxanthin, canthaxanthin and β-carotene) content using GC-FID and HPLC methods. PC and PE are polar lipids and were extracted into the polar ethanol phase. The concentration of PC in the ethanol phase was 97.89% and that of PE 99.81% of the total egg yolk phospholipids. Due to cholesterol's partial extraction into ethanol, the cholesterol content of egg yolk oil was reduced in comparison to its total content present in egg yolk lipids. The highest amounts of lutein and zeaxanthin were concentrated in the ethanol extract. The opposite was observed for canthaxanthin and β-carotene, which became the main pigments of egg yolk oil.
Keywords: cholesterol, egg yolk oil, lutein, phospholipids, solvent extraction
Procedia PDF Downloads 509
17908 Describing Professional Purchasers' Performance Applying the 'Big Five Inventory': Findings from a Survey in Austria
Authors: Volker Koch, Sigrid Swobodnik, Bernd M. Zunk
Abstract:
The success of companies on globalized markets is significantly influenced by the performance of purchasing departments and, of course, the individuals employed as professional purchasers. Although this is generally accepted in practice, in the literature as well as in empirical research only insufficient attention has been given to the assessment of the relationship between the personality of professional purchasers and their individual performance. This paper aims to describe that relationship against the background of the 'Big Five Inventory'. Based on the five dimensions of personality (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism), a research model was designed. The research model divides the individual performance of professional purchasers into two major dimensions: operational and strategic. The operational dimension consists of the items 'cost', 'quality', 'delivery' and 'flexibility'; the strategic dimension comprises the positions 'innovation', 'supplier satisfaction' as well as 'purchasing and supply management integration in the organization'. To test the research model, a survey study was performed, and an online questionnaire was sent out to purchasing professionals in Austrian companies. The data collected from 78 responses were used to test the research model applying a group comparison. The comparison points out that there is (i) an influence of the purchasers' personality on the individual performance of professional purchasers and (ii) a link between purchasers' personality and a high or a low individual performance of professional purchasers. The findings of this study may help human resource managers during staff recruitment processes to identify the 'right performing personality' for an operational and/or a strategic position in purchasing departments.
Keywords: big five inventory, individual performance, personality, purchasing professionals
Procedia PDF Downloads 170
17907 Prioritization Ranking for Managing Moisture Problems in a Building
Authors: Sai Amulya Gollapalli, Dilip A. Patel, Parth Patel K., Lukman E. Mansuri
Abstract:
Accumulation of moisture is one of the most worrisome aspects of a building. Architects and engineers tend to ignore its importance during the design and construction stages, yet it can cause major failures in buildings. People avoid spending much money on waterproofing, and when the same mistake is repeated, little deep thinking is done about it. The quality of workmanship and construction is declining due to negligence. It is important to analyze the water maintenance issues occurring in current buildings and to provide a database of all the factors that cause the defects. In this research, surveys were conducted with two waterproofing consultants, two client engineers, and two project managers. The survey was based on a matrix of the causes of water maintenance issues. Around 100 causes were identified. The causes were categorized into six groups, namely manpower, finance, method, management, environment, and material. In the matrices, the causes along the x-direction were matched with the causes along the y-direction, and a 3-point Likert scale was used to make a pairwise comparison between the causes in each cell. Matrices were evaluated for the main categories and for each category separately. A final ranking was done using the weights obtained: 'cracks arising from various construction joints' was ranked highest with a relative significance of 0.57, and 'usage of the material' was ranked lowest with a relative significance of 0.03. Twelve defects due to water leakage were identified, and interviewees were asked to make a pairwise comparison of them, too, to understand the priorities. Once the list of causes is obtained, prioritization as per the stratification analysis is done. This will be beneficial to consultants and contractors, as they will get a primary idea of which causes to focus on.
Keywords: water leakage, survey, causes, matrices, prioritization
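The abstract above describes turning a matrix of pairwise Likert judgements into relative significance weights. A minimal sketch of one common way to do this (row-sum normalisation of the comparison matrix) is shown below; the cause names and scores are invented placeholders, not the survey's actual data.

```python
# Sketch: pairwise comparison matrix -> relative significance weights.
import numpy as np

causes = ["construction joint cracks", "poor workmanship",
          "usage of the material", "no waterproofing budget"]

# scores[i, j]: importance of cause i relative to cause j on a 1-3 Likert scale (example values)
scores = np.array([
    [0, 3, 3, 2],
    [1, 0, 3, 2],
    [1, 1, 0, 1],
    [2, 2, 3, 0],
], dtype=float)

weights = scores.sum(axis=1) / scores.sum()   # normalised row sums = relative significance
for cause, w in sorted(zip(causes, weights), key=lambda t: -t[1]):
    print(f"{cause}: {w:.2f}")
```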
Procedia PDF Downloads 99
17906 Effect of Farsi gum (Amygdalus Scoparia Spach) in Combination with Sodium Caseinate on Textural, Stability, Sensory Characteristics and Rheological Properties of Whipped Cream
Authors: Samaneh Mashayekhi
Abstract:
Cream (whipped cream) is one of the dairy products that can be used in desserts, pastries, cakes, and ice creams. In this product, some parameters such as taste and flavor, quality stability, whipping ability, and stability of the foam after whipping are very important. The objective of this study was to apply Farsi gum and sodium caseinate at three biopolymer ratios (1:1, 1:2, and 2:1) and 0.15, 0.30, and 0.45 wt.% concentrations in a whipped cream formulation. A sample without hydrocolloids was considered as a control. Before whipping, the viscosity of all creams increased continuously with increasing shear rate. In addition, viscosity increased with increasing hydrocolloid addition (at constant shear rate). Microscopic observations showed the polydispersity of the systems before whipping. The overrun of the F, FC11, and FC21 samples increased as the total hydrocolloid concentration increased from 0.15 to 0.30 wt.% and then decreased as the concentration increased to 0.45 wt.%. However, mean comparison of the FC12 samples' overrun showed that this value increased with increasing total hydrocolloid concentration. The 0.45FC21 sample had the significantly (P<0.05) highest overrun (118.44±9.11). Syneresis of the whipped cream samples was reduced by hydrocolloid addition. The B sample had the significantly (P<0.05) highest serum separation (16.66±0.80%), and 0.45FC12 had the lowest (5.94±0.19%) compared with the other samples. Mean comparison of the hardness and adhesiveness of the whipped cream revealed that Farsi gum addition, alone and in combination with sodium caseinate, increased these textural characteristics. Results showed that 0.4FG12 had the significantly (P<0.05) highest hardness (267.00±18.38 g). Mean comparison of the droplet size of the cream samples before whipping showed that hydrocolloid addition had no significant effect (P>0.05), and the mean droplet size of the samples ranged between 1.93-2.16 µm. Generally, the mean droplet size of the whipped cream increased after whipping with increasing hydrocolloid concentration (0.15-0.45 wt.%). Color parameter analysis showed that Farsi gum addition, alone and in combination with sodium caseinate, had no significant effect (P>0.05) on the color parameters (lightness, redness, and yellowness). Based on the sensory evaluation results, the appearance, color, flavor, and taste of the whipped creams were not influenced by hydrocolloid addition, but the 0.45FC12 sample had a higher value. Based on the above results, Farsi gum is suggested for potential application in whipped cream formulations; however, further research is needed to establish its functionality.
Keywords: whipped cream, farsi gum, sodium caseinate, overrun, droplet size, texture analysis, sensory evaluation
Procedia PDF Downloads 98
17905 Modeling Local Warming Trend: An Application of Remote Sensing Technique
Authors: Khan R. Rahaman, Quazi K. Hassan
Abstract:
Global changes in climate, environment, economies, populations, governments, institutions, and cultures converge in localities. Changes at a local scale, in turn, contribute to global changes as well as being affected by them. Our hypothesis is built on the consideration that temperature varies at the local level (termed here local warming) in comparison to the predictions of models at the regional and/or global scale. To date, the bulk of the research relating local places to global climate change has been top-down, from the global toward the local, concentrating on methods of impact analysis that use as a starting point climate change scenarios derived from global models, even though these have little regional or local specificity. Thus, our focus is to understand such trends over southern Alberta, which will enable decision makers, scientists, the research community, and local people to adapt their policies based on local-level temperature variations and to act accordingly. The specific objectives of this study are: (i) to understand the local warming (temperature in particular) trend in the context of temperature normals during the period 1961-2010 at point locations using meteorological data; (ii) to validate the data by using specific yearly data; and (iii) to delineate the spatial extent of the local warming trends and to understand the influential factors so that local governments can adapt to the situation. Existing data have provided evidence of such changes, and future research emphasis will be given to validating this hypothesis based on remotely sensed data (i.e., MODIS products by NASA).
Keywords: local warming, climate change, urban area, Alberta, Canada
Procedia PDF Downloads 346
17904 Comparison of Corneal Curvature Measurements Conducted with Tomey AO-2000® and the Current Standard Biometer IOL Master®
Authors: Mohd Radzi Hilmi, Khairidzan Mohd Kamal, Che Azemin Mohd Zulfaezal, Ariffin Azrin Esmady
Abstract:
Purpose: Corneal curvature (CC) is an important anterior segment parameter. This study compared CC measurements conducted with two optical devices in phakic eyes. Methods: Sixty phakic eyes of 30 patients were enrolled in this study. CC was measured three times with the optical biometer and topography-keratometer Tomey AO-2000 (Tomey Corporation, Nagoya, Japan), then with the standard partial coherence interferometry (PCI) biometer IOL Master (Carl Zeiss Meditec, Dublin, CA), and the data were statistically analysed. Results: The measurements resulted in a mean CC of 43.86 ± 1.57 D with the Tomey AO-2000 and 43.84 ± 1.55 D with the IOL Master. The distribution of the data was normal, and no significant difference in CC values was detected (P = 0.952) between the two devices. The correlation between CC measurements was highly significant (r = 0.99; P < 0.0001). The mean difference in CC values between the devices was 0.017 D, and the 95% limits of agreement were -0.088 to 0.12 D. The duration of measurement with the standard biometer IOL Master was longer (55.17 ± 2.24 seconds) than with the Tomey AO-2000 (39.88 ± 2.38 seconds) in automatic mode. The duration of measurement with the Tomey AO-2000 in manual mode was the shortest (28.57 ± 2.71 seconds). Conclusion: In phakic eyes, CC measured with the Tomey AO-2000 and the IOL Master showed similar values, and a high correlation was observed between these two devices. This shows that both devices can be used interchangeably. The Tomey AO-2000 is faster to operate and has its own topography system.
Keywords: corneal topography, corneal curvature, IOL Master, Tomey AO2000
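As a quick illustration of the agreement statistics reported above (mean difference, 95% limits of agreement, correlation), the sketch below computes them for simulated paired readings; the numbers are placeholders, not the study data.

```python
# Sketch: Bland-Altman-style agreement between two devices measuring the same eyes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_eyes = 60
tomey = rng.normal(43.86, 1.57, n_eyes)                 # simulated Tomey AO-2000 readings (D)
iol_master = tomey - rng.normal(0.017, 0.05, n_eyes)    # simulated IOL Master readings (D)

diff = tomey - iol_master
bias = diff.mean()                                       # mean difference between devices
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))  # 95% limits of agreement
r, p_r = stats.pearsonr(tomey, iol_master)
t, p_diff = stats.ttest_rel(tomey, iol_master)           # paired test of the mean difference

print(f"mean difference = {bias:.3f} D, 95% LoA = {loa[0]:.3f} to {loa[1]:.3f} D")
print(f"Pearson r = {r:.3f} (p = {p_r:.2g}), paired t-test p = {p_diff:.2g}")
```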
Procedia PDF Downloads 387
17903 Evaluating Models Through Feature Selection Methods Using Data Driven Approach
Authors: Shital Patil, Surendra Bhosale
Abstract:
Cardiac diseases are the leading causes of mortality and morbidity in the world; over the recent few decades, accounting for a large number of deaths, they have emerged as the most life-threatening disorders globally. Machine learning and artificial intelligence have been playing a key role in predicting heart diseases. A relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance with both the raw (unbalanced) and sampled (balanced) datasets. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are used in this study: data analysis, minimum Redundancy Maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and Chi-squared. These methods are tested with 8 different classification models to get the best accuracy possible. Using the balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics in accurately predicting heart disease. Experimental results obtained with the raw data give a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%, while with the balanced dataset the obtained results are a maximum AUC of 100%, an F1-score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
Keywords: cardio vascular diseases, machine learning, feature selection, SMOTE
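A minimal sketch of the kind of pipeline the abstract describes is shown below: two of the named feature selection methods (Chi-squared and RFE) are compared on an unbalanced versus a SMOTE-balanced training set. The file name, the target column 'Cath', the assumption that all feature columns are numeric, and the logistic regression classifier are illustrative choices, not the authors' exact setup.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import RFE, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Placeholder file; 'Cath' is assumed to label CAD vs. normal cases, features assumed numeric.
df = pd.read_csv("z_alizadeh_sani.csv")
y = (df["Cath"] == "Cad").astype(int)
X = pd.DataFrame(MinMaxScaler().fit_transform(df.drop(columns="Cath")),
                 columns=df.columns.drop("Cath"))        # chi2 requires non-negative features

selectors = {
    "Chi-squared": SelectKBest(chi2, k=15),
    "RFE": RFE(LogisticRegression(max_iter=2000), n_features_to_select=15),
}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
for name, selector in selectors.items():
    X_tr_sel = selector.fit_transform(X_tr, y_tr)        # fit selection on training data only
    X_te_sel = selector.transform(X_te)
    for balanced in (False, True):
        X_fit, y_fit = (SMOTE(random_state=0).fit_resample(X_tr_sel, y_tr)
                        if balanced else (X_tr_sel, y_tr))
        clf = LogisticRegression(max_iter=2000).fit(X_fit, y_fit)
        pred = clf.predict(X_te_sel)
        proba = clf.predict_proba(X_te_sel)[:, 1]
        print(f"{name} | balanced={balanced} | "
              f"AUC={roc_auc_score(y_te, proba):.2f} F1={f1_score(y_te, pred):.2f} "
              f"Recall={recall_score(y_te, pred):.2f} Precision={precision_score(y_te, pred):.2f}")
```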
Procedia PDF Downloads 118
17902 Preprocessing and Fusion of Multiple Representation of Finger Vein patterns using Conventional and Machine Learning techniques
Authors: Tomas Trainys, Algimantas Venckauskas
Abstract:
The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: they acquire biometric data from an individual, extract feature sets, compare the feature set against the set stored in the vault, and give the result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, namely Convolutional Neural Network (CNN) and Support Vector Machine (SVM) methods, for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method
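The sketch below illustrates the feature-level fusion idea described above: descriptors extracted from several captures of the same finger are concatenated into one vector and matched with an SVM. The toy block-mean descriptor and the simulated images are stand-ins, not the authors' preprocessing or CNN pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def extract_features(image, grid=8):
    """Toy descriptor: mean intensity over a grid x grid arrangement of blocks."""
    h, w = image.shape
    blocks = image[: h // grid * grid, : w // grid * grid]
    bh, bw = blocks.shape
    return blocks.reshape(grid, bh // grid, grid, bw // grid).mean(axis=(1, 3)).ravel()

def fuse_instances(images):
    """Feature-level fusion: concatenate descriptors of several captures."""
    return np.concatenate([extract_features(img) for img in images])

rng = np.random.default_rng(0)
n_subjects, n_samples, n_instances = 20, 10, 3
X, y = [], []
for subject in range(n_subjects):
    template = rng.normal(size=(64, 64))     # stand-in vein pattern for this subject
    for _ in range(n_samples):
        captures = [template + 0.3 * rng.normal(size=(64, 64)) for _ in range(n_instances)]
        X.append(fuse_instances(captures))
        y.append(subject)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.3, stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("identification accuracy on fused features:", clf.score(X_te, y_te))
```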
Procedia PDF Downloads 150
17901 Comparison of the Factor of Safety and Strength Reduction Factor Values from Slope Stability Analysis of a Large Open Pit
Authors: James Killian, Sarah Cox
Abstract:
The use of stability criteria within geotechnical engineering is the way in which the results of analyses are conveyed and sensitivities and risk assessments are performed. Historically, the primary stability criterion for slope design has been the Factor of Safety (FOS) coming from a limit equilibrium calculation. Increasingly, the value derived from Strength Reduction Factor (SRF) analysis is being used as the criterion for stability analysis. The purpose of this work was to study in detail the relationship between SRF values produced from a numerical modeling technique and the traditional FOS values produced from Limit Equilibrium (LEM) analyses. This study utilized a model of a 3000-foot-high slope with a 45-degree slope angle, assuming a perfectly plastic Mohr-Coulomb constitutive model with high cohesion and friction angle values typical of a large hard rock mine slope. A number of variables affecting the values of the SRF in a numerical analysis were tested, including zone size, in-situ stress, tensile strength, and dilation angle. This paper demonstrates that in most cases, SRF values are lower than the corresponding LEM FOS values. Modeled zone size has the greatest effect on the estimated SRF value, which can vary as much as 15% to the downside compared to FOS. For consistency when using SRF as a stability criterion, the authors suggest that numerical model zone sizes should not be constructed to be smaller than about 1% of the overall problem slope height and should not be greater than 2%. Future work could include investigations of the effect of anisotropic strength assumptions or advanced constitutive models.
Keywords: FOS, SRF, LEM, comparison
Procedia PDF Downloads 308
17900 Energy-Saving Methods and Principles of Energy-Efficient Concept Design in the Northern Hemisphere
Authors: Yulia A. Kononova, Znang X. Ning
Abstract:
Nowadays, architectural development is getting faster and faster. Nevertheless, modern architecture often does not meet all the requirements that could help our planet get better. As we know, people consume an enormous amount of energy every day of their lives. Because of this uncontrolled energy usage, people have to increase energy production. As the energy production process demands a lot of fuel sources, it causes many problems such as climate change, environmental pollution, animal extinction, and a lack of energy sources as well. Nevertheless, nowadays humanity has all the opportunities to change this situation. Architecture is one of the most popular fields where it is possible to apply new methods of saving energy or even creating it. Nowadays we have kinds of buildings which can meet these new demands. One of them is energy-efficient buildings, which can save or even produce energy by combining several energy-saving principles. The main aim of this research is to provide information that helps to apply energy-saving methods while designing an environment-friendly building. The research methodology requires gathering relevant information from the literature, building guideline documents and previous research works in order to analyze it and sum it up into material that can be applied to energy-efficient building design. Regarding the results, it should be noted that the usage of all the energy-saving methods applied to a building design project results in ultra-low-energy buildings that require little energy for space heating or cooling. As a conclusion, it can be stated that developing methods of passive house design can decrease the need for energy production, which is an important issue that has to be solved in order to save the planet's resources and decrease environmental pollution.
Keywords: accumulation, energy-efficient building, storage, superinsulation, passive house
Procedia PDF Downloads 262
17899 Improving Detection of Illegitimate Scores and Assessment in Most Advantageous Tenders
Authors: Hao-Hsi Tseng, Hsin-Yun Lee
Abstract:
The Most Advantageous Tender (MAT) has been criticized for its susceptibility to dictatorial situations and for its processing of same-score, same-rank issues. This study applies the four criteria from Arrow's Impossibility Theorem to construct a mechanism for revealing illegitimate scores in scoring methods. While commonly used to improve on problems resulting from extreme scores, ranking methods hide significant defects, adversely affecting selection fairness. To address these shortcomings, this study relies mainly on the overall evaluated score method, using standardized scores plus a normal cumulative distribution function conversion to calculate the evaluation of vendor preference. This allows for free score evaluations, which reduces the influence of dictatorial behavior and avoids same-score, same-rank issues. Large-scale simulations confirm that this method outperforms currently used methods with respect to the criteria of the Impossibility Theorem.
Keywords: Arrow's impossibility theorem, cumulative normal distribution function, most advantageous tender, scoring method
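A minimal sketch of the score conversion described above: each evaluator's raw scores are standardized (z-scores) and mapped through the normal cumulative distribution function, so every vendor ends up on a common 0-1 preference scale before aggregation. The score matrix is made-up example data.

```python
import numpy as np
from scipy.stats import norm

# rows = evaluators, columns = vendors (hypothetical raw scores)
raw = np.array([
    [78.0, 85.0, 90.0, 60.0],
    [70.0, 88.0, 72.0, 65.0],
    [80.0, 82.0, 95.0, 55.0],
])

# standardize within each evaluator to remove individual scoring scales
z = (raw - raw.mean(axis=1, keepdims=True)) / raw.std(axis=1, ddof=1, keepdims=True)

# convert to a 0-1 preference via the normal cumulative distribution function
pref = norm.cdf(z)

overall = pref.mean(axis=0)        # overall evaluated score per vendor
ranking = np.argsort(-overall)     # best vendor first
print("overall scores:", overall.round(3))
print("ranking (vendor indices):", ranking)
```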
Procedia PDF Downloads 464
17898 Virtual Reality Learning Environment in Embryology Education
Authors: Salsabeel F. M. Alfalah, Jannat F. Falah, Nadia Muhaidat, Amjad Hudaib, Diana Koshebye, Sawsan AlHourani
Abstract:
Educational technology is changing the way students engage and interact with learning materials, and this has improved the learning process across various subjects. Virtual Reality (VR) applications are considered one of the evolving methods that have contributed to enhancing medical education. This paper utilizes VR to provide a solution to improve the delivery of the subject of embryology to medical students and to facilitate the teaching process by providing a useful aid to lecturers, whilst proving the effectiveness of this new technology in this particular area. After evaluating the current teaching methods and identifying students' needs, a VR system was designed that demonstrates in an interactive fashion the development of the human embryo from fertilization to week ten of intrauterine development. This system aims to overcome some of the problems faced by the students in the current educational methods and to increase the efficacy of the learning process.
Keywords: virtual reality, student assessment, medical education, 3D, embryology
Procedia PDF Downloads 191
17897 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach is based on three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods. The standard homogeneous ensemble methods include error-correcting output codes and dagging, while the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers: the proposed bagged RBF and SVM perform significantly better than ECOC and dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also, heterogeneous models exhibit better results than homogeneous models on the standard intrusion detection datasets.
Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy
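The sketch below illustrates the two ensemble flavours discussed above: a homogeneous bagged ensemble of RBF-kernel SVMs and heterogeneous voting/stacking combinations. A synthetic dataset stands in for the intrusion detection data, and an MLP is used as a stand-in for the RBF network, since scikit-learn provides no RBF-network classifier.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "single SVM": SVC(kernel="rbf"),
    "bagged SVM (homogeneous)": BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=0),
    "voting MLP+SVM (heterogeneous)": VotingClassifier(
        [("svm", SVC(kernel="rbf", probability=True)), ("mlp", MLPClassifier(max_iter=500))],
        voting="soft"),
    "stacking MLP+SVM (heterogeneous)": StackingClassifier(
        [("svm", SVC(kernel="rbf", probability=True)), ("mlp", MLPClassifier(max_iter=500))],
        final_estimator=LogisticRegression()),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```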
Procedia PDF Downloads 248
17896 Using the Bootstrap for Problems Statistics
Authors: Brahim Boukabcha, Amar Rebbouh
Abstract:
The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article, we present a theoretical study of the different methods of bootstrapping and the use of the resampling technique in statistical inference to calculate the standard error of the mean of an estimator and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and the Pareto model, giving the best approximations.
Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models
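A minimal sketch of the nonparametric bootstrap described above, estimating the standard error of the mean and a percentile confidence interval by resampling; the data are simulated (a Pareto-type sample), not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.pareto(a=3.0, size=200) + 1.0     # simulated data from a Pareto-type model

def bootstrap(data, stat=np.mean, n_boot=5000, alpha=0.05, rng=rng):
    n = len(data)
    # resample with replacement and recompute the statistic each time
    reps = np.array([stat(rng.choice(data, size=n, replace=True)) for _ in range(n_boot)])
    se = reps.std(ddof=1)                                              # bootstrap standard error
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])  # percentile CI
    return se, (lo, hi)

se, ci = bootstrap(sample)
print(f"sample mean = {sample.mean():.3f}, bootstrap SE = {se:.3f}, "
      f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```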
Procedia PDF Downloads 381
17895 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is a suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to deal with the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNN) have been adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented model of pollution prediction has been verified by way of real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, these data are transformed into tensors and then processed. This network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of this system is a vector containing 24 elements that hold the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. During the research, several models that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real 'big' data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased Pearson's correlation coefficient (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
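A minimal sketch of a CNN of the kind described above: a 1-D convolutional network mapping the last 24 hours of sensor inputs to 24 hourly PM10 predictions. The layer sizes, the six assumed input channels and the random training tensors are illustrative only, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

n_hours, n_features = 24, 6        # 24 past hours x 6 input channels (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_hours, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(24),     # PM10 forecast for each of the next 24 hours
])
model.compile(optimizer="adam", loss="mse")    # mean square error, as in the abstract

# toy training run on random tensors, just to show the expected shapes
X = np.random.rand(256, n_hours, n_features).astype("float32")
y = np.random.rand(256, 24).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]).shape)              # (1, 24)
```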
Procedia PDF Downloads 149
17894 Characterisation, Extraction of Secondary Metabolite from Perilla frutescens for Therapeutic Additives: A Phytogenic Approach
Authors: B. M. Vishal, Monamie Basu, Gopinath M., Rose Havilah Pulla
Abstract:
Though there are several methods of synthesizing silver nanoparticles, green synthesis has its own merits. Ranging from cost-effectiveness to ease of synthesis, the process is simplified in the best possible way and is one of the most explored topics. This study of extracting secondary metabolites from Perilla frutescens and using them for therapeutic additives has its own significance. Unlike other research done so far, this study aims to synthesize silver nanoparticles from Perilla frutescens using three available forms of the plant: leaves, seeds, and commercial leaf extract powder. Perilla frutescens, commonly known as 'Beefsteak Plant', is a perennial plant and belongs to the mint family. The plant has two varieties: frutescens crispa and frutescens frutescens. The variety frutescens crispa (commonly known as 'Shisho' in Japanese) is generally used for edible purposes. Its leaves occur in two forms varying in color: red with purple streaks, and green with a crinkly pattern. This variety is aromatic due to the presence of two major compounds: polyphenols and perillaldehyde. The red (purple-streaked) form of this plant owes its color to the presence of a pigment, perilla anthocyanin. The variety frutescens frutescens (commonly known as 'Egoma' in Japanese) is the main source of perilla oil. This variety is also aromatic, but in this case, the major compound which gives the aroma is perilla ketone or egoma ketone. Shisho grows short compared with wild sesame, and both produce seeds. The seeds of wild sesame are large and soft, whereas those of Shisho are small and hard. The seeds have a large proportion of lipids, about 38-45 percent. In addition, the seeds have a large quantity of omega-3 fatty acids and linoleic acid, an omega-6 fatty acid. Other than these, Perilla leaf extract contains gold and silver nanoparticles. The yield comparison in all the cases has been done, and the optimal conditions of the process were modified keeping the efficiencies in mind. The characterization of secondary metabolites includes GC-MS and FTIR, which can be used to identify the components that actually help in synthesizing silver nanoparticles. The analysis of silver was done through a series of characterization tests that include XRD, UV-Vis, EDAX, and SEM. After the synthesis, toxin analysis was done with a view to use as therapeutic additives, and the results were tabulated. The synthesis of silver nanoparticles was done in a series of multiple cycles of extraction from leaves, seeds and commercially purchased leaf extract. The yield and efficiency comparison was done to bring out the best and cheapest possible way of synthesizing silver nanoparticles using Perilla frutescens. The synthesized nanoparticles can be used in therapeutic drugs, which have a wide range of applications from burn treatment to cancer treatment. This will, in turn, replace the traditional processes of synthesizing nanoparticles, as this method will prove effective in terms of cost and environmental implications.
Keywords: nanoparticles, green synthesis, Perilla frutescens, characterisation, toxin analysis
Procedia PDF Downloads 233
17893 Healing to Be a Man or Living in the Truth: Comparison on the Concept of Healing between Foucault and Chan
Authors: Jing Li Hong
Abstract:
This study compared Michel Foucault's thought and the Chan School's thought on the idea of healing. Healing is not an unfamiliar idea in Buddhist thought. The paired concepts of illness and medicine are often used as a metaphor to describe the relationship between people and truth. Foucault investigated the topic of the care of self in his later studies and dedicated a large portion of his final semester course at the Collège de France in 1984 to discussing the meaning of Socrates' offering of a sacrifice to the god of medicine in the Phaedo. Foucault indicated a key proposition in ancient philosophy, namely healing. His idea of healing also addressed the relationship between subject and truth. From this relationship, Foucault developed his novel study of truth, namely the technologies of the self, with an emphasis on the care of self. Whereas numerous philosophers ask obvious questions such as 'what is truth' and 'how to learn about truth', Foucault proposed distinct questions such as 'what is our relationship to truth' and 'how does our relationship with truth turn us into who we are now?' Thus, healing in both Buddhist thought and Foucault's thought is related to the relationship between being and truth. This study first reviews Buddhist and Foucault's ideas of healing to explicate what illness is and what medicine is. Because Buddhist thought covers an extensive scope, this study focuses on the thought of the Chan School. The second part is a discussion on medicine (treatment), specifically what is used as the medicine for the illness in both traditions, and how this medicine can treat the illness. This part includes a description and comparison of the use of concepts of negation in these two thought groups. Finally, the subjects that practice the technologies of the self in both groups are compared from the idea of the care of self; in other words, the differences between the subjects formed by the different relationships between being and truth are analyzed.
Keywords: Chan, heterogeneous, living style, language of paradox, Michel Foucault, negation, parrhesia, the care of self
Procedia PDF Downloads 182
17892 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research
Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova
Abstract:
The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies, the introduction of new concepts and novel techniques and technologies related to the recasting of the aims of education and the remodeling of the content and methodology of education, which would guarantee the alignment of our education with basic European values. Aim: The aim of the current research is the development of a didactic technology for the assessment of the applicability and efficacy of game techniques in pedagogic practice, calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches for facilitating the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability and validity of two newly introduced diagnostic parameters for assessing the durability of acquired knowledge. A pedagogic experiment has been carried out targeting the verification of the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality and durability of the knowledge acquired by students. For the purpose of monitoring the effect of the pedagogical technique employing game methodology on the durability of the acquired knowledge, a test-based examination was applied to students from a control group (CG) and students from an experimental group (EG) on the same content after a six-month period. The analysis is based on: 1. A study of the statistical significance of the differences between the tests for the CG and the EG, applied after a six-month period, which, however, is not indicative of the presence or absence of a marked effect of the applied pedagogic technique in cases when the entry levels of the two groups are different. 2. For a more reliable comparison, independent of the entry level of each group, another parameter, the 'indicator of efficacy of game techniques for the durability of knowledge', which has been used for the assessment of the achievement results and durability of this methodology of education. The monitoring of the studied parameters in their dynamic unfolding in different age groups of learners unquestionably reveals a positive effect of the introduction of game techniques in education in respect of the durability and permanence of acquired knowledge. Methods: In the current research, the following battery of research methods and techniques for diagnostics has been employed: theoretical analysis and synthesis; an actual pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained have been used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of a specific study of pedagogic diagnostics is based on a general, interdisciplinary approach. The enhanced durability of the acquired knowledge proves the transition of that knowledge from short-term memory storage into the long-term memory of pupils and students, which justifies the conclusion that didactic play has beneficial effects on the betterment of learners' cognitive skills. The innovations in teaching enhance motivation, creativity and independent cognitive activity in the process of acquiring the material taught. The innovative methods allow for untraditional means of assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.
Keywords: objectivity, reliability and validity of pedagogical diagnostic parameters introduced in the framework of a specific research
Procedia PDF Downloads 393
17891 Optimization of Friction Stir Welding Parameters for Joining Aluminium Alloys using Response Surface Methodology and Artificial Neural Network
Authors: A. M. Khourshid, A. M. El-Kassas, I. Sabry
Abstract:
The objective of this work was to investigate the mechanical properties in order to demonstrate the feasibility of friction stir welding for joining Al 6061 aluminium alloys. Welding was performed on pipes of different thicknesses (2, 3 and 4 mm), at five rotational speeds (485, 710, 910, 1120 and 1400 rpm) and a traverse speed of 4 mm/min. This work focuses on two methods, artificial neural networks and Response Surface Methodology (RSM), to predict the tensile strength, percentage elongation and hardness of friction stir welded 6061 aluminium alloy. An Artificial Neural Network (ANN) model was developed for the analysis of the friction stir welding parameters of 6061 pipe. The tensile strength, percentage elongation and hardness of the weld joints were predicted as a function of the parameters tool rotation speed, material thickness and axial force. A comparison was made between the measured and predicted data. A Response Surface Methodology (RSM) model was also developed, and the values obtained for the responses tensile strength, percentage elongation and hardness were compared with the measured values. The effect of the FSW process parameters on the mechanical properties of 6061 aluminium alloy has been analysed in detail.
Keywords: friction stir welding, aluminium alloy, response surface methodology, artificial neural network
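A minimal sketch of the ANN part of such an approach: a small feed-forward network predicting tensile strength from tool rotation speed, material thickness and axial force. The training data, the axial force range and the network size are synthetic placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 60
speed = rng.choice([485, 710, 910, 1120, 1400], size=n)   # rpm (from the abstract)
thickness = rng.choice([2, 3, 4], size=n)                  # mm (from the abstract)
axial_force = rng.uniform(2.0, 8.0, size=n)                # kN (assumed range)
X = np.column_stack([speed, thickness, axial_force])

# synthetic "measured" tensile strength (MPa) with noise, only for demonstration
y = 150 + 0.05 * speed - 8 * thickness + 4 * axial_force + rng.normal(0, 5, size=n)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0))
ann.fit(X, y)

new_weld = np.array([[910, 3, 5.0]])    # rotation speed, thickness, axial force
print("predicted tensile strength (MPa):", round(float(ann.predict(new_weld)[0]), 1))
```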
Procedia PDF Downloads 293
17890 Raman, Atomic Force Microscopy and Mass Spectrometry for Isotopic Ratios Methods Used to Investigate Human Dentine and Enamel
Authors: Nicoleta Simona Vedeanu, Rares Stiufiuc, Dana Alina Magdas
Abstract:
A detailed knowledge of tooth structure is mandatory to understand and explain defects and dental pathology, but especially to take correct decisions regarding dental prophylaxis and treatment. The present work is an alternative to the traditional investigation methods used in dentistry, a study based on the use of modern, sensitive physical methods to investigate human enamel and dentin. For the present study, several teeth collected from patients of different ages were used for structural and dietary investigation. The samples were investigated by Raman spectroscopy for the molecular structure analysis of dentin and enamel, atomic force microscopy (AFM) to view the dental topography at the micrometric scale, and mass spectrometry for isotopic ratios as a fingerprint of the patients' personal diet. The obtained Raman spectra and their interpretation are in good correlation with the literature and may give medical information by comparing affected dental structures with healthy ones. The AFM technique gave us the possibility to study the dentin and enamel surfaces in detail and to collect information about dental hardness or dental structural changes. The δ¹³C values obtained for the studied samples can be classified in the C4 category, specific to the diet of young people and children (sweets, cereals, juices, pastry). The methods used in this study furnished important information about dentin and enamel structure and dietary habits, and each of the three proposed methods can be extended to a larger scale in the study of tooth structure.
Keywords: AFM, dentine, enamel, Raman spectroscopy
Procedia PDF Downloads 145
17889 Assessment of Residual Stress on HDPE Pipe Wall Thickness
Authors: D. Sersab, M. Aberkane
Abstract:
Residual stresses in high-density polyethylene (HDPE) pipes result from the nonhomogeneous cooling rate that occurs between the inner and outer surfaces during the extrusion process in manufacture. The best-known measurement methods for determining the magnitude and profile of the residual stresses across the pipe wall thickness are the layer removal and ring slitting methods. The combined layer removal and ring slitting method described in this paper involves measurement of the circumferential residual stresses with minimal local disturbance. The existing method used for pipe geometry (the ring slitting method) gives a single residual stress value at the bore. The layer removal method, which is used more for flat plate specimens, is implemented together with the ring slitting method. The method permits stress measurements to be made directly at different depths in the pipe wall, and a well-defined residual stress profile was consequently obtained.
Keywords: residual stress, layer removal, ring slitting, HDPE, wall thickness
Procedia PDF Downloads 338
17888 Added Value of 3D Ultrasound Image Guided Hepatic Interventions by X Matrix Technology
Authors: Ahmed Abdel Sattar Khalil, Hazem Omar
Abstract:
Background: Image-guided hepatic interventions are integral to the management of infective and neoplastic liver lesions. Over the past decades, 2D ultrasound was used for guidance of hepatic interventions; with the recent advances in ultrasound technology, 3D ultrasound has been used to guide hepatic interventions. The aim of this study was to illustrate the added value of 3D image guided hepatic interventions by x matrix technology. Patients and Methods: This prospective study was performed on 100 patients who were divided into two groups; group A included 50 patients who were managed by 2D ultrasonography probe guidance, and group B included 50 patients who were managed by 3D x matrix ultrasonography probe guidance. Thermal ablation was done for 70 patients: 40 RFA (20 by the 2D probe and 20 by the 3D x matrix probe) and 30 MWA (15 by the 2D probe and 15 by the 3D x matrix probe). Chemical ablation (PEI) was done on 20 patients (10 by the 2D probe and 10 by the 3D x matrix probe). Drainage of hepatic collections and biopsy from undiagnosed hepatic focal lesions was done on 10 patients (5 by the 2D probe and 5 by the 3D x matrix probe). Results: The efficacy of ultrasonography-guided hepatic interventions with the 3D x matrix probe was higher than with the 2D probe, but not significantly so, with p-values of 0.705 and 0.5428 for RFA and MWA respectively, 0.5312 for PEI, and 0.2918 for drainage of hepatic collections and biopsy. The complications related to the use of the 3D x matrix probe were significantly fewer than with the 2D probe, with a p-value of 0.003. The procedure time was shorter with the 3D x matrix probe than with the 2D probe, with p-values of 0.08 and 0.34 for RFA and PEI respectively, and significantly shorter for MWA and for drainage of hepatic collections and biopsy, with p-values of 0.02 and 0.001, respectively. Conclusions: 3D ultrasonography-guided hepatic interventions with the x matrix probe have better efficacy, fewer complications, and shorter procedure times than 2D ultrasonography-guided hepatic interventions.
Keywords: 3D, X matrix, 2D, ultrasonography, MWA, RFA, PEI, drainage of hepatic collections, biopsy
Procedia PDF Downloads 95
17887 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the abilities of livestock for breeding based on genomic estimated breeding values, statistically predicted values from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To efficiently use haplotypes in genomic prediction, finding optimal ways to define haplotypes is needed. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average number of 5, 10, 20 or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented for testing the prediction reliability of each haplotype set. Also, the conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, using haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were not many differences in reliability between the different haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal to use as markers for genomic prediction. When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables was decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
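The sketch below illustrates the "haplotype alleles as predictor variables" idea in a simplified form: haplotype alleles are one-hot encoded into a design matrix and their effects are shrunk with a ridge penalty, the penalised-regression analogue of a haplotype BLUP. Genotypes, the phenotype and the penalty value are simulated and illustrative only, not the Hanwoo data or the authors' model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_animals, n_blocks, alleles_per_block = 500, 100, 4

# simulated haplotype data: for each block, each animal carries two allele copies
hap1 = rng.integers(0, alleles_per_block, size=(n_animals, n_blocks))
hap2 = rng.integers(0, alleles_per_block, size=(n_animals, n_blocks))

# design matrix of haplotype-allele counts (0, 1 or 2 copies per allele per block)
Z = np.zeros((n_animals, n_blocks * alleles_per_block))
for b in range(n_blocks):
    for h in (hap1, hap2):
        Z[np.arange(n_animals), b * alleles_per_block + h[:, b]] += 1

# simulated phenotype (e.g. carcass weight) from a few true allele effects plus noise
true_effects = rng.normal(0, 1, Z.shape[1]) * (rng.random(Z.shape[1]) < 0.05)
y = 350 + Z @ true_effects + rng.normal(0, 2, n_animals)

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
model = Ridge(alpha=50.0).fit(Z_tr, y_tr)      # shrinkage of haplotype-allele effects

# reliability-style check: correlation between predicted and observed phenotype
r = np.corrcoef(model.predict(Z_te), y_te)[0, 1]
print(f"predictive correlation on held-out animals: {r:.3f}")
```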
Procedia PDF Downloads 141
17886 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8- and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.
Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
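A quick back-of-the-envelope check underlying the 8/12/16-electrode trade-off: an N-electrode ECT sensor yields N(N-1)/2 independent inter-electrode capacitance measurements per frame, so more electrodes mean more measurements (and potential resolution) at the cost of more data to acquire and reconstruct.

```python
# Independent capacitance measurements per frame for each electrode count.
for n_electrodes in (8, 12, 16):
    n_measurements = n_electrodes * (n_electrodes - 1) // 2
    print(f"{n_electrodes} electrodes -> {n_measurements} independent capacitance measurements")
# prints 28, 66 and 120 measurements respectively
```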
Procedia PDF Downloads 11
17885 Method for Selecting and Prioritising Smart Services in Manufacturing Companies
Authors: Till Gramberg, Max Kellner, Erwin Gross
Abstract:
This paper presents a comprehensive investigation into the topic of smart services and IIoT-Platforms, focusing on their selection and prioritization in manufacturing organizations. First, a literature review is conducted to provide a basic understanding of the current state of research in the area of smart services. Based on discussed and established definitions, a definition approach for this paper is developed. In addition, value propositions for smart services are identified based on the literature and expert interviews. Furthermore, the general requirements for the provision of smart services are presented. Subsequently, existing approaches for the selection and development of smart services are identified and described. In order to determine the requirements for the selection of smart services, expert opinions from successful companies that have already implemented smart services are collected through semi-structured interviews. Based on the results, criteria for the evaluation of existing methods are derived. The existing methods are then evaluated according to the identified criteria. Furthermore, a novel method for the selection of smart services in manufacturing companies is developed, taking into account the identified criteria and the existing approaches. The developed concept for the method is verified in expert interviews. The method includes a collection of relevant smart services identified in the literature. The actual relevance of the use cases in the industrial environment was validated in an online survey. The required data and sensors are assigned to the smart service use cases. The value proposition of the use cases is evaluated in an expert workshop using different indicators. Based on this, a comparison is made between the identified value proposition and the required data, leading to a prioritization process. The prioritization process follows an established procedure for evaluating technical decision-making processes. In addition to the technical requirements, the prioritization process includes other evaluation criteria such as the economic benefit, the conformity of the new service offering with the company strategy, or the customer retention enabled by the smart service. Finally, the method is applied and validated in an industrial environment. The results of these experiments are critically reflected upon and an outlook on future developments in the area of smart services is given. This research contributes to a deeper understanding of the selection and prioritization process as well as the technical considerations associated with smart service implementation in manufacturing organizations. The proposed method serves as a valuable guide for decision makers, helping them to effectively select the most appropriate smart services for their specific organizational needs.
Keywords: smart services, IIoT, industrie 4.0, IIoT-platform, big data
Procedia PDF Downloads 89
17884 Dynamic Construction Site Layout Using Ant Colony Optimization
Authors: Yassir AbdelRazig
Abstract:
Evolutionary optimization methods such as genetic algorithms have been used extensively for the construction site layout problem. More recently, ant colony optimization algorithms, which are evolutionary methods based on the foraging behavior of ants, have been successfully applied to benchmark combinatorial optimization problems. This paper proposes a formulation of the site layout problem in terms of a sequencing problem that is suitable for solution using an ant colony optimization algorithm. In the construction industry, site layout is a very important planning problem. The objective of site layout is to position temporary facilities both geographically and at the correct time such that the construction work can be performed satisfactorily with minimal costs and improved safety and working environment. During the last decade, evolutionary methods such as genetic algorithms have been used extensively for the construction site layout problem. This paper proposes an ant colony optimization model for construction site layout. A simple case study for a highway project is utilized to illustrate the application of the model.
Keywords: ant colony, construction site layout, optimization, genetic algorithms
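A minimal sketch of an ant colony optimization loop for a small site-layout instance, assigning facilities to candidate locations so as to minimise the total trip-frequency-times-distance cost. The frequency and distance matrices and the ACO parameters are made-up examples, not the paper's highway case study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                          # facilities = candidate locations (one-to-one)
freq = rng.integers(1, 10, size=(n, n))        # trips per day between facilities (example)
freq = np.triu(freq, 1); freq = freq + freq.T  # symmetric, zero diagonal
dist = rng.uniform(10, 100, size=(n, n))       # distance between candidate locations (m)
dist = np.triu(dist, 1); dist = dist + dist.T

def layout_cost(assign):                       # assign[f] = location of facility f
    loc = np.array(assign)
    return float((freq * dist[np.ix_(loc, loc)]).sum())

alpha, beta, rho, n_ants, n_iter = 1.0, 2.0, 0.1, 20, 200
tau = np.ones((n, n))                          # pheromone: facility f -> location l
eta = 1.0 / (dist.mean(axis=1) + 1e-9)         # heuristic: prefer central locations
best_assign, best_cost = None, np.inf

for _ in range(n_iter):
    for _ in range(n_ants):
        free, assign = list(range(n)), []
        for f in range(n):                     # each ant builds a layout facility by facility
            p = (tau[f, free] ** alpha) * (eta[free] ** beta)
            choice = int(rng.choice(free, p=p / p.sum()))
            assign.append(choice); free.remove(choice)
        cost = layout_cost(assign)
        if cost < best_cost:
            best_assign, best_cost = assign, cost
    tau *= (1 - rho)                           # pheromone evaporation
    for f, l in enumerate(best_assign):        # reinforce the best layout found so far
        tau[f, l] += 1.0 / best_cost

print("best layout (facility -> location):", best_assign, "cost:", round(best_cost, 1))
```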
Procedia PDF Downloads 383
17883 Comparison of the Effect of Semi-Rigid Ankle Bracing Performance among Ankle Injured Versus Non-Injured Adolescent Female Hockey Players
Authors: T. J. Ellapen, N. Acampora, S. Dawson, J. Arling, C. Van Niekerk, H. J. Van Heerden
Abstract:
Objectives: To determine the comparative proprioceptive performance of injured versus non-injured adolescent female hockey players when wearing an ankle brace. Methods: Data were collected, via voluntary parental informed consent and player assent, from 100 high school players belonging to the Highway Secondary School KZN Hockey league. Players completed an injury questionnaire probing the prevalence and nature of hockey injuries (March-August 2013). Subsequently, players completed a Biodex proprioceptive test with and without an ankle brace. Probability was set at p≤0.05. Results: Twenty-two players sustained ankle injuries within the six months (p<0.001). Injured players performed similarly without bracing (Right Anterior Posterior Index (RAPI): 2.8±0.9; Right Medial Lateral Index (RMLI): 1.9±0.7; Left Anterior Posterior Index (LAPI): 2.7; Left Medial Lateral Index (LMLI): 1.7±0.6) as compared to with bracing (RAPI: 2.7±1.4; RMLI: 1.8±0.6; LAPI: 2.6±1.0; LMLI: 1.5±0.6) (p>0.05). However, bracing (RAPI: 2.2±0.8; RMLI: 1.5±0.5; LAPI: 2.4±0.9; LMLI: 1.5±0.5) improved the ankle stability of the non-injured group as compared to their unbraced performance (RAPI: 2.5±1.0; RMLI: 1.8±0.8; LAPI: 2.8±1.1; LMLI: 1.8±0.6) (p<0.05). Conclusion: Ankle bracing did not enhance the stability of injured ankles. However, ankle bracing has an ergogenic effect, enhancing the stability of healthy ankles.
Keywords: hockey, proprioception, ankle, bracing
Procedia PDF Downloads 349
17882 Effect of Dehydration Methods on the Proximate Composition, Mineral Content and Functional Properties of Starch Flour Extracted from Maize
Authors: Olakunle M. Makanjuola, Adebola Ajayi
Abstract:
The effect of the dehydration method on the proximate, functional and mineral properties of corn starch was evaluated. The study was carried out to determine the proximate, functional and mineral properties of corn starch produced using three different drying methods, namely sun, oven and cabinet drying. The corn starch was obtained by cleaning, steeping, milling, sieving, dewatering and drying. The corn starch was evaluated for proximate composition, functional properties, and mineral properties to determine its nutritional properties. Moisture, crude protein, crude fat, ash, and carbohydrate were in the ranges of 9.35 to 12.16, 6.5 to 10.78, 1.08 to 2.5, 4.0 to 5.2, and 69.58 to 75.8%, respectively. Bulk density ranged between 0.610 g/dm3 and 0.718 g/dm3, and water and oil absorption capacities ranged between 116.5 to 117.25 and 113.8 to 117.25 ml/g, respectively. Swelling power values varied from 1.401 to 1.544 g/g. The results indicate that the cabinet drying method gave the best results in terms of the quality attributes.
Keywords: starch flour, maize, dehydration, cabinet dryer
Procedia PDF Downloads 238
17881 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries
Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman
Abstract:
There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but for researchers looking to find comparisons across multiple datasets or specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English language and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although there are some unique challenges posed by working with Earth data. One is the sheer size of the databases – it would be infeasible to replicate or analyze all the data stored by an organization like The National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of elastic search applied to the metadata. This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and interpreting topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. But perhaps more importantly, it establishes the foundation for a platform that can enable common English to access knowledge that previously required considerable effort and experience. By making these public data accessible to the general public, this work has the potential to transform environmental understanding, engagement, and action.
Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems
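The sketch below shows the core matching task in its simplest form: linking a plain-English query to dataset metadata records by text similarity. TF-IDF cosine similarity is used purely as a stand-in baseline for the paper's knowledge-graph and NLP pipeline, and the metadata snippets are illustrative examples, not actual Earthdata records.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# illustrative dataset IDs and metadata descriptions (placeholders)
metadata = {
    "MOD11A1": "MODIS Terra land surface temperature and emissivity daily L3 global 1km grid",
    "GPM_3IMERGDF": "GPM IMERG final precipitation daily 0.1 degree estimates",
    "OCO2_L2": "OCO-2 column-averaged carbon dioxide dry air mole fraction retrievals",
}

query = "How hot was the ground surface in the Sahara last summer?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(metadata.values())   # vectorize metadata descriptions
query_vec = vectorizer.transform([query])                  # vectorize the plain-English query
scores = cosine_similarity(query_vec, doc_matrix).ravel()

for (dataset_id, _), score in sorted(zip(metadata.items(), scores), key=lambda t: -t[1]):
    print(f"{dataset_id}: similarity = {score:.3f}")
```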
Procedia PDF Downloads 149
17880 Between Efficacy and Danger: Narratives of Female University Students about Emergency Contraception Methods
Authors: Anthony Idowu Ajayi, Ezebunwa Ethelbert Nwokocha, Wilson Akpan, Oladele Vincent Adeniyi
Abstract:
Studies on emergency contraception (EC) mostly utilise quantitative methods and focus on medically approved drugs for the prevention of unwanted pregnancies. This methodological bias necessarily obscures insider perspectives on sexual behaviour, particularly on why specific methods are utilized by women who seek to prevent unplanned pregnancies. In order to privilege this perspective, with a view to further enriching the discourse and policy on the prevention and management of unplanned pregnancies, this paper brings together the findings from several focus groups and in-depth interviews conducted amongst unmarried female undergraduate students in two Nigerian universities. The study found that while the research participants had good knowledge of the consequences of unprotected sexual intercourse - with abstinence and condoms widely used - participants' willingness to rely only on medically sound measures to prevent unwanted pregnancies was not always mediated by such knowledge. Some of the methods favored by participants appeared to be those commonly associated with people of low socio-economic status in the society where the study was conducted. Medically unsafe concoctions, some outright dangerous, were widely believed to be efficacious in preventing unwanted pregnancy. Furthermore, respondents' narratives about their sexual behaviour revealed that inadequate sex education, socio-economic pressures, and misconceptions about the efficacy of 'crude' emergency contraception methods were all interrelated. The paper therefore suggests that these different facets of the unplanned pregnancy problem should be the focus of intervention.
Keywords: unplanned pregnancy, unsafe abortion, emergency contraception, concoctions
Procedia PDF Downloads 425