Search results for: Sinc Numerical Methods
356 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators
Authors: Wei Zhang
Abstract:
With the rapid development of deep learning, neural network and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, the networks are growing increasingly large to meet the demands of practical applications, which poses a significant challenge to constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are overviewed and compared against Graphic Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) to present our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives to further explore the opportunities and challenges for future research, and offer an outlook on the future development of FPGA-based accelerators.
Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.
355 Inquiry on the Improvement Teaching Quality in the Classroom with Meta-Teaching Skills
Authors: Shahlan Surat, Saemah Rahman, Saadiah Kummin
Abstract:
When teachers reflect on and evaluate whether their teaching methods actually have an impact on students' learning, they will adjust their practices accordingly. This inevitably improves their students' learning and performance. The meta-teaching approach can invigorate and create a passion for teaching, and thus helps to increase commitment and love for the teaching profession. This study was conducted to determine the level of metacognitive thinking of teachers in the process of teaching and learning in the classroom. Metacognitive thinking teachers draw on metacognitive knowledge, which consists of different types of knowledge: declarative, procedural and conditional. The ability of the teachers to plan, monitor and evaluate the teaching process can also be determined. This study was conducted on 377 graduate teachers in Klang Valley, Malaysia. The stratified sampling method was selected for the purpose of this study. The metacognitive teaching inventory, consisting of 24 items, is called InKePMG (Teacher Indicators of Effectiveness Meta-Teaching). The results showed high mean levels for two components of metacognitive knowledge, declarative knowledge (mean = 4.16) and conditional knowledge (mean = 4.11), whereas the mean of procedural knowledge was 4.00 (moderately high). Similarly, monitoring (mean = 4.11) and evaluating (mean = 4.00) scored high, while planning (mean = 4.00) scored moderately high among teachers. In conclusion, this study shows that planning and procedural knowledge are important elements in improving the quality of teachers' teaching in the classroom. Thus, the researcher recommends that further studies focus on training programs for teachers on metacognitive skills and on developing creative thinking among teachers.
Keywords: Metacognitive thinking skills, procedural knowledge, conditional knowledge, declarative knowledge, meta-teaching and regulation of cognition.
354 Possible Number of Dwelling Units Using Waste Plastic Bottle for Construction
Authors: Dibya Jivan Pati, Kazuhisa Iki, Riken Homma
Abstract:
Unlike other metro cities of India, Bhubaneswar, the capital city of Odisha, is expected to have reached the one-million population mark by now. The demand for dwelling units, mostly among the urban poor belonging to the Economically Weaker Section (EWS) and Low Income Groups (LIG), is becoming a challenge due to high housing costs and rents. As a matter of fact, with increase in population, solid waste generation also increases, subsequently affecting the environment due to inefficiency in waste collection by local government bodies. Methods of utilizing solid waste, especially in the form of plastic bottles, glass bottles and metal cans (PGM), are now widely used as an alternative material for construction of low-cost buildings by Non-Government Organizations (NGOs) in developing countries like India to help the urban poor afford a shelter. The application of disposed plastic bottles in the construction of a single dwelling significantly reduces the overall cost of construction, by as much as 14% compared to traditional construction materials. Therefore, considering its cost-benefit result, it is possible to provide housing to EWS and LIG at an affordable price. In this paper, we estimated the quantity of plastic bottles generated in Bhubaneswar, which further helped to estimate the possible number of single dwelling units that can be constructed on a yearly basis so as to stave off further housing shortage. The estimation results will be practically used for planning and managing low-cost housing business by local government and NGOs.
Keywords: Construction, dwelling unit, plastic bottle, solid waste generation, groups.
353 Distinctive Features of Legal Relations in the Area of Subsoil Use, Renewal and Protection in Ukraine
Authors: N. Maksimentseva
Abstract:
The issue of public administration in subsoil use, renewal and protection is of high importance for Ukraine, since it is strongly linked to the energy security of the state and should enable the people of Ukraine to efficiently exercise their proprietary rights towards natural resources and the redistribution of national wealth. As stipulated in Article 11 of the Subsoil Code of Ukraine (the Code), the authorities that administer the industry are limited to central executive bodies and local governments. In particular, the Code stipulates that Ukraine's Cabinet of Ministers carries out public administration in geological exploration, production and protection of subsoil. Other state bodies of public administration include the central public authority responsible for state environmental protection policies; the central public authority in charge of implementation of state geological exploration and efficient subsoil use policies; and the central authority in charge of state health and safety control policies. There are also public authorities in the Autonomous Republic of Crimea, local executive bodies and other state authorities and local self-government authorities in compliance with the laws of Ukraine. This article is devoted to the analysis of legal relations in the area of public administration of subsoil use, renewal and protection in Ukraine. The main approaches to studying the essence of legal relations in the named area, as well as its tasks, functions and methods, are analyzed. It is concluded in this article that legal relationships in the field of public administration of subsoil use, renewal and protection are characterized by the specifics of their task (development of natural resources).
Keywords: Legal relations, public administration, Subsoil Code of Ukraine, subsoil use, renewal and protection.
352 Impact of Climate Shift on Rainfall and Temperature Trend in Eastern Ganga Canal Command
Authors: Radha Krishan, Deepak Khare, Bhaskar R. Nikam, Ayush Chandrakar
Abstract:
Every irrigation project is planned considering long-term historical climatic conditions; however, the rapid climatic shift and change have brought about circumstances that were inconceivable in the past. Considering this fact, scrutiny of rainfall and temperature trends has been carried out over the command area of the Eastern Ganga Canal project for the pre-climate shift and post-climate shift periods in the present study. The non-parametric Mann-Kendall and Sen's methods have been applied to study the trends in annual rainfall, seasonal rainfall, annual rainy days, monsoonal rainy days, average annual temperature and seasonal temperature. The results showed decreasing trends of 48.11 to 42.17 mm/decade in annual rainfall and 79.78 to 49.67 mm/decade in monsoon rainfall from the pre-climate to the post-climate shift period, respectively. A decreasing trend of 1 to 4 days/decade has been observed in annual rainy days from the pre-climate to the post-climate shift period. Trends in temperature revealed significant decreasing trends in annual (-0.03 ºC/yr), Kharif (-0.02 ºC/yr), Rabi (-0.04 ºC/yr) and summer (-0.02 ºC/yr) season temperature during the pre-climate shift period, whereas a significant increasing trend (0.02 ºC/yr) has been observed in all four parameters during the post-climate shift period. These results will help project managers in understanding the climate shift and lead them to develop alternative water management strategies.
Keywords: Climate shift, rainfall trend, temperature trend, Mann-Kendall test, Sen's slope estimator, Eastern Ganga Canal command.
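The abstract's trend statistics can be reproduced in outline. Below is a minimal Python sketch, not the authors' code, of the Mann-Kendall test (no-ties variance) and Sen's slope estimator applied to a hypothetical annual rainfall series.

```python
# Mann-Kendall trend test and Sen's slope on a hypothetical rainfall series.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the MK statistic S, the standard normal score Z and the p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance assuming no ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * norm.sf(abs(z))                    # two-sided p-value
    return int(s), z, p

def sens_slope(x):
    """Median of all pairwise slopes (units per time step)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

rainfall = [812, 790, 805, 760, 778, 742, 755, 730, 748, 716]  # mm, hypothetical
s, z, p = mann_kendall(rainfall)
print(f"S = {s}, Z = {z:.2f}, p = {p:.3f}, Sen slope = {sens_slope(rainfall):.2f} mm/yr")
```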
351 Methodologies for Crack Initiation in Welded Joints Applied to Inspection Planning
Authors: Guang Zou, Kian Banisoleiman, Arturo González
Abstract:
Crack initiation and propagation threaten the structural integrity of welded joints, and inspections are normally assigned based on crack propagation models. However, the approach based on crack propagation models may not be applicable to some high-quality welded joints, because the initial flaws in them may be so small that it may take a long time for the flaws to develop into a detectable size. This raises a concern regarding the inspection planning of high-quality welded joints, as there is no generally accepted approach for modeling the whole fatigue process including the crack initiation period. In order to address the issue, this paper reviews treatment methods for the crack initiation period and initial crack size in crack propagation models applied to inspection planning. Generally, there are four approaches: 1) Neglecting the crack initiation period and fitting a probabilistic distribution for initial crack size based on statistical data; 2) Extrapolating the crack propagation stage to a very small fictitious initial crack size, so that the whole fatigue process can be modeled by crack propagation models; 3) Assuming a fixed detectable initial crack size and fitting a probabilistic distribution for crack initiation time based on specimen tests; and, 4) Modeling the crack initiation and propagation stages separately using small crack growth theories and the Paris law or similar models. The conclusion is that, in view of the trade-off between accuracy and computational effort, calibration of a small fictitious initial crack size to S-N curves is the most efficient approach.
Keywords: Crack initiation, fatigue reliability, inspection planning, welded joints.
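Approach 2) above turns on how strongly the predicted life depends on the assumed fictitious initial crack size. The sketch below, with hypothetical Paris constants and geometry, integrates the Paris law numerically to show that sensitivity; it is an illustration, not the paper's calibration.

```python
# Cycles to grow a crack from a0 to ac under the Paris law
# da/dN = C * (dK)^m, with dK = Y * dsigma * sqrt(pi * a).
import numpy as np

C, m = 3.0e-12, 3.0      # Paris constants (da/dN in m/cycle, dK in MPa*sqrt(m))
Y = 1.0                  # geometry factor, assumed constant
dsigma = 80.0            # stress range, MPa
ac = 0.02                # critical crack size, m

def cycles_to_failure(a0, ac, steps=20000):
    """Trapezoid-rule integration of dN = da / (C * dK^m)."""
    a = np.linspace(a0, ac, steps)
    dK = Y * dsigma * np.sqrt(np.pi * a)
    f = 1.0 / (C * dK**m)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a))

# A smaller fictitious initial crack adds a long "initiation-like" period:
for a0 in (0.05e-3, 0.2e-3, 1.0e-3):
    print(f"a0 = {a0*1e3:.2f} mm -> N = {cycles_to_failure(a0, ac):.3e} cycles")
```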
350 Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis
Authors: C. Ardil
Abstract:
This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and sub-characteristics were identified from the ISO/IEC 25010 quality model. The multidimensional structure of the quality model is based on characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, and their associated sub-characteristics. Random numbers are generated to establish the decision maker's importance weights for each sub-characteristic, and likewise to establish the decision matrix of the decision maker's final scores for each software product against each sub-characteristic. Thus, objective criteria importance weights and index scores for the datasets were obtained from the random numbers. In the proposed model, five different software product quality evaluation datasets under three different weight vectors were applied to the multiple criteria decision analysis method, preference analysis for reference ideal solution (PARIS), for comparison and a sensitivity analysis procedure. This study contributes to a better understanding of the application of MCDMA methods and ISO/IEC 25010 quality model guidelines in the software product quality evaluation process.
Keywords: ISO/IEC 25010 quality model, multiple criteria decisions making, multiple criteria decision making analysis, MCDMA, PARIS, Software Product Quality Evaluation Model, Software Product Quality Evaluation, Software Evaluation, Software Selection, Software
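As an illustration of the data-generation step described above, the sketch below draws random importance weights and a random decision matrix; since the PARIS aggregation itself is not reproduced here, a plain weighted sum stands in for the MCDMA ranking step.

```python
# Random weights and decision matrix for a software quality evaluation,
# ranked by a simple weighted sum (a stand-in for PARIS).
import numpy as np

rng = np.random.default_rng(25010)
n_products, n_criteria = 5, 8   # e.g. the 8 ISO/IEC 25010 characteristics

weights = rng.random(n_criteria)
weights /= weights.sum()                              # weights sum to 1
scores = rng.uniform(1, 9, (n_products, n_criteria))  # decision matrix

overall = scores @ weights                            # weighted-sum aggregation
ranking = np.argsort(-overall)                        # best product first
for rank, idx in enumerate(ranking, start=1):
    print(f"rank {rank}: product {idx + 1} (score {overall[idx]:.3f})")
```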
349 Environmental Decision Making Model for Assessing On-Site Performances of Building Subcontractors
Authors: Buket Metin
Abstract:
Buildings cause a variety of loads on the environment due to activities performed at each stage of the building life cycle. Construction is the first stage that affects both the natural and built environments at different steps of the process, which can be defined as transportation of materials within the construction site, formation and preparation of materials on-site and the application of materials to realize the building subsystems. All of these steps require the use of technology, which varies based on the facilities that contractors and subcontractors have. Hence, environmental consequences of the construction process should be tackled by focusing on construction technology options used in every step of the process. This paper presents an environmental decision-making model for assessing on-site performances of subcontractors based on the construction technology options which they can supply. First, construction technologies, which constitute information, tools and methods, are classified. Then, environmental performance criteria are set forth related to resource consumption, ecosystem quality, and human health issues. Finally, the model is developed based on the relationships between the construction technology components and the environmental performance criteria. The Fuzzy Analytical Hierarchy Process (FAHP) method is used for weighting the environmental performance criteria according to environmental priorities of decision-maker(s), while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking on-site environmental performances of subcontractors using quantitative data related to the construction technology components. Thus, the model aims to provide an insight to decision-maker(s) about the environmental consequences of the construction process and to provide an opportunity to improve the overall environmental performance of construction sites.
Keywords: Construction process, construction technology, decision making, environmental performance, subcontractors.
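The TOPSIS step can be illustrated directly. In the sketch below, the FAHP-derived weights are replaced by fixed hypothetical values and the subcontractor data are invented; the TOPSIS mechanics (vector normalization, ideal and anti-ideal points, closeness coefficient) follow the standard formulation.

```python
# TOPSIS ranking of subcontractors on environmental criteria
# (rows = subcontractors, columns = criteria).
import numpy as np

X = np.array([[120., 0.8, 35., 7.5],
              [ 95., 1.1, 42., 8.2],
              [140., 0.6, 30., 6.9]])            # hypothetical performance data
w = np.array([0.4, 0.2, 0.2, 0.2])               # weights (from FAHP in the paper)
benefit = np.array([False, False, False, True])  # True = higher is better

R = X / np.linalg.norm(X, axis=0)                # vector normalization
V = R * w                                        # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus  = np.linalg.norm(V - ideal, axis=1)      # distance to ideal solution
d_minus = np.linalg.norm(V - anti,  axis=1)      # distance to anti-ideal
closeness = d_minus / (d_plus + d_minus)         # 1 = best, 0 = worst

print("closeness:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness) + 1)
```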
348 Occurrence of Foreign Matter in Food: Applied Identification Method - Association of Official Agricultural Chemists (AOAC) and Food and Drug Administration (FDA)
Authors: E. C. Mattos, V. S. M. G. Daros, R. Dal Col, A. L. Nascimento
Abstract:
The aim of this study is to present the results of a retrospective survey on the foreign matter found in foods analyzed at the Adolfo Lutz Institute from July 2001 to July 2015. All the analyses were conducted according to the official methods described by the Association of Official Agricultural Chemists (AOAC) for the microanalytical procedures and the Food and Drug Administration (FDA) for the macroanalytical procedures. The results showed that flours, cereals and derivatives such as baking and pasta products were the types of food where foreign matter was found most frequently, followed by condiments and teas. Fragments of stored-grain insects, their larvae, nets, excrement, dead mites and rodent excrement were the foreign matter most often found in food. Foreign matter that can pose a physical risk to the consumer's health, such as metal, stones, glass and wood, was found only rarely. Miscellaneous matter (shell, sand, dirt and seeds) was also reported. Many extraneous materials are considered unavoidable since they are inherent to the product itself, such as insect fragments in grains. In contrast, avoidable extraneous materials are less tolerated because they are preventable with Good Manufacturing Practice. The conclusion of this work is that, although most extraneous materials found in food are considered unavoidable, it is necessary to keep Good Manufacturing Practice throughout food processing, as well as to maintain constant surveillance of the production process, in order to avoid accidents that may lead to the occurrence of these extraneous materials in food.
Keywords: Food contamination, extraneous materials, foreign matter, surveillance.
347 In vitro Studies of Mucoadhesiveness and Release of Nicotinamide Oral Gels Prepared from Bioadhesive Polymers
Authors: Sarunyoo Songkro, Naranut Rajatasereekul, Nipapat Cheewasrirungrueng
Abstract:
The aim of the present study was to evaluate the mucoadhesion and release of nicotinamide gel formulations using in vitro methods. An agar plate technique was used to investigate the adhesiveness of the gels, whereas a diffusion apparatus was employed to determine the release of nicotinamide from the gels. In this respect, 10% w/w nicotinamide gels containing the bioadhesive polymers Carbopol 934P (0.5-2% w/w), hydroxypropylmethyl cellulose (HPMC) (4-10% w/w), sodium carboxymethyl cellulose (SCMC) (4-6% w/w) and methylcellulose 4000 (MC) (3-5% w/w) were prepared. The gel formulations had pH values in the range of 7.14-8.17, which were considered appropriate for oral mucosa application. In general, the rank order of pH values appeared to be SCMC > MC 4000 > HPMC > Carbopol 934P. The types and concentrations of polymers used somewhat affected the adhesiveness. It was found that the anionic polymers (Carbopol 934P and SCMC) adhered more firmly to the agar plate than the neutral polymers (HPMC and MC 4000). The formulation containing 0.5% Carbopol 934P (F1) showed the highest release rate. With the exception of formulation F1, the neutral polymers tended to give higher release rates than the anionic polymers. For oral tissue treatment, a balance has to be struck between the residence time (adhesiveness) of the formulations and the release rate of the drug. The formulations containing the anionic polymers Carbopol 934P or SCMC possessed suitable physical properties (appearance, pH and viscosity). In addition, for the anionic polymer formulations, justifiable mucoadhesive properties and reasonable release rates of nicotinamide were achieved. Accordingly, these gel formulations may be applied for the treatment of oral mucosal lesions.
Keywords: Nicotinamide, bioadhesive polymer, mucoadhesiveness, release rate, gel.
346 The Effect of Saccharomyces cerevisiae Live Yeast Culture on Microbial Nitrogen Supply to Small Intestine in Male Kivircik Yearlings Fed with Different Forage-Concentrate Ratios
Authors: N. Cetinkaya, N. H. Ozdemir
Abstract:
The aim of the study was to investigate the effect of Saccharomyces cerevisiae (SC) live yeast culture on microbial protein supply to the small intestine in Kivircik male yearlings fed different ratios of forage and concentrate. Four Kivircik male yearlings with permanent rumen cannulas were used in the experiment. The treatments were allocated to a 4x4 Latin square design. Diet I consisted of 70% alfalfa hay and 30% concentrate, Diet II consisted of 30% alfalfa hay and 70% concentrate, and Diets I and II were supplemented with SC. Daily urine was collected and stored at -20°C until analysis. Colorimetric methods were used for the determination of urinary allantoin and creatinine levels. The estimated microbial N supply to the small intestine for Diets I, I+SC, II and II+SC was 2.51, 2.64, 2.95 and 3.43 g N/d, respectively. Supplementation of Diets I and II with SC significantly affected allantoin levels in μmol/W^0.75 (p<0.05). Mean creatinine values in μmol/W^0.75 and allantoin:creatinine ratios were not significantly different among diets. In conclusion, supplementation with SC live yeast culture had a significant effect on urinary allantoin excretion and microbial protein supply to the small intestine in Kivircik yearlings fed the high-concentrate Diet II (p<0.05). Hence, urinary allantoin excretion may be used as a tool for estimating microbial protein supply in Kivircik yearlings. However, further studies are necessary to understand the metabolism of Saccharomyces cerevisiae live yeast culture with different forage:concentrate ratios in Kivircik yearlings.
Keywords: Allantoin, creatinine, Kivircik yearling, microbial nitrogen, Saccharomyces cerevisiae.
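The conversion from urinary purine derivatives to microbial N is not given in the abstract; the sketch below uses the Chen and Gomes (1992) sheep model as a commonly cited stand-in, so both the equations and the input values are assumptions, not the study's.

```python
# Microbial N flow estimated from urinary purine derivatives (sheep model,
# assumed): Y = 0.84*X + 0.150*W^0.75*exp(-0.25*X), microbial N = 0.727*X.
import math
from scipy.optimize import brentq

def microbial_n_from_pd(pd_mmol_d, weight_kg):
    """Estimate microbial N flow (g/d) from total purine derivatives (mmol/d)."""
    w075 = weight_kg ** 0.75
    f = lambda x: 0.84 * x + 0.150 * w075 * math.exp(-0.25 * x) - pd_mmol_d
    x = brentq(f, 0.0, 1000.0)          # absorbed microbial purines, mmol/d
    return 70.0 * x / (0.116 * 0.83 * 1000.0)   # ~= 0.727 * x, g N/d

print(f"microbial N ~ {microbial_n_from_pd(8.0, 45.0):.2f} g/d")  # hypothetical yearling
```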
345 The Use of the Limit Cycles of Dynamic Systems for Formation of Program Trajectories of Points Feet of the Anthropomorphous Robot
Authors: A. S. Gorobtsov, A. S. Polyanina, A. E. Andreev
Abstract:
The movement of the foot points of an anthropomorphous robot in space occurs along some stable trajectory of a known form. The large number of modifications to the methods of control of biped robots indicates the fundamental complexity of the problem of stability of the program trajectory and, consequently, of the stability of the control for deviations from this trajectory. Existing gait generators use piecewise interpolation of program trajectories, which leads to jumps in the acceleration at the boundaries of the segments. Another interpolation can be realized using differential equations with fractional derivatives. In this work, an approach to the synthesis of generators of program trajectories is considered. The resulting system of nonlinear differential equations describes a smooth trajectory of movement having rectilinear segments. The method is based on the theory of asymptotic stability of invariant sets. The stability of such systems in the area of localization of oscillatory processes is investigated, the boundary of the area being a bounded closed surface. In the corresponding subspaces of the oscillatory circuits, the resulting stable limit cycles are curves having rectilinear segments. The solution of the problem is carried out by means of the synthesis of a set of continuous smooth controls with feedback. The necessary geometry of closed trajectories of movement is obtained by introducing high-order nonlinearities into the control of the stabilization systems. The offered method was used for the generation of trajectories of movement of the foot points of an anthropomorphous robot. The synthesis of the robot's program movement was carried out by means of the inverse method.
Keywords: Control, limit cycles, robot, stability.
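The stability concept behind the generator can be illustrated on the simplest attracting invariant set. The sketch below is unrelated to the authors' equations: it integrates a planar system whose unit circle is a stable limit cycle, and trajectories started inside and outside both converge to it.

```python
# An attracting limit cycle: in polar form dr/dt = r(1 - r^2), dphi/dt = 1,
# so every trajectory converges to the unit circle (the invariant set).
import numpy as np
from scipy.integrate import solve_ivp

def f(t, s):
    x, y = s
    r2 = x * x + y * y
    return [x * (1 - r2) - y, y * (1 - r2) + x]

for x0 in (0.1, 2.0):                   # start inside and outside the cycle
    sol = solve_ivp(f, (0, 30), [x0, 0.0], rtol=1e-8)
    r_end = np.hypot(*sol.y[:, -1])
    print(f"x0 = {x0}: final radius = {r_end:.4f}")  # -> 1.0 in both cases
```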
344 Effects of High-Protein, Low-Energy Diet on Body Composition in Overweight and Obese Adults: A Clinical Trial
Authors: Makan Cheraghpour, Seyed Ahmad Hosseini, Damoon Ashtary-Larky, Saeed Shirali, Matin Ghanavati, Meysam Alipour
Abstract:
Background: In addition to reducing body weight, low-calorie diets can reduce lean body mass. It is hypothesized that a high-protein, low-calorie diet can reduce body weight while maintaining lean body mass. Thus, the current study aimed at evaluating the effects of a high-protein diet with calorie restriction on body composition in overweight and obese individuals. Methods: 36 obese and overweight subjects were divided randomly into two groups. The first group received a normal-protein, low-energy diet (RDA), and the second group received a high-protein, low-energy diet (2×RDA). The anthropometric indices, including height, weight, body mass index, body fat mass, fat-free mass, and body fat percentage, were evaluated before and after the study. Results: A significant reduction was observed in anthropometric indices in both groups (high-protein, low-energy diet and normal-protein, low-energy diet). In addition, a greater reduction in fat-free mass was observed in the normal-protein, low-energy diet group compared to the high-protein, low-energy diet group. In the other anthropometric indices, significant differences were not observed between the two groups. Conclusion: Independently of the type of diet, a low-calorie diet can improve the anthropometric indices, but during weight loss, a high-protein diet can help maintain fat-free mass.
Keywords: Diet, high-protein, body mass index, body fat percentage.
343 The Impact of Geophagia on the Iron Status of Black South African Women
Authors: A. van Onselen, C. M. Walsh, F. J. Veldman, C. Brand
Abstract:
Objectives: To determine the nutritional status of and risk factors associated with women practicing geophagia in QwaQwa, South Africa. Materials and Methods: An observational epidemiological study design was adopted, which included an exposed (geophagia) and a non-exposed (control) group. A food frequency questionnaire, anthropometric measurements and blood sampling were applied to determine the nutritional status of participants. Logistic regression analysis was performed in order to identify factors likely to be associated with the practice of geophagia. Results: The mean total energy intake for the geophagia group (G) and control group (C) was 10324.31 ± 2755.00 kJ and 10763.94 ± 2556.30 kJ, respectively. Both groups fell within the overweight category according to the mean Body Mass Index (BMI) of each group (G = 25.59 kg/m2; C = 25.14 kg/m2). The mean serum iron level of the geophagia group (6.929 μmol/l) was significantly lower than that of the control group (13.75 μmol/l) (p = 0.000). Serum transferrin (G = 3.23 g/l; C = 2.7054 g/l) and serum transferrin saturation (G = 8.05%; C = 18.74%) levels also differed significantly between groups (p = 0.00). Factors associated with the practice of geophagia included haemoglobin (odds ratio (OR): 14.50), serum iron (OR: 9.80), serum ferritin (OR: 3.75), serum transferrin (OR: 6.92) and transferrin saturation (OR: 14.50). A significant negative association (p = 0.014) was found between being a wage-earner and the practice of geophagia (OR: 0.143; CI: 0.027; 0.755). These findings seem to indicate that a permanent income may decrease the likelihood of practising geophagia. Key Findings: Geophagia was confirmed to be a risk factor for iron deficiency in this community. The significantly strong association between geophagia and iron deficiency emphasizes the importance of identifying the practice of geophagia in women, especially during their child-bearing years.
Keywords: Anaemia, anthropometry, dietary intake, geophagia, iron deficiency.
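For readers unfamiliar with the reported ORs, the sketch below shows the standard computation of an odds ratio and its Woolf 95% confidence interval from a 2x2 table; the counts are hypothetical, not the study's data.

```python
# Odds ratio with a Woolf (log-based) 95% confidence interval.
import math

def odds_ratio(a, b, c, d):
    """a,b = cases/controls exposed; c,d = cases/controls unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio(a=28, b=12, c=10, d=45)   # hypothetical counts
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```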
342 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method
Authors: Nareelux Suwannobol, Jintana Tapin, Khuanchanok Narachan
Abstract:
The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and compare it with the actual birth weight. The sample group was 126 infants delivered at Dan Khunthot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and then analyzed using the statistics of frequency, percentage, mean, and standard deviation. Finally, the difference was analyzed by a paired t-test. The results showed an average actual birth weight of 3093.57 ± 391.03 g and an average estimated fetal weight by Johnson's method of 3,455 ± 454.55 g, higher than the average actual birth weight by 384.09 g. When the infants were classified according to birth weight, the actual birth weight was less than the estimated fetal weight in the low birth weight (<2500 g) and appropriate birth weight (2500-3999 g) groups, whereas in the high birth weight (>4000 g) group the actual birth weight was more than the estimated fetal weight. The smallest difference between actual birth weight and estimated fetal weight was found in the high birth weight (>4000 g) group, followed by the appropriate birth weight (2500-3999 g) and low birth weight (<2500 g) groups, respectively. The rate of estimated fetal weight falling within 10% of actual birth weight was 35.7%. When actual birth weights were compared with the estimates, the difference was found to be statistically significant (p < .001). Employing Johnson's method to estimate fetal weight can provide an initial estimate before resorting to special examinations, which may be costly. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for the mother's and infant's safety.
Keywords: Johnson's method, fetal weight estimate, delivery room, student nurse.
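Johnson's formula itself is simple enough to state in a few lines. The sketch below uses the form commonly given in obstetrics texts; the engagement offsets (11 and 12) are the usual convention, which may differ from the exact variant used in the study.

```python
# Johnson's formula as commonly stated:
# EFW (g) = (fundal height in cm - n) * 155, with n = 11 when the
# presenting part is engaged (at or below the ischial spines), else n = 12.
def johnsons_efw(fundal_height_cm, engaged):
    n = 11 if engaged else 12
    return (fundal_height_cm - n) * 155.0

print(johnsons_efw(34, engaged=False))  # 3410 g for a 34 cm fundal height
```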
341 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as CCV (Color Coherent Vector), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. Computing the dissimilarity between two images is consequently reduced to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is taken into account to rank images, meaning that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexing method with the aim of comparing its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value among the N*N values to trust when comparing images, in other words, which sub-region pair among the N*N pairs to base the image indexing on. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving another preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
Keywords: CBIR, Color Global Histogram, Color Local Histogram, Weak Segmentation, Euclidean Distance.
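A minimal version of the indexing scheme described above can be sketched as follows: overlapping blocks, one normalized color histogram per block, and the minimum of the N*N pairwise Euclidean distances as the dissimilarity. The images here are random stand-ins.

```python
# Local color histograms over overlapping blocks, with min-distance matching.
import numpy as np

def block_histograms(img, block=64, step=32, bins=8):
    """img: HxWx3 uint8 array -> array of per-block normalized histograms."""
    h, w, _ = img.shape
    hists = []
    for y in range(0, h - block + 1, step):      # step < block -> overlap
        for x in range(0, w - block + 1, step):
            patch = img[y:y + block, x:x + block].reshape(-1, 3)
            hist, _ = np.histogramdd(patch, bins=(bins,) * 3,
                                     range=((0, 256),) * 3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def dissimilarity(h1, h2):
    """Minimum pairwise Euclidean distance over the N*N block pairs."""
    d = np.linalg.norm(h1[:, None, :] - h2[None, :, :], axis=2)
    return d.min()

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)   # stand-in images
b = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
print(f"min-distance dissimilarity: "
      f"{dissimilarity(block_histograms(a), block_histograms(b)):.4f}")
```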
340 Assessment of Breeding Soundness by Comparative Radiography and Ultrasonography of Rabbit Testes
Authors: Adenike O. Olatunji-Akioye, Emmanual B Farayola
Abstract:
In order to improve the animal protein recommended daily intake of Nigerians, there is an upsurge in the breeding of hitherto shunned food animals, one of which is the rabbit. Radiography and ultrasonography are tools for diagnosing disease and evaluating the anatomical architecture of parts of the body non-invasively. As the rabbit becomes a more important food animal, improved breeding requires that the best of the species form a breeding stock, which will usually depend on breeding soundness as evaluated by assessment of the male reproductive organs with these tools. Four male intact rabbits weighing between 1.2 and 1.5 kg were acquired and acclimatized for 2 weeks. Dorsoventral views of the testes were acquired using a digital radiographic machine, and a 5 MHz portable ultrasound scanner was used to acquire images of the testes in longitudinal, sagittal and transverse planes. The radiographic images revealed soft tissue images of the testes in all rabbits. The testes lie in individual scrotal sacs on both sides of the midline at the level of the caudal vertebrae and are thus superimposed by the caudal vertebrae and the caudal limits of the pelvic girdle. The ultrasonographic images revealed mostly homogeneously hypoechogenic testes and a hyperechogenic mediastinum testis. The dorsal and ventral poles of the testes were heterogeneously hypoechogenic and correspond to the epididymis and spermatic cord. The rabbit is unique in its ability to retract the testes, particularly when stressed, so careful and stress-free handling during the procedures is of paramount importance. The imaging of rabbit testes can be safely done using both imaging methods, but ultrasonography is the better method for the assessment and evaluation of soundness for breeding.
Keywords: Breeding soundness, rabbits, radiography, ultrasonography.
339 Bio-Surfactant Production and Its Application in Microbial EOR
Authors: A. Rajesh Kanna, G. Suresh Kumar, Sathyanaryana N. Gummadi
Abstract:
There are various sources of energy available worldwide, and among them, crude oil plays a vital role. Oil recovery is achieved using conventional primary and secondary recovery methods. In order to recover the remaining residual oil, technologies like Enhanced Oil Recovery (EOR), also known as tertiary recovery, are utilized. Among EOR techniques, Microbial Enhanced Oil Recovery (MEOR) improves oil recovery through the injection of bio-surfactant produced by microorganisms. Bio-surfactant can retrieve unrecoverable oil held in the cap rock by high capillary force. Bio-surfactant is a surface-active agent that can reduce the interfacial tension and reduce the viscosity of oil, so that oil can be recovered to the surface as its mobility is increased. Research in this area has shown promising results, and the method is eco-friendly and cost-effective compared with other EOR techniques. In our research, we produced bio-surfactant on a laboratory scale using the strain Pseudomonas putida (MTCC 2467) and injected it into a simple sand-packed column designed to resemble an actual petroleum reservoir. The experiment was conducted in order to determine the efficiency of the produced bio-surfactant in oil recovery. The column was made of plastic, 10 cm in length and 2.5 cm in diameter, and was packed with fine sand. The sand was saturated with brine initially, followed by oil saturation. Water flooding followed by bio-surfactant injection was done to determine the amount of oil recovered. Further, the injected bio-surfactant volume was varied to check how effectively oil recovery can be achieved. A comparative study was also done by injecting Triton X-100, which is one of the chemical surfactants. Since the bio-surfactant reduced surface and interfacial tension, oil could be easily recovered from the porous sand-packed column.
Keywords: Bio-surfactant, Bacteria, Interfacial tension, Sand column.
338 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products
Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad
Abstract:
The proposed method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on the complexation of Fe(II) with 1,10-phenanthroline (OP), a complexing reagent for Fe(II), immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. Then, the adsorbents were easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplifies the operation procedure and reduces the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 was obtained for Fe(II). The detection limit and linear range of this method for iron were 1.0 and 9.0-175 ng.mL-1, respectively. The relative standard deviation for five replicate determinations of 30.00 ng.mL-1 Fe2+ was 2.3%.
Keywords: Alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample.
337 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems
Authors: Vijaya K. Srivastava, Davide Spinello
Abstract:
This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, in the first phase, a local minimum of the function is found using the gradient approach coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first optimization procedure examines the 3^n integer combinations on the boundary of and within the hypercube encompassing the result from the first phase, while the second optimization procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum commences again in both optimization procedures using this new-found point as the starting vector. The search continues and is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those provided by the popular MS Excel Solver included in the MS Office suite, and with other published results.
Keywords: Constrained integer problems, enumerative search algorithm, heuristic algorithm, tunneling algorithm.
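The two second-phase devices can be sketched compactly: itertools.product generates the 3^n offset combinations, and a classical tunneling function poles out the known minimum. The toy objective and the pole strength lambda are hypothetical.

```python
# (a) 3^n neighborhood enumeration and (b) a classical tunneling function.
import itertools
import numpy as np

x_star = np.array([2.3, -1.7])                  # local minimum from phase one

def f(x):                                       # toy objective
    x = np.asarray(x, dtype=float)
    return float(np.sum((x - x_star) ** 2))

# (a) offsets in {-1, 0, +1} around the rounded continuous minimum.
candidates = [np.round(x_star) + np.array(d)
              for d in itertools.product((-1, 0, 1), repeat=len(x_star))]
best = min(candidates, key=f)
print("best integer point:", best, "f =", round(f(best), 3))

# (b) Tunneling: pole out the known minimum so that a descent restarted on T
# finds a point with f(x) ~ f(x*) on the other side of the barrier.
lam = 1.0
def T(x):
    return (f(x) - f(x_star)) / np.linalg.norm(np.asarray(x) - x_star) ** (2 * lam)
print("T at a distant point:", round(T([5.0, 5.0]), 4))
```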
336 Determination of the Pullout/Holding Strength at the Taper-Trunnion Junction of Hip Implants
Authors: Obinna K. Ihesiulor, Krishna Shankar, Paul Smith, Alan Fien
Abstract:
Excessive fretting wear at the taper-trunnion junction (trunnionosis) apparently contributes to the high failure rates of hip implants. Implant wear and corrosion lead to the release of metal particulate debris and the subsequent release of metal ions at the taper-trunnion surface. This results in a type of metal poisoning referred to as metallosis, whose consequences include osteolysis (bone loss), osteoarthritis (pain), aseptic loosening of the prosthesis and revision surgery. At follow-up after revision surgery, metal debris particles are commonly found in numerous locations. Background: A stable connection between the femoral ball head (taper) and stem (trunnion) is necessary to prevent relative motion and corrosion at the taper junction. Hence, the importance of component assembly cannot be over-emphasized. Therefore, the aim of this study is to determine the influence of head-stem junction assembly by press fitting, and of the subsequent disengagement/disassembly, on the connection strength between the taper ball head and stem. Methods: CoCr femoral heads were assembled with stainless steel stems (trunnions) by push-in, i.e. press fit, and disengaged by pull-out testing. The strength and stability of the connections were evaluated by measuring the head pull-out forces according to the ISO 7206-10 standard. Findings: The head-stem junction strength increases linearly with assembly force.
Keywords: Wear, modular hip prosthesis, taper head-stem, force assembly, force disassembly.
335 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidity, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called an ensemble further improves the model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
Keywords: Machine learning, liver cirrhosis, subgraph mining, supervised learning.
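The single-model versus ensemble comparison reported above can be mimicked on synthetic data. The sketch below is not the study's pipeline (FSM is omitted) and uses scikit-learn stand-ins for the feature space and models.

```python
# Single classifier vs. a soft-voting ensemble, scored by AUC on
# synthetic, imbalanced stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2322, n_features=40, n_informative=10,
                           weights=[0.85], random_state=0)  # imbalanced outcome
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

single = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ensemble = VotingClassifier(
    [("lr", LogisticRegression(max_iter=1000)),
     ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft").fit(X_tr, y_tr)

for name, model in [("single", single), ("ensemble", ensemble)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```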
334 Using the Minnesota Multiphasic Personality Inventory-2 and Mini Mental State Examination-2 in Cognitive Behavioral Therapy: Case Studies
Authors: Cornelia-Eugenia Munteanu
Abstract:
From a psychological perspective, psychopathology is the area of clinical psychology that has at its core psychological assessment and psychotherapy. In day-to-day clinical practice, psychodiagnosis and psychotherapy are used independently, according to their intended purpose and their specific methods of application. The paper explores how the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and Mini Mental State Examination-2 (MMSE-2) psychological tools contribute to enhancing the effectiveness of cognitive behavioral psychotherapy (CBT). This combined approach, psychotherapy in conjunction with assessment of personality and cognitive functions, is illustrated by two cases, a severe depressive episode with psychotic symptoms and a mixed anxiety-depressive disorder. The order in which CBT, MMPI-2, and MMSE-2 were used in the diagnostic and therapeutic process was determined by the particularities of each case. In the first case, the sequence started with psychotherapy, followed by the administration of blue form MMSE-2, MMPI-2, and red form MMSE-2. In the second case, the cognitive screening with blue form MMSE-2 led to a personality assessment using MMPI-2, followed by red form MMSE-2; reapplication of the MMPI-2 due to the invalidation of the first profile, and finally, psychotherapy. The MMPI-2 protocols gathered useful information that directed the steps of therapeutic intervention: a detailed symptom picture of potentially self-destructive thoughts and behaviors otherwise undetected during the interview. The memory loss and poor concentration were confirmed by MMSE-2 cognitive screening. This combined approach, psychotherapy with psychological assessment, aligns with the trend of adaptation of the psychological services to the everyday life of contemporary man and paves the way for deepening and developing the field.
Keywords: Assessment, cognitive behavioral psychotherapy, MMPI-2, MMSE-2, psychopathology.
333 Application of Voltage Stability Indices for Proper Placement of STATCOM under Load Increase Scenario
Authors: A. S. Telang, P. P. Bedekar
Abstract:
In today's world, electrical energy has become an indispensable component of all aspects of modern human life. Reliability, security and stability are the key aspects of any power system, and failure to meet any of these three results in a great impediment to modern life. Modern power systems are being subjected to heavily stressed conditions, leading to voltage stability problems. If voltage stability problems are not mitigated through proper voltage stability assessment methods, cascading events may occur, which may lead to voltage collapse or blackouts. Modern FACTS devices like the STATCOM are one of the measures to overcome blackout problems. As these devices are very costly, they must be installed at suitable locations, mostly at weak buses. Line voltage stability indices such as FVSI, Lmn and LQP play an important role in the identification of a weak bus. This paper presents an evaluation of these line stability indices for the assessment of reliable information about the closeness of the power system to voltage collapse. PSAT is a user-friendly MATLAB toolbox, of which continuation power flow (CPF) is an important feature that has been extensively used for the placement of the STATCOM to assess stability. The novelty of the present research work lies in changing the active and reactive load simultaneously at all the load buses under consideration. MATLAB code has been developed for this purpose and tested successfully on various standard IEEE test systems. The results for the standard IEEE 14-bus test system, specifically, are presented in this paper.
Keywords: Voltage stability analysis, voltage collapse, PSAT, CPF, VSI, FVSI, Lmn, LQP.
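The three line indices are computable from line and operating data. The sketch below uses the forms commonly given in the literature for FVSI, Lmn and LQP (values approaching 1.0 indicate proximity to collapse); the formulas' exact conventions and all numbers here are assumptions, not taken from the paper.

```python
# Line voltage stability indices in their commonly published forms.
import math

def line_indices(R, X, Vs, Ps, Qr, delta):
    """R, X: line impedance (pu); Vs: sending-end voltage (pu);
    Ps: sending-end P (pu); Qr: receiving-end Q (pu); delta: angle Vs-Vr (rad)."""
    Z = math.hypot(R, X)
    theta = math.atan2(X, R)                     # line impedance angle
    fvsi = 4 * Z**2 * Qr / (Vs**2 * X)
    lmn = 4 * X * Qr / (Vs * math.sin(theta - delta))**2
    lqp = 4 * (X / Vs**2) * (X * Ps**2 / Vs**2 + Qr)
    return fvsi, lmn, lqp

fvsi, lmn, lqp = line_indices(R=0.02, X=0.06, Vs=1.0, Ps=0.8, Qr=0.4, delta=0.05)
print(f"FVSI = {fvsi:.3f}, Lmn = {lmn:.3f}, LQP = {lqp:.3f}")
```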
332 Markov Chain Based QoS Support for Wireless Body Area Network Communication in Health Monitoring Services
Authors: R. A. Isabel, E. Baburaj
Abstract:
Wireless Body Area Networks (WBANs) are essential for real-time health monitoring of patients and for diagnosing many diseases. WBANs comprise many sensors that monitor a large range of ambient conditions. Quality of Service (QoS) is a key challenge in WBANs, because the various state information of neighboring nodes has to be monitored in an accurate manner. However, energy consumption increases while predicting and maintaining exact information in highly dynamic environments. In order to reduce energy consumption and end-to-end delay, a Markov Chain Based Quality of Service Support (MC-QoSS) method is designed for the health monitoring services of WBAN communication. Energy consumption is reduced by forming a Markov chain with high-energy nodes in the sensor network communication path. The low-energy sensor nodes are removed using transition probabilities, in order to reduce end-to-end delay. High-energy nodes form the chain structure of the corresponding path to enhance communication. After choosing the communication path through high-energy nodes, the packets are sent from the source node to the sink node with a higher packet delivery ratio. The simulation results show that the MC-QoSS method improves the packet delivery ratio and reduces energy consumption with minimum end-to-end delay, compared to existing methods.
Keywords: Wireless body area networks, quality of service, Markov chain, health monitoring services.
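The chain-forming idea can be sketched schematically: prune nodes below an energy threshold and follow the most probable energy-proportional transition toward the sink. This is an illustration of the concept, not the MC-QoSS protocol; the topology, energies and threshold are hypothetical.

```python
# Building a source-to-sink chain of high-energy nodes via
# energy-proportional transition probabilities.
import numpy as np

energy = {0: 0.9, 1: 0.2, 2: 0.8, 3: 0.7, 4: 1.0}   # node: residual energy
links = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}  # 0 = source, 4 = sink
THRESHOLD = 0.5

def next_hop(node):
    """Pick the neighbor with the highest energy-proportional probability."""
    nbrs = [n for n in links[node] if energy[n] >= THRESHOLD]  # prune low energy
    if not nbrs:
        return None
    p = np.array([energy[n] for n in nbrs])
    p = p / p.sum()                         # transition probabilities
    return nbrs[int(np.argmax(p))]

path, node = [0], 0
while node != 4:
    node = next_hop(node)
    if node is None:
        break
    path.append(node)
print("chain of high-energy nodes:", path)   # -> [0, 2, 3, 4]
```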
331 Rotor Bearing System Analysis Using the Transfer Matrix Method with Thickness Assumption of Disk and Bearing
Authors: Omid Ghasemalizadeh, Mohammad Reza Mirzaee, Hossein Sadeghi, Mohammad Taghi Ahmadian
Abstract:
There are many different ways to find the natural frequencies of a rotating system. One of the most effective methods, used for its precision and correctness, is the application of the transfer matrix. With this method, the entire continuous system is subdivided and the corresponding differential equation can be stated in matrix form. To analyze the shaft that is the subject of this paper, the rotor is divided into several elements along the shaft, each having its own mass and moment of inertia, which makes it possible to define the transfer matrix. Choosing a larger number of elements makes the matrix larger and, as a result, yields more accurate answers. In this paper, the dynamics of a rotor-bearing system are analyzed, considering the gyroscopic effect. To increase the accuracy of the modeling, the thicknesses of the disk and bearings are also taken into account, which leads to a more complicated matrix to be solved. Including these parameters in the modeling changes the results completely, and these differences are shown in the results. As stated above, defining the transfer matrix to reach the natural frequencies of the probed system requires introducing the elements. For the boundary conditions of these elements, the bearings at the ends of the shaft are modeled as equivalent springs and dampers for the discretized system, while a continuous model is used for the shaft. With the above considerations and using the transfer matrix, exact results are obtained from the calculations. The results show that increasing the thickness of the bearing decreases the amplitude of vibration, while the stiffness of the shaft and the natural frequencies of the system grow. Consequently, it is easily understood that ignoring the influence of bearing and disk thicknesses would produce unrealistic answers.
Keywords: Rotor system, disk and bearing thickness, transfer matrix, amplitude.
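A stripped-down transfer matrix calculation conveys the mechanics of the method. The sketch below is a Myklestad/Prohl-style simplification, without the paper's gyroscopic and thickness effects: lumped masses on a massless pinned-pinned shaft, with natural frequencies located where the boundary-condition determinant changes sign. All data are hypothetical.

```python
# Transfer matrix method for lateral vibration; state vector [y, theta, M, V].
import numpy as np

EI = 2.0e4          # bending stiffness, N*m^2
L, n = 1.0, 10      # shaft length (m) split into n segments
m_station = 5.0     # lumped mass per interior station, kg
l = L / n

def field(l, EI):
    """Massless elastic segment: integrates y'=theta, theta'=M/EI, M'=V."""
    return np.array([[1, l, l**2/(2*EI), l**3/(6*EI)],
                     [0, 1, l/EI,        l**2/(2*EI)],
                     [0, 0, 1,           l],
                     [0, 0, 0,           1]])

def point(m, w):
    """Lumped mass: its inertia force jumps the shear, V+ = V- + m*w^2*y."""
    P = np.eye(4)
    P[3, 0] = m * w**2
    return P

def bc_det(w):
    T = np.eye(4)
    for _ in range(n - 1):
        T = point(m_station, w) @ field(l, EI) @ T
    T = field(l, EI) @ T
    # Pinned-pinned: y = M = 0 at both ends; unknowns are theta, V at the left.
    return T[0, 1]*T[2, 3] - T[0, 3]*T[2, 1]

ws = np.linspace(1.0, 2000.0, 4000)
d = np.array([bc_det(w) for w in ws])
roots = ws[:-1][np.sign(d[:-1]) != np.sign(d[1:])]   # sign changes of det
print("natural frequencies (rad/s):", np.round(roots[:3], 1))
```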
330 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function
Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos
Abstract:
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to be able to predict which conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyze, with simulated data, the computational problems associated with the parameters, an issue of great importance in its application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler; given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trend functions, bi-parameter Weibull density function.
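One plausible reading of "trend proportional to the bi-Weibull density" can be simulated directly. The Euler-Maruyama sketch below makes the drift proportional to a two-parameter Weibull pdf; the exact model and every parameter value are assumptions rather than the paper's specification.

```python
# Euler-Maruyama simulation of dX = h(t)*X dt + sigma*X dW, where h(t) is
# a two-parameter Weibull density (one reading of the stated trend).
import numpy as np

def weibull_pdf(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-(t / lam) ** k)

rng = np.random.default_rng(7)
k, lam, sigma = 2.0, 5.0, 0.15          # hypothetical parameters
T, steps, n_paths = 10.0, 1000, 5
dt = T / steps

x = np.ones(n_paths)                    # X(0) = 1
for i in range(steps):
    t = (i + 1) * dt
    drift = weibull_pdf(t, k, lam) * x  # trend proportional to the pdf
    x += drift * dt + sigma * x * rng.normal(size=n_paths) * np.sqrt(dt)

print("X(T) sample paths:", np.round(x, 3))
```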
329 Efficiency of Wood Vinegar Mixed with Some Plants Extract against the Housefly (Musca domestica L.)
Authors: U. Pangnakorn, S. Kanlaya
Abstract:
The efficiency of wood vinegar mixed with each of three plant extracts, citronella grass (Cymbopogon nardus), neem seed (Azadirachta indica A. Juss), and yam bean seed (Pachyrhizus erosus Urb.), was tested against second instar larvae of the housefly (Musca domestica L.). Steam distillation was used for extraction of the citronella grass, while neem and yam bean were simply extracted by fermentation with ethyl alcohol. Toxicity was evaluated in the laboratory based on two methods of larvicidal bioassay: the topical application method (contact poison) and the feeding method (stomach poison). Larval mortality was observed daily, and larval survival was recorded until the surviving larvae developed into pupae and adults. The study found that the treatment of wood vinegar mixed with citronella grass showed the highest larval mortality by the topical application method (50.0%) and by the feeding method (80.0%). However, the treatment of wood vinegar mixed with neem seed showed the longest pupal duration, 25 days and 32 days for the topical application method and the feeding method, respectively. In addition, the larval duration of treated M. domestica larvae was extended to 13 days for the topical application method and 11 days for the feeding method. Thus, the feeding method gave higher efficiency compared with the topical application method.
Keywords: Housefly (Musca domestica L.), neem seed (Azadirachta indica), citronella grass (Cymbopogon nardus), yam bean seed (Pachyrhizus erosus), mortality.
328 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of the eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. In this approach, we use the fact that the large and small eigenvalues of the covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT), where a sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which uses the geometrical symmetry property. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, and comparisons with the Hough transform, its variants and other tangent-based methods, are reported.
Keywords: Circular Hough transform, covariance matrix, eigenvalues, ellipse detection, raster scan algorithm.
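The covariance-eigenvalue step is easy to demonstrate: for edge points sampled uniformly in the parametric angle of an ellipse, the eigenvalues λ of the covariance matrix give the semi-axes as sqrt(2λ), and the centroid estimates the centre. The synthetic data below illustrate this; the CHT and raster scan stages are not reproduced here.

```python
# Ellipse centre and axes recovered from the covariance of noisy edge points.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, cx, cy, tilt = 8.0, 3.0, 10.0, -4.0, np.deg2rad(30)

phi = rng.uniform(0, 2 * np.pi, 2000)                    # parametric angle
pts = np.stack([a_true * np.cos(phi), b_true * np.sin(phi)], axis=1)
rot = np.array([[np.cos(tilt), -np.sin(tilt)],
                [np.sin(tilt),  np.cos(tilt)]])
pts = pts @ rot.T + [cx, cy] + rng.normal(0, 0.05, (2000, 2))  # noisy edge

centre = pts.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(pts.T))             # ascending eigenvalues
b_est, a_est = np.sqrt(2 * evals)                        # minor, major semi-axes
print(f"centre ~ {centre.round(2)}, a ~ {a_est:.2f}, b ~ {b_est:.2f}")
```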
327 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems which can be used on heterogeneous platforms. Traditionally, the problem of solving linear systems does not scale well on clusters containing multiple Central Processing Units (multi-CPU clusters) or clusters containing multiple Graphics Processing Units (multi-GPU clusters); for example, most attempts to implement the classical conjugate gradient method at best completed in the same amount of time as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration, instead of two for standard CG (Conjugate Gradient). The standard and pipelined CG methods need the vector entries generated by the current GPU and other GPUs for the matrix-vector product, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communications between parallel parts of the algorithms; additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, together with asynchronous calculations and communications and load balancing between the CPU and GPU, allows for scalability when solving large linear systems. The algorithm is implemented with the combined use of the MPI, OpenMP and CUDA technologies. We show that an almost optimal speedup on 8 CPUs/2 GPUs may be reached (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
Keywords: Conjugate Gradient, GPU, parallel programming, pipelined algorithm.
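The one-synchronization structure of pipelined CG can be seen in a single-process numpy sketch of the Ghysels-Vanroose recurrences (unpreconditioned). In a real multi-GPU run the two dot products fuse into one reduction and overlap with the matrix-vector product; here they simply execute in sequence.

```python
# Pipelined conjugate gradient (unpreconditioned, single process).
import numpy as np

def pipelined_cg(A, b, x, tol=1e-10, maxiter=500):
    r = b - A @ x
    u = r.copy()                 # u = M^-1 r; M = I here (no preconditioner)
    w = A @ u
    z = q = s = p = np.zeros_like(b)
    alpha = gamma_old = 1.0
    for it in range(maxiter):
        gamma = r @ u            # both reductions can be fused into a
        delta = w @ u            # single synchronization point...
        n = A @ w                # ...and overlapped with this matvec
        if it == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
        z = n + beta * z         # recurrences replace the second sync of CG
        q = w + beta * q         # (with M = I, m = w and hence q = s)
        s = w + beta * s
        p = u + beta * p
        x = x + alpha * p
        r = r - alpha * s
        u = u - alpha * q
        w = w - alpha * z
        gamma_old = gamma
        if np.linalg.norm(r) < tol:
            break
    return x, it + 1

rng = np.random.default_rng(3)
Q = rng.normal(size=(200, 200))
A = Q @ Q.T + 200 * np.eye(200)          # SPD test matrix
b = rng.normal(size=200)
x, its = pipelined_cg(A, b, np.zeros(200))
print(f"{its} iterations, residual = {np.linalg.norm(b - A @ x):.2e}")
```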