Search results for: tracing methods.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4043

383 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly applied to DL-based apps because they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in Android static analyzers. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
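
To illustrate the core idea behind this kind of static information flow analysis, the sketch below propagates taint from sensitive source APIs to sink APIs over a toy call graph. This is a generic illustration, not DLtrace's actual algorithm; the graph, API names and helper structures are invented for the example.

```python
# Minimal taint-propagation sketch: report (source, sink) pairs where a
# sensitive sink is reachable from a sensitive source in the call graph.
# NOT DLtrace's algorithm; all names below are hypothetical.
from collections import deque

# Toy call graph: caller -> callees (edges as a static analyzer might extract).
CALL_GRAPH = {
    "MainActivity.onCreate": ["Camera.takePicture"],
    "Camera.takePicture": ["TFLiteModel.runInference"],  # third-party DL API
    "TFLiteModel.runInference": ["HttpClient.post"],
    "Logger.debug": [],
}

SOURCES = {"Camera.takePicture"}   # sensitive data enters here
SINKS = {"HttpClient.post"}        # data may leak here

def find_leaks(graph, sources, sinks):
    """Return (source, sink) pairs where a sink is reachable from a source."""
    leaks = []
    for src in sources:
        seen, queue = {src}, deque([src])
        while queue:                      # breadth-first reachability
            node = queue.popleft()
            if node in sinks:
                leaks.append((src, node))
            for callee in graph.get(node, []):
                if callee not in seen:
                    seen.add(callee)
                    queue.append(callee)
    return leaks

print(find_leaks(CALL_GRAPH, SOURCES, SINKS))
# [('Camera.takePicture', 'HttpClient.post')] -> a potential privacy leak
```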

Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.

382 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

Authors: Mamta Garg

Abstract:

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression, images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet and within applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and peak signal-to-noise ratio (PSNR). The method analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both chrominance components are normally downsampled simultaneously; in this paper, we compare the results when compression is done by downsampling a single chrominance component. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled, but that the peak signal-to-noise ratio is higher when chrominance red is downsampled than when chrominance blue is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visible differences under both methods.
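
As an illustration of the comparison described above, the following sketch converts an RGB image to YCbCr, downsamples only Cb or only Cr by 2x2 block averaging (standing in for JPEG's chroma subsampling), and reports the PSNR of each reconstruction. It is a minimal sketch using a synthetic image in place of hats.jpg; the DCT and quantization stages of full JPEG are omitted.

```python
# Compare PSNR when only Cb vs. only Cr is downsampled (BT.601 conversion).
import numpy as np

def rgb_to_ycbcr(rgb):  # BT.601 full-range conversion
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def subsample(c):  # 2x2 block average, then upsample back (4:2:0-style)
    avg = c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))
    return avg.repeat(2, axis=0).repeat(2, axis=1)

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

img = np.random.default_rng(0).uniform(0, 255, (256, 256, 3))  # stand-in for hats.jpg
y, cb, cr = rgb_to_ycbcr(img)
only_cb = ycbcr_to_rgb(y, subsample(cb), cr)   # downsample Cb only
only_cr = ycbcr_to_rgb(y, cb, subsample(cr))   # downsample Cr only
print(f"PSNR, Cb downsampled: {psnr(img, only_cb):.2f} dB")
print(f"PSNR, Cr downsampled: {psnr(img, only_cr):.2f} dB")
```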

Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.

381 Effect of Cold, Warm or Contrast Therapy on Controlling Knee Osteoarthritis Associated Problems

Authors: Amal E. Shehata, Manal E. Fareed

Abstract:

Osteoarthritis (OA) is the most prevalent and debilitating form of arthritis and can be defined as a degenerative condition affecting the synovial joint. Patients suffering from osteoarthritis often complain of a dull aching pain on movement. Physical agents such as heat or cold therapy can fight the painful process when correctly indicated and used. Aim: This study was carried out to compare the effect of cold, warm and contrast therapy on controlling knee osteoarthritis associated problems. Setting: The study was carried out in the orthopedic outpatient clinics of Menoufia University and teaching hospitals, Egypt. Sample: A convenience sample of 60 adult patients with unilateral knee osteoarthritis. Tools: Three tools were utilized to collect the data. Tool I: an interviewing questionnaire, comprising three parts covering sociodemographic data, medical data and adverse effects of the treatment protocol. Tool II: the Knee Injury and Osteoarthritis Outcome Score (KOOS), which consists of five main parts. Tool III: a 0-10 numeric pain rating scale. Results: The total knee symptom score decreased from moderate symptoms pre-intervention to mild symptoms after the warm and contrast methods of therapy, but contrast therapy had a more significant effect in reducing knee symptoms and pain than the other methods. Conclusions: All three methods of therapy resulted in improvement in all knee symptoms and pain, but the most appropriate treatment protocol for relieving symptoms and pain was contrast therapy.

Keywords: Knee Osteoarthritis, Cold, Warm and Contrast Therapy.

380 Evaluation of Chiller Power Consumption Using Grey Prediction

Authors: Tien-Shun Chan, Yung-Chung Chang, Cheng-Yu Chu, Wen-Hui Chen, Yuan-Lin Chen, Shun-Chong Wang, Chang-Chun Wang

Abstract:

Taiwan imports 98% of the energy it needs, and the prices of petroleum and electricity have been increasing. In addition, facility capacity, the amount of electricity generated and consumed, and the number of Taiwan Power Company customers have continued to increase. For these reasons, energy conservation has become an important topic. In the past, linear regression was used to establish power consumption models for chillers. In this study, grey prediction is used to evaluate the power consumption of a chiller so as to lower the total power consumption at peak load (so that the relevant power providers do not need to keep increasing their power generation capacity and facility capacity). In grey prediction, only a few numerical values (at least four) are needed to establish a power consumption model for a chiller. If the part load ratio (PLR), the temperatures of the supply and return chilled water, and the temperatures of the supply and return cooling water are taken into consideration, quite accurate results (with accuracy close to 99% for short-term predictions) may be obtained. Through such methods, we can predict whether the power consumption at peak load will exceed the contract power capacity signed between the corresponding entity and Taiwan Power Company. If the predicted peak-load power consumption exceeds the contracted capacity, the temperature of the supply chilled water may be adjusted so as to reduce the PLR and hence lower the power consumption.
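
A minimal sketch of the GM(1,1) grey prediction model underlying this kind of evaluation is given below, assuming a plain univariate series; the sample demand values are invented, and the paper's actual model additionally considers the PLR and the chilled- and cooling-water temperatures.

```python
# GM(1,1) grey prediction: fit from a handful of observations (at least
# four, as noted above) and forecast the next value.
import numpy as np

def gm11(x0, horizon=1):
    """Fit GM(1,1) to series x0 and forecast `horizon` steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing/grey-input coefficients
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO

kw = [812.0, 835.0, 828.0, 851.0]     # hypothetical chiller demand (kW)
pred = gm11(kw, horizon=1)
print("fitted:", np.round(pred[:4], 1), "next:", round(pred[-1], 1))
```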

Keywords: Grey system theory, grey prediction, chiller.

379 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, the size of the networks has become increasingly large to meet the demands of practical applications, which poses a significant challenge to constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are reviewed, and counterparts on Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are compared, alongside our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives to explore the opportunities and challenges for future research, and offer an outlook on the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

378 Inquiry on the Improvement Teaching Quality in the Classroom with Meta-Teaching Skills

Authors: Shahlan Surat, Saemah Rahman, Saadiah Kummin

Abstract:

When teachers reflect on and evaluate whether their teaching methods actually have an impact on students' learning, they adjust their practices accordingly. This inevitably improves their students' learning and performance. The meta-teaching approach can invigorate and create a passion for teaching, and thus helps to increase commitment to and love for the teaching profession. This study was conducted to determine the level of metacognitive thinking of teachers in the process of teaching and learning in the classroom. Teachers' metacognitive thinking includes the use of metacognitive knowledge, which comprises three types: declarative, procedural and conditional. The ability of teachers to plan, monitor and evaluate the teaching process can also be determined. The study was conducted on 377 graduate teachers in the Klang Valley, Malaysia, selected by stratified sampling. The 24-item metacognitive teaching inventory is called InKePMG (Teacher Indicators of Effectiveness Meta-Teaching). The results showed high mean levels for two components of metacognitive knowledge, declarative knowledge (mean = 4.16) and conditional knowledge (mean = 4.11), whereas the mean for procedural knowledge was 4.00 (moderately high). Similarly, monitoring (mean = 4.11) and evaluating (mean = 4.00) scored high, while planning (mean = 4.00) scored moderately high among teachers. In conclusion, this study shows that planning and procedural knowledge are important elements in improving the quality of teachers' classroom teaching. Thus, the researchers recommend that further studies focus on training programs for teachers on metacognitive skills and also on developing creative thinking among teachers.

Keywords: Metacognitive thinking skills, procedural knowledge, conditional knowledge, declarative knowledge, meta-teaching and regulation of cognition.

377 Possible Number of Dwelling Units Using Waste Plastic Bottle for Construction

Authors: Dibya Jivan Pati, Kazuhisa Iki, Riken Homma

Abstract:

Unlike other metro cities of India, Bhubaneswar, the capital city of Odisha, is expected to have reached the 1-million population mark by now. The demand for dwelling units, mostly among the urban poor belonging to the Economically Weaker Section (EWS) and Low Income Group (LIG), is becoming a challenge due to high housing costs and rents. Moreover, as the population increases, solid waste generation increases as well, affecting the environment because of the inefficiency of waste collection by local government bodies. Methods of utilizing solid waste, especially in the form of plastic bottles, glass bottles and metal cans (PGM), are now widely used as alternative materials for the construction of low-cost buildings by Non-Government Organizations (NGOs) in developing countries like India to help the urban poor afford shelter. The application of discarded plastic bottles in the construction of a single dwelling significantly reduces the overall cost of construction, by as much as 14% compared to traditional construction materials. Therefore, considering this cost-benefit result, it is possible to provide housing to the EWS and LIG at an affordable price. In this paper, we estimated the quantity of plastic bottles generated in Bhubaneswar, which further helped to estimate the number of single dwelling units that can be constructed on a yearly basis so as to ease further housing shortage. The estimation results can be used in practice by local government and NGOs for planning and managing low-cost housing.

Keywords: Construction, dwelling unit, plastic bottle, solid waste generation, groups.

376 Distinctive Features of Legal Relations in the Area of Subsoil Use, Renewal and Protection in Ukraine

Authors: N. Maksimentseva

Abstract:

The issue of public administration in subsoil use, renewal and protection is of high importance for Ukraine, since it is strongly linked to the energy security of the state and should enable the people of Ukraine to efficiently exercise their proprietary rights towards natural resources and the redistribution of national wealth. As stipulated in Article 11 of the Subsoil Code of Ukraine (the Code), the authorities that administer the industry are limited to central executive bodies and local governments. In particular, the Code stipulates that Ukraine's Cabinet of Ministers carries out public administration in geological exploration, production and protection of subsoil. Other state bodies of public administration include the central public authority responsible for state environmental protection policies; the central public authority in charge of implementing state geological exploration and efficient subsoil use policies; and the central authority in charge of state health and safety control policies. There are also public authorities in the Autonomous Republic of Crimea, local executive bodies, and other state and local self-government authorities acting in compliance with the laws of Ukraine. This article is devoted to the analysis of legal relations in the area of public administration of subsoil use, renewal and protection in Ukraine. The main approaches to studying the essence of legal relations in this area, as well as its tasks, functions and methods, are analyzed. The article concludes that legal relationships in the field of public administration of subsoil use, renewal and protection are characterized by the specifics of their task: the development of natural resources.

Keywords: Legal relations, public administration, Subsoil Code of Ukraine, subsoil use, renewal and protection.

375 Impact of Climate Shift on Rainfall and Temperature Trend in Eastern Ganga Canal Command

Authors: Radha Krishan, Deepak Khare, Bhaskar R. Nikam, Ayush Chandrakar

Abstract:

Every irrigation project is planned considering long-term historical climatic conditions; however, the prompt climatic shift and change has produced circumstances that were inconceivable in the past. Considering this fact, a scrutiny of rainfall and temperature trends has been carried out over the command area of the Eastern Ganga Canal project for the pre-climate shift and post-climate shift periods in the present study. The non-parametric Mann-Kendall and Sen's methods have been applied to study the trends in annual rainfall, seasonal rainfall, annual rainy days, monsoonal rainy days, average annual temperature and seasonal temperature. The results showed decreasing trends of 48.11 and 42.17 mm/decade in annual rainfall and of 79.78 and 49.67 mm/decade in monsoon rainfall in the pre-climate and post-climate shift periods, respectively. A decreasing trend of 1 to 4 days/decade has been observed in annual rainy days from the pre-climate to the post-climate shift period. Trends in temperature revealed significant decreasing trends in annual (-0.03 ºC/yr), Kharif (-0.02 ºC/yr), Rabi (-0.04 ºC/yr) and summer (-0.02 ºC/yr) season temperatures during the pre-climate shift period, whereas a significant increasing trend (0.02 ºC/yr) has been observed in all four parameters during the post-climate shift period. These results will help project managers understand the climate shift and lead them to develop alternative water management strategies.
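
For reference, a compact sketch of the two non-parametric tests named above, the Mann-Kendall trend test and Sen's slope estimator, is given below; the rainfall series is invented for illustration, and the variance formula assumes no tied values.

```python
# Mann-Kendall Z statistic (two-sided p-value) and Sen's slope estimator.
import numpy as np
from math import erf, sqrt

def mann_kendall_z(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance assuming no ties
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return z, p

def sens_slope(x):
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)                      # median of pairwise slopes

rain = [780, 772, 791, 760, 755, 749, 762, 741, 735, 728]  # mm/yr, hypothetical
z, p = mann_kendall_z(rain)
print(f"Z = {z:.2f}, p = {p:.3f}, Sen's slope = {sens_slope(rain):.2f} mm/yr")
```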

Keywords: Climate shift, rainfall trend, temperature trend, Mann-Kendall test, Sen's slope estimator, Eastern Ganga Canal command.

374 Methodologies for Crack Initiation in Welded Joints Applied to Inspection Planning

Authors: Guang Zou, Kian Banisoleiman, Arturo González

Abstract:

Crack initiation and propagation threaten the structural integrity of welded joints, and inspections are normally assigned based on crack propagation models. However, the approach based on crack propagation models may not be applicable to some high-quality welded joints, because the initial flaws in them may be so small that it may take a long time for the flaws to develop to a detectable size. This raises a concern regarding the inspection planning of high-quality welded joints, as there is no generally accepted approach for modeling the whole fatigue process including the crack initiation period. In order to address the issue, this paper reviews treatment methods for the crack initiation period and the initial crack size in crack propagation models applied to inspection planning. Generally, there are four approaches: 1) neglecting the crack initiation period and fitting a probabilistic distribution for initial crack size based on statistical data; 2) extrapolating the crack propagation stage back to a very small fictitious initial crack size, so that the whole fatigue process can be modeled by crack propagation models; 3) assuming a fixed detectable initial crack size and fitting a probabilistic distribution for crack initiation time based on specimen tests; and 4) modeling the crack initiation and propagation stages separately using small crack growth theories and the Paris law or similar models. The conclusion is that, in view of the trade-off between accuracy and computational effort, calibrating a small fictitious initial crack size to S-N curves is the most efficient approach.
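
To make approach 2 concrete, the sketch below integrates the Paris law, da/dN = C(ΔK)^m with ΔK = YΔσ√(πa), from a fictitious initial crack size a0 to a critical size, showing how strongly the predicted life depends on the choice of a0. The constants C, m, Y and the stress range are illustrative values, not calibrated to any S-N curve.

```python
# Cycles to failure from Paris-law integration: N = integral da / (C dK^m).
import numpy as np

C, m = 1e-12, 3.0          # Paris constants (illustrative; m/cycle, MPa*sqrt(m))
Y = 1.12                   # geometry factor (edge crack, assumed constant)
dsigma = 80.0              # stress range, MPa

def cycles_to_failure(a0, ac, steps=20000):
    a = np.linspace(a0, ac, steps)
    dK = Y * dsigma * np.sqrt(np.pi * a)          # stress intensity factor range
    return np.trapz(1.0 / (C * dK ** m), a)      # numerical integration over a

for a0_mm in (0.05, 0.1, 0.5):                    # fictitious initial sizes
    n = cycles_to_failure(a0_mm * 1e-3, 0.02)     # grow to ac = 20 mm
    print(f"a0 = {a0_mm} mm -> N = {n:.3e} cycles")
```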

Keywords: Crack initiation, fatigue reliability, inspection planning, welded joints.

373 Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and sub-characteristics were identified from the ISO/IEC 25010 quality model. The multidimensional structure of the quality model is based on characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, and their associated sub-characteristics. Random numbers are generated to establish the decision maker's importance weights for each sub-characteristic. Random numbers are also generated to establish the decision matrix of the decision maker's final scores for each software product against each sub-characteristic. Thus, objective criteria importance weights and index scores for the datasets were obtained from the random numbers. In the proposed model, five different software product quality evaluation datasets under three different weight vectors were applied to the multiple criteria decision making analysis method preference analysis for reference ideal solution (PARIS), for comparison and a sensitivity analysis procedure. This study contributes to a better understanding of the application of MCDMA methods and ISO/IEC 25010 quality model guidelines in the software product quality evaluation process.
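
A brief sketch of the random-data setup described above is given below: normalized random importance weights and a random decision matrix of product scores. A plain weighted-sum ranking stands in for the PARIS aggregation, whose exact formulation is not reproduced here; the dimensions and seed are arbitrary.

```python
# Random weights and decision matrix, with a simple weighted-sum ranking
# as a stand-in aggregation (not the PARIS method itself).
import numpy as np

rng = np.random.default_rng(42)
n_products, n_subchars = 5, 8          # e.g., 8 ISO/IEC 25010 sub-characteristics

weights = rng.random(n_subchars)
weights /= weights.sum()               # importance weights, normalized to sum to 1

scores = rng.uniform(1, 9, size=(n_products, n_subchars))  # decision matrix

overall = scores @ weights             # weighted aggregate score per product
ranking = np.argsort(-overall)         # best product first
print("weights:", np.round(weights, 3))
print("ranking (product indices):", ranking, "scores:", np.round(overall, 2))
```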

Keywords: ISO/IEC 25010 quality model, multiple criteria decisions making, multiple criteria decision making analysis, MCDMA, PARIS, Software Product Quality Evaluation Model, Software Product Quality Evaluation, Software Evaluation, Software Selection, Software

372 Environmental Decision Making Model for Assessing On-Site Performances of Building Subcontractors

Authors: Buket Metin

Abstract:

Buildings impose a variety of loads on the environment due to activities performed at each stage of the building life cycle. Construction is the first stage that affects both the natural and built environments at different steps of the process, which can be defined as the transportation of materials within the construction site, the formation and preparation of materials on-site, and the application of materials to realize the building subsystems. All of these steps require the use of technology, which varies based on the facilities that contractors and subcontractors have. Hence, the environmental consequences of the construction process should be tackled by focusing on the construction technology options used in every step of the process. This paper presents an environmental decision-making model for assessing the on-site performances of subcontractors based on the construction technology options they can supply. First, construction technologies, which comprise information, tools and methods, are classified. Then, environmental performance criteria are set forth related to resource consumption, ecosystem quality, and human health issues. Finally, the model is developed based on the relationships between the construction technology components and the environmental performance criteria. The Fuzzy Analytical Hierarchy Process (FAHP) method is used for weighting the environmental performance criteria according to the environmental priorities of the decision-maker(s), while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking the on-site environmental performances of subcontractors using quantitative data related to the construction technology components. Thus, the model aims to give decision-maker(s) insight into the environmental consequences of the construction process and an opportunity to improve the overall environmental performance of construction sites.
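
For the ranking step, a self-contained TOPSIS sketch is given below. The subcontractor scores and criterion weights (standing in for FAHP-derived weights) are invented, and all criteria are treated as benefit criteria for simplicity.

```python
# TOPSIS: rank alternatives by relative closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns)."""
    X = np.asarray(matrix, dtype=float)
    X = X / np.linalg.norm(X, axis=0)            # vector normalization per criterion
    V = X * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)               # relative closeness in [0, 1]

# 3 subcontractors x 3 environmental criteria (resource use, ecosystem, health).
scores = [[7, 5, 8],
          [6, 8, 6],
          [9, 4, 7]]
w = np.array([0.5, 0.3, 0.2])                    # FAHP-style weights (sum to 1)
benefit = np.array([True, True, True])
c = topsis(scores, w, benefit)
print("closeness:", np.round(c, 3), "best subcontractor:", int(np.argmax(c)))
```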

Keywords: Construction process, construction technology, decision making, environmental performance, subcontractors.

371 Dynamic Analysis of Porous Media Using Finite Element Method

Authors: M. Pasbani Khiavi, A. R. M. Gharabaghi, K. Abedi

Abstract:

The mechanical behavior of porous media is governed by the interaction between the solid skeleton and the fluid existing inside the pores. The interaction occurs through the interface of grains and fluid. Traditional analysis methods for porous media, based on effective stress and Darcy's law, are unable to account for these interactions. For an accurate analysis, the porous medium is represented as a fluid-filled porous solid on the basis of the Biot theory of wave propagation in poroelastic media. In the Biot formulation, the equations of motion of the soil mixture are coupled with the global mass balance equations to describe the realistic behavior of porous media. Because of irregular geometry, the domain is generally treated as an assemblage of finite elements. In this investigation, the numerical formulation of the field equations governing the dynamic response of fluid-saturated porous media is analyzed and employed for the study of transient wave motion. A finite element model is developed and implemented in a computer code called DYNAPM for the dynamic analysis of porous media. The weighted residual method with 8-node elements is used to develop the finite element model, and the analysis is carried out in the time domain considering dynamic excitation and gravity loading. A Newmark time integration scheme, an unconditionally stable implicit method, is used to solve the time-discretized equations. Finally, some numerical examples are presented to show the accuracy and capability of the developed model for a wide variety of porous media behaviors.
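
For reference, the sketch below shows the unconditionally stable implicit Newmark scheme (average acceleration, β = 1/4, γ = 1/2) on a small generic mass-damping-stiffness system rather than the full Biot poroelastic formulation; the two-DOF system and load are invented for illustration.

```python
# Newmark-beta time integration for M a + C v + K u = F(t).
import numpy as np

def newmark(M, C, K, F, dt, n_steps, beta=0.25, gamma=0.5):
    ndof = M.shape[0]
    u = np.zeros(ndof); v = np.zeros(ndof)
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ u)   # initial acceleration
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)  # effective stiffness
    hist = [u.copy()]
    for i in range(1, n_steps + 1):
        t = i * dt
        rhs = (F(t)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        hist.append(u.copy())
    return np.array(hist)

# Two-DOF toy system under a suddenly applied load on the second DOF.
M = np.diag([2.0, 1.0]); C = 0.05 * np.eye(2)
K = np.array([[400.0, -200.0], [-200.0, 200.0]])
resp = newmark(M, C, K, lambda t: np.array([0.0, 10.0]), dt=0.01, n_steps=500)
print("peak displacement of DOF 2:", resp[:, 1].max())
```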

Keywords: Dynamic analysis, interaction, porous media, time domain.

370 Occurrence of Foreign Matter in Food: Applied Identification Method - Association of Official Agricultural Chemists (AOAC) and Food and Drug Administration (FDA)

Authors: E. C. Mattos, V. S. M. G. Daros, R. Dal Col, A. L. Nascimento

Abstract:

The aim of this study is to present the results of a retrospective survey of the foreign matter found in foods analyzed at the Adolfo Lutz Institute from July 2001 to July 2015. All the analyses were conducted according to the official methods described by the Association of Official Agricultural Chemists (AOAC) for micro-analytical procedures and by the Food and Drug Administration (FDA) for macro-analytical procedures. The results showed that flours, cereals and derivatives such as baking and pasta products were the types of food in which foreign matter was found most frequently, followed by condiments and teas. Fragments of stored-grain insects, their larvae, nets, excrement, dead mites and rodent excrement were the foreign matter most often found in food. Foreign matter that can pose a physical risk to the consumer's health, such as metal, stones, glass and wood, was found only rarely. Miscellaneous items (shell, sand, dirt and seeds) were also reported. Many extraneous materials are considered unavoidable since they are inherent to the product itself, such as insect fragments in grains. In contrast, avoidable extraneous materials are less tolerated because they are preventable with Good Manufacturing Practices. The conclusion of this work is that, although most extraneous materials found in food are considered unavoidable, it is necessary to maintain Good Manufacturing Practices throughout food processing, as well as constant surveillance of the production process, in order to avoid incidents that may lead to the occurrence of these extraneous materials in food.

Keywords: Food contamination, extraneous materials, foreign matter, surveillance.

369 The Effect of CPU Location in Total Immersion of Microelectronics

Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson

Abstract:

Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead; thus energy use can be reduced by improving the cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger and full immersion of the microelectronics. This study quantifies the improvements in heat transfer, specifically for the case of immersed microelectronics, by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection in the fixed enclosure filled with dielectric liquid, and forced convection in the water that is pumped through the water jacket. The model in this study is validated against published numerical and experimental work and shows good agreement with previous work. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink at the bottom of the microelectronics enclosure.

Keywords: CPU location, data centre cooling, heat sink in enclosures, immersed microelectronics, turbulent natural convection in enclosures.

368 In vitro Studies of Mucoadhesiveness and Release of Nicotinamide Oral Gels Prepared from Bioadhesive Polymers

Authors: Sarunyoo Songkro, Naranut Rajatasereekul, Nipapat Cheewasrirungrueng

Abstract:

The aim of the present study was to evaluate the mucoadhesion and release of nicotinamide gel formulations using in vitro methods. An agar plate technique was used to investigate the adhesiveness of the gels, whereas a diffusion apparatus was employed to determine the release of nicotinamide from the gels. In this respect, 10% w/w nicotinamide gels containing the bioadhesive polymers Carbopol 934P (0.5-2% w/w), hydroxypropylmethyl cellulose (HPMC) (4-10% w/w), sodium carboxymethyl cellulose (SCMC) (4-6% w/w) and methylcellulose 4000 (MC) (3-5% w/w) were prepared. The gel formulations had pH values in the range of 7.14-8.17, which were considered appropriate for oral mucosa application. In general, the rank order of pH values was SCMC > MC 4000 > HPMC > Carbopol 934P. The types and concentrations of the polymers used somewhat affected the adhesiveness. It was found that the anionic polymers (Carbopol 934P and SCMC) adhered more firmly to the agar plate than the neutral polymers (HPMC and MC 4000). The formulation containing 0.5% Carbopol 934P (F1) showed the highest release rate. With the exception of formulation F1, the neutral polymers tended to give higher release rates than the anionic polymers. For oral tissue treatment, a balance has to be struck between the residence time (adhesiveness) of the formulations and the release rate of the drug. The formulations containing the anionic polymers Carbopol 934P or SCMC possessed suitable physical properties (appearance, pH and viscosity). In addition, for the anionic polymer formulations, justifiable mucoadhesive properties and reasonable release rates of nicotinamide were achieved. Accordingly, these gel formulations may be applied for the treatment of oral mucosal lesions.

Keywords: Nicotinamide, bioadhesive polymer, mucoadhesiveness, release rate, gel.

367 The Effect of Saccharomyces cerevisiae Live Yeast Culture on Microbial Nitrogen Supply to Small Intestine in Male Kivircik Yearlings Fed with Different Forage-Concentrate Ratios

Authors: N. Cetinkaya, N. H. Ozdemir

Abstract:

The aim of the study was to investigate the effect of a Saccharomyces cerevisiae (SC) live yeast culture on microbial protein supply to the small intestine in Kivircik male yearlings fed different ratios of forage and concentrate. Four Kivircik male yearlings with permanent rumen cannulas were used in the experiment. The treatments were allocated to a 4x4 Latin square design. Diet I consisted of 70% alfalfa hay and 30% concentrate, and Diet II consisted of 30% alfalfa hay and 70% concentrate; Diets I and II were also supplemented with SC. Daily urine was collected and stored at -20°C until analysis. Colorimetric methods were used for the determination of urinary allantoin and creatinine levels. The estimated microbial N supplies to the small intestine for Diets I, I+SC, II and II+SC were 2.51, 2.64, 2.95 and 3.43 g N/d, respectively. Supplementation of Diets I and II with SC significantly affected allantoin levels in μmol/W^0.75 (p<0.05). Mean creatinine values in μmol/W^0.75 and allantoin:creatinine ratios were not significantly different among diets. In conclusion, supplementation with SC live yeast culture had a significant effect on urinary allantoin excretion and microbial protein supply to the small intestine in Kivircik yearlings fed the high-concentrate Diet II (p<0.05). Hence, urinary allantoin excretion may be used as a tool for estimating microbial protein supply in Kivircik yearlings. However, further studies are necessary to understand the metabolism of Saccharomyces cerevisiae live yeast culture under different forage:concentrate ratios in Kivircik yearlings.

Keywords: Allantoin, creatinine, Kivircik yearling, microbial nitrogen, Saccharomyces cerevisiae.

366 The Use of the Limit Cycles of Dynamic Systems for Formation of Program Trajectories of Points Feet of the Anthropomorphous Robot

Authors: A. S. Gorobtsov, A. S. Polyanina, A. E. Andreev

Abstract:

The movement of the foot points of an anthropomorphous robot in space occurs along some stable trajectory of a known form. The large number of modifications to the control methods for biped robots indicates the fundamental complexity of the problem of the stability of the program trajectory and, consequently, of controlling deviations from this trajectory. Existing gait generators use piecewise interpolation of program trajectories, which leads to jumps in acceleration at the boundaries of segments. An alternative interpolation can be realized using differential equations with fractional derivatives. In this work, an approach to the synthesis of generators of program trajectories is considered. The resulting system of nonlinear differential equations describes a smooth trajectory of movement with rectilinear segments. The method is based on the theory of the asymptotic stability of invariant sets. The stability of such systems in the region where oscillatory processes are localized is investigated; the boundary of this region is a bounded closed surface. In the corresponding subspaces of the oscillatory circuits, the resulting stable limit cycles are curves with rectilinear segments. The problem is solved by synthesizing a set of continuous smooth feedback controls. The required geometry of the closed trajectories of movement is obtained by introducing high-order nonlinearities into the control of the stabilization systems. The proposed method was used to generate trajectories of movement of the foot points of an anthropomorphous robot, and the synthesis of the robot's program movement was carried out by means of the inverse method.
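
As a generic illustration of trajectory generation from a stable limit cycle, the sketch below integrates a Hopf-type oscillator: trajectories started anywhere converge to the same smooth closed curve. The paper's generators instead produce cycles with rectilinear segments via high-order nonlinearities; this simpler circular cycle only demonstrates the asymptotic-stability idea.

```python
# Hopf-type oscillator: a stable limit cycle of radius sqrt(mu) that
# attracts trajectories from any nonzero starting point.
import numpy as np

def hopf_step(x, y, mu=1.0, omega=2.0 * np.pi, dt=1e-3):
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y     # radial contraction toward r = sqrt(mu)
    dy = (mu - r2) * y + omega * x     # rotation at angular rate omega
    return x + dt * dx, y + dt * dy

x, y = 0.05, 0.0                        # start far inside the cycle
for _ in range(20000):                  # integrate 20 s with explicit Euler steps
    x, y = hopf_step(x, y)
print(f"converged radius: {np.hypot(x, y):.3f} (limit cycle at 1.000)")
```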

Keywords: Control, limit cycles, robot, stability.

365 Effects of High-Protein, Low-Energy Diet on Body Composition in Overweight and Obese Adults: A Clinical Trial

Authors: Makan Cheraghpour, Seyed Ahmad Hosseini, Damoon Ashtary-Larky, Saeed Shirali, Matin Ghanavati, Meysam Alipour

Abstract:

Background: In addition to reducing body weight, low-calorie diets can reduce lean body mass. It is hypothesized that a high-protein, low-energy diet can maintain lean body mass while still reducing body weight. Thus, the current study aimed at evaluating the effects of a high-protein diet with calorie restriction on body composition in overweight and obese individuals. Methods: 36 obese and overweight subjects were divided randomly into two groups. The first group received a normal-protein, low-energy diet (RDA), and the second group received a high-protein, low-energy diet (2×RDA). The anthropometric indices, including height, weight, body mass index, body fat mass, fat-free mass, and body fat percentage, were evaluated before and after the study. Results: A significant reduction was observed in anthropometric indices in both groups (high-protein, low-energy diet and normal-protein, low-energy diet). In addition, a greater reduction in fat-free mass was observed in the normal-protein, low-energy diet group compared to the high-protein, low-energy diet group. In the other anthropometric indices, significant differences were not observed between the two groups. Conclusion: Independently of the type of diet, a low-calorie diet can improve the anthropometric indices, but during weight loss, a high-protein diet can help maintain fat-free mass.

Keywords: Diet, high-protein, body mass index, body fat percentage.

364 The Impact of Geophagia on the Iron Status of Black South African Women

Authors: A. van Onselen, C. M. Walsh, F. J. Veldman, C. Brand

Abstract:

Objectives: To determine the nutritional status of, and risk factors associated with, women practicing geophagia in QwaQwa, South Africa. Materials and Methods: An observational epidemiological study design was adopted, which included an exposed (geophagia) and a non-exposed (control) group. A food frequency questionnaire, anthropometric measurements and blood sampling were applied to determine the nutritional status of participants. Logistic regression analysis was performed in order to identify factors likely to be associated with the practice of geophagia. Results: The mean total energy intakes for the geophagia group (G) and control group (C) were 10324.31 ± 2755.00 kJ and 10763.94 ± 2556.30 kJ, respectively. Both groups fell within the overweight category according to the mean Body Mass Index (BMI) of each group (G = 25.59 kg/m2; C = 25.14 kg/m2). The mean serum iron level of the geophagia group (6.929 μmol/l) was significantly lower than that of the control group (13.75 μmol/l) (p = 0.000). Serum transferrin (G = 3.23 g/l; C = 2.7054 g/l) and serum transferrin saturation (G = 8.05%; C = 18.74%) levels also differed significantly between groups (p = 0.00). Factors associated with the practice of geophagia included haemoglobin (odds ratio (OR): 14.50), serum iron (OR: 9.80), serum ferritin (OR: 3.75), serum transferrin (OR: 6.92) and transferrin saturation (OR: 14.50). A significant negative association (p = 0.014) was found between being a wage-earner and the practice of geophagia (OR: 0.143; CI: 0.027; 0.755). This finding seems to indicate that a permanent income may decrease the likelihood of practising geophagia. Key Findings: Geophagia was confirmed to be a risk factor for iron deficiency in this community. The significantly strong association between geophagia and iron deficiency emphasizes the importance of identifying the practice of geophagia in women, especially during their child-bearing years.

Keywords: Anaemia, anthropometry, dietary intake, geophagia, iron deficiency.

363 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method

Authors: Nareelux Suwannobol, JintanaTapin, Khuanchanok Narachan

Abstract:

The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and compare it with the actual birth weight. The sample group was 126 infants delivered in Dan Khunthot Hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and then analyzed using the statistics of frequency, percentage, mean, and standard deviation. Finally, the difference was analyzed by a paired t-test. The results showed an average actual birth weight of 3093.57 ± 391.03 g (mean ± SD) and an average estimated fetal weight by Johnson's method of 3455 ± 454.55 g; on average, the estimate was 384.09 g higher than the actual birth weight. When the infants were classified according to birth weight, the actual birth weight was less than the estimated fetal weight in the low birth weight (<2500 g) and appropriate birth weight (2500-3999 g) groups, but in the high birth weight (>4000 g) group the actual birth weight was more than the estimated fetal weight. The difference between actual and estimated weight was smallest in the high birth weight (>4000 g) group, followed by the appropriate birth weight (2500-3999 g) and low birth weight (<2500 g) groups, respectively. The rate of estimated fetal weight falling within 10% of the actual birth weight was 35.7%. When the estimated and actual birth weights were compared, the difference was statistically significant (p < .000). Johnson's method can provide an initial estimate of fetal weight before resorting to special examinations, which may be very costly. A variety of methods should be employed to estimate fetal weight more precisely, which will help in planning care for the mother's and infant's safety.
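
Johnson's formula is commonly given as estimated fetal weight (g) = (fundal height in cm − n) × 155, with n = 12 when the fetal head is above the ischial spines and n = 11 when it is at or below them; the small sketch below applies it to hypothetical measurements, not to data from this study.

```python
# Johnson's formula as commonly stated: EFW (g) = (fundal height - n) * 155,
# n = 12 if the vertex is above the ischial spines, n = 11 if at or below.
def johnson_efw(fundal_height_cm, station_below_spines=False):
    n = 11 if station_below_spines else 12
    return (fundal_height_cm - n) * 155          # estimated fetal weight, grams

print(johnson_efw(34))         # (34 - 12) * 155 = 3410 g (hypothetical input)
print(johnson_efw(34, True))   # (34 - 11) * 155 = 3565 g
```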

Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.

362 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based On Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods such as CCV (Color Coherence Vector) are based on strong segmentation. Indexation based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is thus reduced to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is used to rank images, that is, the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexation method in order to compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value among the N*N values to trust when comparing images, in other words, which sub-region pair among the N*N pairs to base the indexing on. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving another preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
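
The sketch below illustrates the local-histogram indexation just described: each image is split into N overlapping blocks, a normalized histogram is computed per block, and all N*N block pairs between two images are compared with the Euclidean distance, the minimum being the value usually trusted. Synthetic grayscale images keep the example self-contained; the block count, overlap and bin count are arbitrary choices.

```python
# Local color histograms with overlapping blocks and pairwise Euclidean distances.
import numpy as np

def local_histograms(img, n_blocks=3, overlap=8, bins=16):
    h, w = img.shape
    bh, bw = h // n_blocks, w // n_blocks
    hists = []
    for i in range(n_blocks):
        for j in range(n_blocks):
            # Extend each block by `overlap` pixels on every side (clamped).
            r0 = max(0, i * bh - overlap); r1 = min(h, (i + 1) * bh + overlap)
            c0 = max(0, j * bw - overlap); c1 = min(w, (j + 1) * bw + overlap)
            hist, _ = np.histogram(img[r0:r1, c0:c1], bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())      # normalize per block
    return np.array(hists)                       # shape (N, bins), N = n_blocks^2

rng = np.random.default_rng(1)
img_a = rng.integers(0, 256, (96, 96)).astype(float)
img_b = rng.integers(0, 256, (96, 96)).astype(float)
ha, hb = local_histograms(img_a), local_histograms(img_b)
d = np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=2)  # N*N distance matrix
print("min over all block pairs:", d.min())     # the value usually trusted
print("best sub-region pair:", np.unravel_index(d.argmin(), d.shape))
```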

Keywords: CBIR, Color Global Histogram, Color Local Histogram, Weak Segmentation, Euclidean Distance.

361 Assessment of Breeding Soundness by Comparative Radiography and Ultrasonography of Rabbit Testes

Authors: Adenike O. Olatunji-Akioye, Emmanual B Farayola

Abstract:

In order to improve Nigerians' recommended daily intake of animal protein, there is an upsurge in the breeding of hitherto shunned food animals, one of which is the rabbit. Radiography and ultrasonography are tools for diagnosing disease and evaluating the anatomical architecture of parts of the body non-invasively. As the rabbit becomes a more important food animal, improved breeding requires that the best of the species form the breeding stock, which usually depends on breeding soundness as evaluated by assessing the male reproductive organs with these tools. Four intact male rabbits weighing between 1.2 and 1.5 kg were acquired and acclimatized for 2 weeks. Dorsoventral views of the testes were acquired using a digital radiographic machine, and a 5 MHz portable ultrasound scanner was used to acquire images of the testes in longitudinal, sagittal and transverse planes. The radiographic images revealed soft tissue images of the testes in all rabbits. The testes lie in individual scrotal sacs on both sides of the midline at the level of the caudal vertebrae and are thus superimposed by the caudal vertebrae and the caudal limits of the pelvic girdle. The ultrasonographic images revealed mostly homogeneously hypoechogenic testes and a hyperechogenic mediastinum testis. The dorsal and ventral poles of the testes were heterogeneously hypoechogenic and correspond to the epididymis and spermatic cord. The rabbit is unique in its ability to retract the testes, particularly when stressed, so careful and stress-free handling during the procedures is of paramount importance. Imaging of rabbit testes can be safely done using both methods, but ultrasonography is the better method for assessing and evaluating soundness for breeding.

Keywords: Breeding soundness, rabbits, radiography, ultrasonography.

360 Bio-Surfactant Production and Its Application in Microbial EOR

Authors: A. Rajesh Kanna, G. Suresh Kumar, Sathyanaryana N. Gummadi

Abstract:

Various sources of energy are available worldwide, and among them crude oil plays a vital role. Oil recovery is achieved using conventional primary and secondary recovery methods. In order to recover the remaining residual oil, technologies like Enhanced Oil Recovery (EOR), also known as tertiary recovery, are utilized. Among EOR methods, Microbial Enhanced Oil Recovery (MEOR) is a technique that improves oil recovery by injection of a bio-surfactant produced by microorganisms. The bio-surfactant can retrieve otherwise unrecoverable oil that is held beneath the cap rock by high capillary forces. A bio-surfactant is a surface-active agent that can reduce the interfacial tension and the viscosity of the oil, so that the oil can be recovered to the surface as its mobility is increased. Research in this area has shown promising results, and the method is eco-friendly and cost-effective compared with other EOR techniques. In our research, we produced bio-surfactant at laboratory scale using the strain Pseudomonas putida (MTCC 2467) and injected it into a simple sand-packed column designed to resemble an actual petroleum reservoir. The experiment was conducted to determine the efficiency of the produced bio-surfactant in oil recovery. The plastic column was 10 cm long and 2.5 cm in diameter and was packed with fine sand. The sand was first saturated with brine and then with oil. Water flooding followed by bio-surfactant injection was performed to determine the amount of oil recovered. Further, the injected bio-surfactant volume was varied to check how effectively oil recovery could be achieved. A comparative study was also done by injecting Triton X-100, a chemical surfactant. Since the bio-surfactant reduced surface and interfacial tension, oil could be easily recovered from the porous sand-packed column.

Keywords: Bio-surfactant, Bacteria, Interfacial tension, Sand column.

359 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

A method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on complexation of Fe(II) with 1,10-phenanthroline (OP), a complexing reagent for Fe(II), immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. The adsorbents were then easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplifies the operating procedure and reduces the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 for Fe(II) was obtained with the modified nano-magnetite. The detection limit and linear range of this method for iron were 1.0 ng.mL−1 and 9.0-175 ng.mL−1, respectively. The relative standard deviation for five replicate determinations of 30.00 ng.mL−1 Fe2+ was 2.3%.

Keywords: Alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample.

358 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and on a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, the first phase finds a local minimum of the function using the gradient approach coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary and within the hypercube volume encompassing the result from the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum recommences in both procedures using this newly found point as the starting vector. The search continues and is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those provided by the popular MS Excel Solver included in the MS Office suite, and with other published results.
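
A sketch of the second-phase 3^n enumeration is given below: around the integer point returned by the gradient phase, every combination of −1/0/+1 offsets per variable is examined and the best feasible point is kept. The toy objective and constraint are invented for illustration.

```python
# Enumerate the 3^n integer points in the unit hypercube around a center point.
import itertools

def f(x):                                # toy objective to minimize
    return (x[0] - 2.3) ** 2 + (x[1] + 1.7) ** 2 + (x[2] - 0.4) ** 2

def feasible(x):                         # example linear constraint
    return x[0] + x[1] + x[2] >= -1

def enumerate_3n(center):
    best, best_val = None, float("inf")
    for offsets in itertools.product((-1, 0, 1), repeat=len(center)):  # 3^n points
        cand = tuple(c + o for c, o in zip(center, offsets))
        if feasible(cand) and f(cand) < best_val:
            best, best_val = cand, f(cand)
    return best, best_val

phase1 = (2, -2, 0)                      # integer point from the gradient phase
print(enumerate_3n(phase1))              # best of the 27 neighboring points
```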

Keywords: Constrained integer problems, enumerative search algorithm, heuristic algorithm, tunneling algorithm.

357 Determination of the Pullout/Holding Strength at the Taper-Trunnion Junction of Hip Implants

Authors: Obinna K. Ihesiulor, Krishna Shankar, Paul Smith, Alan Fien

Abstract:

Excessive fretting wear at the taper-trunnion junction (trunnionosis) apparently contributes to the high failure rates of hip implants. Implant wear and corrosion lead to the release of metal particulate debris and the subsequent release of metal ions at the taper-trunnion surface. This results in a type of metal poisoning referred to as metallosis, whose consequences include osteolysis (bone loss), osteoarthritis (pain), aseptic loosening of the prosthesis and revision surgery. At follow-up after revision surgery, metal debris particles are commonly found in numerous locations. Background: A stable connection between the femoral ball head (taper) and stem (trunnion) is necessary to prevent relative motion and corrosion at the taper junction; hence, the importance of component assembly cannot be over-emphasized. The aim of this study is therefore to determine the influence of head-stem junction assembly by press fitting, and of the subsequent disengagement/disassembly, on the connection strength between the taper ball head and the stem. Methods: CoCr femoral heads were assembled onto high-nitrogen stainless steel stems (trunnions) by push-in, i.e. press fitting, and disengaged by pull-out testing. The strength and stability of the connections were evaluated by measuring the head pull-out forces according to the ISO 7206-10 standard. Findings: The head-stem junction strength increases linearly with the assembly force.

Keywords: Wear, modular hip prosthesis, taper head-stem, force assembly, force disassembly.

356 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) score is used as a comparator for mortality prediction. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called ensembling further improves model performance. With the abundance of data available in healthcare through electronic health records (EHRs), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
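
The evaluation pattern described above can be sketched as follows: an ensemble classifier's AUC is compared against a single-score baseline standing in for the MELD score, on synthetic data. This does not reproduce the paper's cohort, features or FSM-derived patterns.

```python
# Compare a single-score baseline AUC with an ensemble classifier's AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2322, n_features=40, n_informative=10,
                           weights=[0.85], random_state=0)  # imbalanced outcome
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baseline_auc = roc_auc_score(y_te, X_te[:, 0])   # one feature as a "MELD-like" score
baseline_auc = max(baseline_auc, 1 - baseline_auc)  # orient the score direction

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
ensemble_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"baseline AUC: {baseline_auc:.3f}  ensemble AUC: {ensemble_auc:.3f}")
```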

Keywords: Machine learning, liver cirrhosis, subgraph mining, supervised learning.

355 Using the Minnesota Multiphasic Personality Inventory-2 and Mini Mental State Examination-2 in Cognitive Behavioral Therapy: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

From a psychological perspective, psychopathology is the area of clinical psychology that has at its core psychological assessment and psychotherapy. In day-to-day clinical practice, psychodiagnosis and psychotherapy are used independently, according to their intended purpose and their specific methods of application. The paper explores how the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and Mini Mental State Examination-2 (MMSE-2) psychological tools contribute to enhancing the effectiveness of cognitive behavioral psychotherapy (CBT). This combined approach, psychotherapy in conjunction with assessment of personality and cognitive functions, is illustrated by two cases, a severe depressive episode with psychotic symptoms and a mixed anxiety-depressive disorder. The order in which CBT, MMPI-2, and MMSE-2 were used in the diagnostic and therapeutic process was determined by the particularities of each case. In the first case, the sequence started with psychotherapy, followed by the administration of blue form MMSE-2, MMPI-2, and red form MMSE-2. In the second case, the cognitive screening with blue form MMSE-2 led to a personality assessment using MMPI-2, followed by red form MMSE-2; reapplication of the MMPI-2 due to the invalidation of the first profile, and finally, psychotherapy. The MMPI-2 protocols gathered useful information that directed the steps of therapeutic intervention: a detailed symptom picture of potentially self-destructive thoughts and behaviors otherwise undetected during the interview. The memory loss and poor concentration were confirmed by MMSE-2 cognitive screening. This combined approach, psychotherapy with psychological assessment, aligns with the trend of adaptation of the psychological services to the everyday life of contemporary man and paves the way for deepening and developing the field.

Keywords: Assessment, cognitive behavioral psychotherapy, MMPI-2, MMSE-2, psychopathology.

354 Application of Voltage Stability Indices for Proper Placement of STATCOM under Load Increase Scenario

Authors: A. S. Telang, P. P. Bedekar

Abstract:

In today's world, electrical energy has become an indispensable component of all aspects of modern human life. Reliability, security and stability are the key aspects of any power system, and failure to meet any of these three aspects results in a great impediment to modern life. Modern power systems are being subjected to heavily stressed conditions, leading to voltage stability problems. If voltage stability problems are not mitigated through proper voltage stability assessment methods, cascading events may occur, which may lead to voltage collapse or blackout events. Modern FACTS devices like the STATCOM are one of the measures for overcoming blackout problems. As these devices are very costly, they must be installed at suitable locations, mostly at weak buses. Line voltage stability indices such as FVSI, Lmn and LQP play an important role in identifying a weak bus. This paper presents an evaluation of these line stability indices for obtaining reliable information about the closeness of the power system to voltage collapse. PSAT is a user-friendly MATLAB toolbox, of which the continuation power flow (CPF) is an important feature that has been extensively used for the placement of the STATCOM to assess stability. The novelty of the present research work lies in changing the active and reactive loads simultaneously at all the load buses under consideration. MATLAB code has been developed for this purpose and tested successfully on various standard IEEE test systems. The results for the standard IEEE 14-bus test system, specifically, are presented in this paper.
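
For reference, the three line indices, as commonly defined in the literature, can be computed as in the sketch below; values approaching 1.0 indicate proximity to voltage collapse. The line data and operating point are illustrative per-unit numbers, not taken from the IEEE 14-bus results.

```python
# Line voltage stability indices (per-unit inputs, angles in radians).
import math

def fvsi(z, x, vs, qr):
    """Fast Voltage Stability Index: 4 Z^2 Qr / (Vs^2 X)."""
    return 4 * z**2 * qr / (vs**2 * x)

def lmn(x, vs, qr, theta, delta):
    """Line stability index: 4 X Qr / (Vs sin(theta - delta))^2."""
    return 4 * x * qr / (vs * math.sin(theta - delta)) ** 2

def lqp(x, vs, ps, qr):
    """Line stability factor: 4 (X / Vs^2) (X Ps^2 / Vs^2 + Qr)."""
    return 4 * (x / vs**2) * (x * ps**2 / vs**2 + qr)

# Illustrative line data: R + jX, sending-bus state, and line flows.
r, x = 0.02, 0.08
z, theta = math.hypot(r, x), math.atan2(x, r)   # impedance magnitude and angle
vs, delta = 1.0, math.radians(4.0)              # sending voltage, angle difference
ps, qr = 0.9, 0.35                              # sending P, receiving Q

print(f"FVSI = {fvsi(z, x, vs, qr):.3f}")
print(f"Lmn  = {lmn(x, vs, qr, theta, delta):.3f}")
print(f"LQP  = {lqp(x, vs, ps, qr):.3f}")
```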

Keywords: Voltage stability analysis, voltage collapse, PSAT, CPF, VSI, FVSI, Lmn, LQP.
