Search results for: predictors of delay
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1325


395 A Systematic Snapshot of Software Outsourcing Challenges

Authors: Issam Jebreen, Eman Al-Qbelat

Abstract:

Outsourcing software development projects can be challenging. A study of a sample of 46 papers on outsourcing challenges shows that organizations face several common challenges when outsourcing software development projects. A poor outsourcing relationship was identified as the most significant challenge, referenced by 35% of the papers. Lack of quality was the second most significant challenge, referenced by 33% of the papers. Language and cultural differences were the third most significant challenge, referenced by 24% of the papers. Non-competitive pricing was another challenge, referenced by 21% of the papers, as were poor coordination and communication, also referenced by 21%. Opportunistic behavior, lack of contract negotiation, inadequate user involvement, and constraints due to time zones were further challenges faced by organizations. Other challenges included poor project management, lack of technical capabilities, high vendor employee turnover, poor requirement specification, IPR issues, poor management of budget, schedule, and delay, geopolitical and country instability, differences in development methodologies, failure to manage end-user expectations, and poor monitoring and control. In conclusion, outsourcing software development projects can be challenging, but organizations can mitigate these challenges by selecting the right outsourcing partner, having a well-defined contract and clear communication, developing a clear understanding of the requirements, and implementing effective project management practices.

Keywords: software outsourcing, vendor, outsourcing challenges, quality model, continent, country, global outsourcing, IT workforce outsourcing.

Procedia PDF Downloads 83
394 A Computerized Tool for Predicting Future Reading Abilities in Pre-Readers Children

Authors: Stephanie Ducrot, Marie Vernet, Eve Meiss, Yves Chaix

Abstract:

Learning to read is a key topic of debate today, both in terms of its implications for school failure and illiteracy and regarding which teaching methods are best to develop. It is estimated that four to six percent of school-age children suffer from specific developmental disorders that impair learning. Findings from people with dyslexia and from typically developing readers suggest that the problems children experience in learning to read are related to the preliteracy skills they bring with them from kindergarten. Most tools available to professionals are designed for the evaluation of child language problems; in comparison, there are very few tools for assessing the relations between visual skills and the process of learning to read. Recent literature reports that visual-motor skills and visual-spatial attention in preschoolers are important predictors of reading development. The main goal of this study was therefore to improve screening for future reading difficulties in preschool children. We used a prospective, longitudinal approach in which oculomotor processes (assessed with the DiagLECT test) were measured in pre-readers, and the impact of these skills on future reading development was explored. The DiagLECT test measures the time taken to name numbers arranged irregularly in horizontal rows (horizontal time, HT) and the time taken to name numbers arranged in vertical columns (vertical time, VT). A total of 131 preschoolers took part in this study. At Time 0 (kindergarten), the mean VT, HT, and error rates were recorded. One year later, at Time 1, the reading level of the same children was evaluated. Firstly, this study allowed us to provide normative data for a standardized evaluation of oculomotor skills in 5- and 6-year-old children. The data also revealed that 25% of our sample of preschoolers showed oculomotor impairments (without any clinical complaints).
Finally, the results of this study assessed the validity of the DiagLECT test for predicting reading outcomes: the better a child's oculomotor skills, the better his/her reading abilities will be.

Keywords: vision, attention, oculomotor processes, reading, preschoolers

Procedia PDF Downloads 140
393 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with a NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER, and eye diagram for different numbers of simultaneous users. The analysis also considers various types of noise and multiple access interference (MAI) effects. The results obtained with the NAND detection technique were compared with those obtained with OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with lower BER than the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
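The BER-versus-users trade-off described above can be illustrated with the Gaussian approximation commonly used in SAC-OCDMA analysis, BER = 0.5·erfc(√(SNR/8)); the SNR values below are illustrative inputs, not the paper's measured system figures.

```python
import math

def ber_from_snr(snr_linear: float) -> float:
    """Gaussian-approximation BER for an OCDMA receiver: 0.5 * erfc(sqrt(SNR/8))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear / 8.0))

# More simultaneous users raise the MAI, which lowers SNR and raises BER.
for snr_db in (10, 15, 20):
    snr_linear = 10.0 ** (snr_db / 10.0)
    print(f"SNR = {snr_db} dB -> BER = {ber_from_snr(snr_linear):.3e}")
```

The steep fall of BER with SNR is why accommodating more users (i.e., tolerating more MAI) requires a detection technique that preserves SNR, as the NAND subtraction scheme does here.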

Keywords: Cross Correlation (CC), Three dimensional Optical Code Division Multiple Access (3-D OCDMA), Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA), Multiple Access Interference (MAI), Phase Induced Intensity Noise (PIIN), Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code

Procedia PDF Downloads 407
392 Supramolecular Chemistry and Packing of FAMEs in the Liquid Phase for Optimization of Combustion and Emission

Authors: Zeev Wiesman, Paula Berman, Nitzan Meiri, Charles Linder

Abstract:

Supramolecular chemistry refers to the domain of chemistry beyond that of molecules and focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. The self-arrangement of biodiesel components is closely related to, and affects, their physical properties in combustion systems and emission. Due to technological difficulties, knowledge regarding the molecular packing of FAMEs (biodiesel) in the liquid phase is limited. Spectral tools such as X-ray and NMR are known to provide evidence related to molecular structure organization. Recently, our research group reported that, using a 1H time-domain NMR methodology based on relaxation times and self-diffusion coefficients, FAME clusters with different mobilities can be accurately studied in the liquid phase. Head-to-head dimerization with a quasi-smectic cluster organization, based on molecular motion analysis, was clearly demonstrated. These findings about the assembly/packing of the FAME components are directly associated with the fluidity/viscosity of the biodiesel. Furthermore, these findings may provide information on the micro/nano-particles that are formed in the delivery and injection systems of various combustion systems (affected by thermodynamic conditions). Various parameters relevant to combustion, such as distillation/liquid-gas phase transition, cetane number/ignition delay, soot, and oxidation/NOx emission, may be predicted. These data may open the window for further optimization of FAME/diesel mixtures in terms of combustion and emission.

Keywords: supramolecular chemistry, FAMEs, liquid phase, fluidity, LF-NMR

Procedia PDF Downloads 334
391 Designing Electronic Kanban in Assembly Line Tailboom at XYZ Corp to Reduce Lead Time

Authors: Nadhifah A. Nugraha, Dida D. Damayanti, Widia Juliani

Abstract:

Airplane manufacturing is growing along with increasing demand from consumers. The helicopter tail, called the Tailboom, is a product of the helicopter division at XYZ Corp, where the Tailboom assembly line is a pull system. Based on observations of existing conditions at XYZ Corp, production is still unable to meet consumer demand; lead times are greater than the plan agreed upon with the consumers. In the assembly process, each work station experiences a lack of the parts and components needed for assembly. This happens because the required part information arrives late and there is no warning about the availability of needed parts, which makes some parts unavailable in the assembly warehouse. The lack of parts and components from the previous work station causes the assembly process to stop, and the assembly line also stops at the next station. As a result, production runs late and off schedule. Resolving these problems requires a controlling process: controlling the assembly line so that all components and subassemblies arrive in the right amount and at the right time. This study applies one of the Just-in-Time tools, Kanban, with automation added so that the communication line becomes an efficient and effective electronic Kanban. The problem can be solved by reducing non-value-added time, such as waiting time and idle time. The proposed control of the Tailboom assembly line results in a smooth assembly line without waiting, reduced lead time, and production times that meet the schedule agreed with the consumers.

Keywords: kanban, e-Kanban, lead time, pull system

Procedia PDF Downloads 105
390 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands’ characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. Predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
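The transferability test described above, training at some sites and validating at a held-out site, can be sketched as a leave-one-site-out evaluation. The data and model below are purely illustrative (plain least squares on synthetic features rather than a DNN or random forest on Sentinel data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the three-site design: 150 plots, 4 predictor features.
# (Synthetic data only; the study's predictors are Sentinel-1/2 bands and
# topoedaphic variables, and its models are a DNN and a random forest.)
sites = np.repeat([0, 1, 2], 50)
X = rng.normal(size=(150, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=150)

# Leave-one-site-out: train on two sites, validate on the held-out third.
# This tests spatial transferability rather than within-site interpolation.
r2_scores = []
for held_out in (0, 1, 2):
    train, test = sites != held_out, sites == held_out
    Xtr = np.column_stack([np.ones(train.sum()), X[train]])
    beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
    Xte = np.column_stack([np.ones(test.sum()), X[test]])
    pred = Xte @ beta
    ss_res = np.sum((y[test] - pred) ** 2)
    ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
    r2_scores.append(1.0 - ss_res / ss_tot)
    print(f"site {held_out} held out: r2 = {r2_scores[-1]:.2f}")
```

Random splits that mix plots from all sites into both training and validation would overstate transferability; holding out a whole site is what makes the r² values comparable to the cross-site figures reported above.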

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 211
389 Similar Correlation of Meat and Sugar to Global Obesity Prevalence

Authors: Wenpeng You, Maciej Henneberg

Abstract:

Background: Sugar consumption has been overwhelmingly advocated as a major dietary offender in obesity prevalence. Meat intake has been hypothesized as an obesity contributor in previous publications, but many dietary guidelines still suggest including a moderate amount of meat in the daily diet. Comparable sugar and meat exposure data were obtained to assess the difference in relationships between the two major food groups and obesity prevalence at the population level. Methods: Population-level estimates of obesity and overweight rates, per capita per day exposure to major food groups (meat, sugar, starch crops, fibers, fats, and fruits), total calories, per capita per year GDP, urbanization, and physical inactivity prevalence were extracted and matched for statistical analysis. Pearson and partial correlation coefficients were compared with Fisher's r-to-z transformation, and β ranges (β ± 2 SE) and their overlap in multiple linear regression (Enter and Stepwise) were used to examine potential differences in the relationships of obesity prevalence with sugar exposure and with meat exposure. Results: Pearson and partial correlation analyses (controlled for total calories, physical inactivity prevalence, GDP, and urbanization) revealed that sugar and meat exposure both correlated significantly with obesity and overweight prevalence. Fisher's r-to-z transformation showed no statistically significant difference between the correlations of obesity prevalence with sugar exposure and with meat exposure, for either Pearson (z=-0.53, p=0.5961) or partial (z=-0.04, p=0.9681) coefficients. Both Enter and Stepwise multiple linear regression models showed that sugar and meat exposure were the most significant predictors of obesity prevalence. Substantial overlap of the β ranges in the Enter (0.289-0.573) and Stepwise (0.294-0.582) models indicated that sugar and meat exposure correlated with obesity without significant difference.
Conclusion: Worldwide, sugar and meat exposure correlated with obesity prevalence to the same extent. Like sugar, minimal meat exposure should also be suggested in dietary guidelines.
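The Fisher r-to-z comparison used above takes only a few lines of code. The correlations and sample size below are hypothetical stand-ins chosen to show the mechanics; the abstract's actual result for its data was z = -0.53, p = 0.5961.

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """Two-sided Fisher r-to-z test for two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # r-to-z transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of z1 - z2
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))           # = 2 * (1 - Phi(|z|))
    return z, p

# Hypothetical correlations of obesity prevalence with sugar vs. meat
# exposure; the r and n values are illustrative, not the study's data.
z, p = fisher_z_compare(0.55, 170, 0.59, 170)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A large p here means the two correlation coefficients are statistically indistinguishable, which is the basis for the "same extent" conclusion.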

Keywords: meat, sugar, obesity, energy surplus, meat protein, fats, insulin resistance

Procedia PDF Downloads 301
388 Effect of Nicotine on the Reinforcing Effects of Cocaine in a Nonhuman Primate Model of Drug Use

Authors: Mia I. Allen, Bernard N. Johnson, Gagan Deep, Yixin Su, Sangeeta Singth, Ashish Kumar, Michael A. Nader

Abstract:

With no FDA-approved treatments for cocaine use disorder (CUD), research has focused on the behavioral and neuropharmacological effects of cocaine in animal models, with the goal of identifying novel interventions. While the majority of people with CUD also use tobacco/nicotine, the majority of preclinical cocaine research does not include the co-use of nicotine. The present study examined nicotine and cocaine co-use under several conditions of intravenous drug self-administration in monkeys. In Experiment 1, male rhesus monkeys (N=3) self-administered cocaine (0.001-0.1 mg/kg/injection) alone and cocaine+nicotine (0.01-0.03 mg/kg/injection) under a progressive-ratio schedule of reinforcement. When nicotine was added to cocaine, there was a significant leftward shift in the dose-response curve and a significant increase in peak break point. In Experiment 2, socially housed female and male cynomolgus monkeys (N=14) self-administered cocaine under a concurrent drug-vs-food choice schedule. Combining nicotine with cocaine significantly decreased cocaine choice ED50 values (i.e., shifted the cocaine dose-response curve to the left) in females but not in males. There was no evidence of social rank differences. In delay discounting studies, the co-use of nicotine and cocaine required significantly longer delays to the preferred drug reinforcer to reallocate choice compared with cocaine alone. Overall, these results suggest that the drug interaction in nicotine and cocaine co-use is not simply a function of potency but rather a fundamentally distinct condition that should be utilized to better understand the neuropharmacology of CUD and to evaluate potential treatments.

Keywords: polydrug use, animal models, nonhuman primates, behavioral pharmacology, drug self-administration

Procedia PDF Downloads 82
387 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics

Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane

Abstract:

Agriculture is fundamental and remains an important sector of the Algerian economy; based on traditional techniques and structures, it generally serves consumption purposes. Collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field investigations. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost relative to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of extracting information. This methodology allowed us to combine remote sensing data and field data to collect statistics on areas of different land types. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, has been studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images called non-negative matrix factorization (NMF), which does not treat the pixel as a single entity but instead looks for the components within the pixel itself. The results obtained by applying NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed close agreement between the most important NMF results and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
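The pixel-unmixing idea behind NMF (decomposing each pixel into non-negative abundances of a few endmember spectra) can be sketched with the classic Lee-Seung multiplicative updates; the toy scene below is synthetic and unrelated to the Sidi Bel Abbes data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hyperspectral scene: 200 pixels x 8 bands, each pixel a non-negative
# mixture of 3 "endmember" spectra (synthetic, for illustration only).
true_spectra = rng.uniform(0.1, 1.0, size=(3, 8))   # endmember signatures
true_abund = rng.dirichlet(np.ones(3), size=200)    # per-pixel abundances
V = true_abund @ true_spectra

# Lee-Seung multiplicative updates keep W (abundances) and H (spectra)
# non-negative while decreasing the Frobenius error ||V - WH||.
k = 3
W = rng.uniform(size=(200, k))
H = rng.uniform(size=(k, 8))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Because W and H stay non-negative, the recovered rows of H can be read as physical spectra and the rows of W as land-cover fractions, which is what makes the factorization useful for sub-pixel area statistics.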

Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing

Procedia PDF Downloads 416
386 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network

Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi

Abstract:

Energy, delay, and bandwidth are the prime issues of wireless sensor networks (WSN). Energy usage optimization and efficient bandwidth utilization are important issues in WSN. Event-triggered data aggregation facilitates such optimal tasks for the event-affected area in a WSN. Reliable delivery of critical information to the sink node is also a major challenge. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSN that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) Whenever an event is triggered, the event-triggered node selects the cluster head. (2) The cluster head gathers data from sensor nodes within the cluster. (3) The cluster head identifies and classifies the events from the collected data using a Bayesian classifier. (4) The data are aggregated using a statistical method. (5) The cluster head discovers paths to the sink node using residual energy, path distance, and bandwidth. (6) If the aggregated data is critical, the cluster head sends it over multiple paths for reliable data communication. (7) Otherwise, the aggregated data is transmitted toward the sink node over the single path that has the most bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time, and energy consumed for aggregation.
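Steps (5)-(7), ranking candidate paths by residual energy, distance, and bandwidth and then choosing multipath delivery for critical data, can be sketched as follows. The scoring function and all figures are hypothetical, since the abstract does not specify the exact cost function.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    residual_energy: float  # energy remaining along the path (J)
    distance: int           # hops to the sink node
    bandwidth: float        # available bandwidth (Mbps)

def score(p: Path) -> float:
    # Hypothetical cost function: favour energy-rich, high-bandwidth,
    # short paths. The abstract does not give the actual weighting.
    return p.residual_energy * p.bandwidth / p.distance

def route(paths, is_critical: bool):
    ranked = sorted(paths, key=score, reverse=True)
    if is_critical:
        return ranked[:2]  # step (6): redundant multipath delivery
    return ranked[:1]      # step (7): single best path

paths = [Path("A", 5.0, 3, 2.0), Path("B", 4.0, 2, 1.5), Path("C", 2.0, 4, 3.0)]
print([p.name for p in route(paths, is_critical=True)])
print([p.name for p in route(paths, is_critical=False)])
```

The criticality flag here stands in for the output of the Bayesian classification in steps (3)-(4); redundant multipath delivery trades extra energy for reliability only when the classifier deems the event critical.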

Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication

Procedia PDF Downloads 442
385 Analysis of the Unmanned Aerial Vehicles’ Incidents and Accidents: The Role of Human Factors

Authors: Jacob J. Shila, Xiaoyu O. Wu

Abstract:

As the applications of unmanned aerial vehicles (UAV) continue to increase across the world, it is critical to understand the factors that contribute to incidents and accidents associated with these systems. Given the variety of daily applications that could utilize the operations of the UAV (e.g., medical, security operations, construction activities, landscape activities), the main discussion has been how to safely incorporate the UAV into the national airspace system. The types of UAV incidents being reported range from near sightings by other pilots to actual collisions with aircraft or UAV. These incidents have the potential to impact the rest of aviation operations in a variety of ways, including human lives, liability costs, and delay costs. One of the largest causes of these incidents cited is the human factor; other causes cited include maintenance, aircraft, and others. This work investigates the key human factors associated with UAV incidents. To that end, the data related to UAV incidents that have occurred in the United States is both reviewed and analyzed to identify key human factors related to UAV incidents. The data utilized in this work is gathered from the Federal Aviation Administration (FAA) drone database. This study adopts the human factor analysis and classification system (HFACS) to identify key human factors that have contributed to some of the UAV failures to date. The uniqueness of this work is the incorporation of UAV incident data from a variety of applications and not just military data. In addition, identifying the specific human factors is crucial towards developing safety operational models and human factor guidelines for the UAV. The findings of these common human factors are also compared to similar studies in other countries to determine whether these factors are common internationally.

Keywords: human factors, incidents and accidents, safety, UAS, UAV

Procedia PDF Downloads 232
384 The Effect of Two Methods of Upper and Lower Resistance Exercise Training on C-Reactive Protein, Interleukin-6 and Intracellular Adhesion Molecule-1 in Healthy Untrained Women

Authors: Leyla Sattarzadeh, Maghsoud Peeri, Mohammadali Azarbaijani, Hasan Matin Homaee

Abstract:

Inflammation may cause atherosclerosis through various mechanisms. Systemic circulating inflammatory markers such as C-reactive protein (CRP), pro-inflammatory cytokines such as Interleukin-6 (IL-6), and adhesion molecules like Intracellular Adhesion Molecule-1 (ICAM-1) are predictors of cardiovascular disease. Given the conflicting results about the effect of resistance exercise training on these inflammatory markers, the present study examined the effect of eight weeks of different patterns of resistance exercise training on CRP, IL-6, and ICAM-1 levels in healthy untrained women. Forty volunteer healthy untrained female university students (age: 21 ± 3 yr; body mass index: 21.5 ± 3.5 kg/m²) were selected purposefully and divided into three groups. After dropouts during the protocol, the upper-body exercise group (n=11) and the lower-body group (n=12) completed the eight-week training period, while the control group (n=7) did nothing. Blood samples were collected before and after the experimental period, and CRP, IL-6, and ICAM-1 levels were evaluated using dedicated laboratory kits; the differences between pre and post values of each index were then analyzed using one-way analysis of variance (α < 0.05). The one-way ANOVA on the pre-post differences of CRP and ICAM-1 showed no significant changes due to the exercise training, but there were significant between-group differences for IL-6. A Tukey post-hoc test indicated a significant difference in the pre-post IL-6 change between the lower-body exercise group and the control group: eight weeks of lower-body exercise training led to significant changes in IL-6 values. There were no changes in anthropometric indices.
The findings show that the different patterns of upper- and lower-body exercise training, involving different amounts of muscle, altered IL-6 values in the lower-body group, probably because larger muscle groups were engaged, but produced no significant changes in CRP and ICAM-1, probably due to the intensity and duration of the exercise or the low baseline levels of these markers in healthy people.
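The analysis pipeline used here (a one-way ANOVA on pre-post differences, followed by a post-hoc test when the omnibus result is significant) can be sketched with SciPy. The IL-6 difference values below are invented for illustration, shaped only to mirror the reported pattern of a larger change in the lower-body group, with the study's group sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical pre-to-post IL-6 changes (pg/mL) for the three groups;
# illustrative numbers only, not the study's data.
upper = rng.normal(-0.2, 0.5, size=11)    # upper-body training, n=11
lower = rng.normal(-1.5, 0.5, size=12)    # lower-body training, n=12
control = rng.normal(0.0, 0.5, size=7)    # control, n=7

# Omnibus one-way ANOVA across the three groups at alpha = 0.05.
f_stat, p_value = stats.f_oneway(upper, lower, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant omnibus p would then justify Tukey post-hoc pairwise
# comparisons between the groups, as done in the study.
```

The ANOVA only says that at least one group differs; the Tukey step is what localizes the difference to the lower-body vs. control comparison.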

Keywords: C-reactive protein, interleukin-6, intracellular adhesion molecule-1, resistance training

Procedia PDF Downloads 245
383 Shallow Water Lidar System in Measuring Erosion Rate of Coarse-Grained Materials

Authors: Ghada S. Ellithy, John W. Murphy, Maureen K. Corcoran

Abstract:

The erosion rate of soils during a levee or dam overtopping event is a major component in risk assessment of breach time and downstream consequences. The mechanism and evolution of a dam or levee breach caused by overtopping erosion is a complicated process and difficult to measure during overflow due to accessibility and quickly changing conditions. In this paper, the results of flume erosion tests are presented and discussed. The tests are conducted on a coarse-grained material with a median grain size D50 of 5 mm in a 1-m (3-ft) wide flume under varying flow rates. Each test is performed by compacting the soil mix to near its optimum moisture and dry density, as determined from the standard Proctor test, in a box embedded in the flume floor. The box measures 0.45 m wide x 1.2 m long x 0.25 m deep. The material is tested several times at varying hydraulic loadings to determine the erosion rate after equal time intervals. Water depth and velocity are measured at each hydraulic loading, and the acting bed shear stress is calculated. A shallow water lidar (SWL) system was utilized to record the progress of soil erodibility and water depth along the scanned profiles of the test box. SWL is a non-contact system that transmits laser pulses from above the water and records the time delay between top and bottom reflections. Results from the SWL scans are compared with manual before-and-after measurements to determine the erosion rate of the soil mix and other erosion parameters.
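The depth-measurement principle, converting the delay between surface and bottom returns into water depth while accounting for the slower speed of light in water, reduces to a one-line formula; the refractive index used below is a standard assumption, not a value from the paper.

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # refractive index of water (assumed, not from the paper)

def depth_from_delay(dt_ns: float) -> float:
    """Water depth from the delay between surface and bottom returns.

    The pulse travels down and back, so the one-way depth is half the
    round-trip distance at the (slower) in-water speed of light.
    """
    v_water = C / N_WATER
    return v_water * (dt_ns * 1e-9) / 2.0

# A 2 ns surface-to-bottom delay corresponds to roughly 0.225 m of water.
print(f"{depth_from_delay(2.0):.3f} m")
```

Tracking this depth profile along the scanned box over successive scans is what lets the SWL recover the eroded surface, and hence the erosion rate, without contact.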

Keywords: coarse-grained materials, erosion rate, LIDAR system, soil erosion

Procedia PDF Downloads 106
382 InSAR Time-Series Phase Unwrapping for Urban Areas

Authors: Hui Luo, Zhenhong Li, Zhen Dong

Abstract:

The analysis of multi-temporal InSAR (MTInSAR) techniques such as persistent scatterer (PS) and small baseline subset (SBAS) usually relies on temporal/spatial phase unwrapping (PU). Unfortunately, PU often fails for two reasons: 1) spatial phase jumps between adjacent pixels larger than π, as in layover and highly discontinuous terrain; and 2) temporal phase discontinuities, such as time-varying atmospheric delay. To overcome these limitations, a least-squares-based PU method is introduced in this paper, which incorporates baseline-combination interferograms and an adjacent phase gradient network. Firstly, permanent scatterers (PS) are selected for study. Starting with the linear baseline-combination method, we obtain equivalent 'small baseline interferograms' to limit the spatial phase difference. Then, phase differencing is performed between connected PSs (connected by a specific networking rule) to suppress spatially correlated phase errors such as atmospheric artifacts. After that, the interval phase difference along arcs is computed by the least-squares method, followed by an outlier detector to remove arcs with phase ambiguities. The unwrapped phase is then obtained by spatial integration. The proposed method is tested on real TerraSAR-X data, and the results are compared with those obtained by StaMPS (a software package with 3D PU capabilities). The comparison shows that the proposed method can successfully unwrap interferograms in urban areas even when high discontinuities exist, while StaMPS fails. Finally, precise DEM errors can be obtained from the unwrapped interferograms.
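The core idea of gradient-based unwrapping, differencing the wrapped phase and then integrating the wrapped differences back up, can be shown in a 1-D sketch. Real MTInSAR PU, as in this paper, solves a least-squares system over 2-D/3-D networks of arcs, so this is only the simplest analogue.

```python
import numpy as np

def wrap(phi):
    """Wrap phase into [-pi, pi)."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

# True phase (e.g., a smooth deformation ramp) and its wrapped observation.
x = np.linspace(0.0, 1.0, 200)
true_phase = 18.0 * x ** 2
wrapped = wrap(true_phase)

# Wrap the first differences (each assumed smaller than pi in magnitude),
# then integrate them back up: the 1-D analogue of gradient-based PU.
grad = wrap(np.diff(wrapped))
unwrapped = np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(grad)])

max_err = float(np.max(np.abs(unwrapped - true_phase)))
print(f"max unwrapping error: {max_err:.2e} rad")
```

The "assumed smaller than π" caveat is exactly where the method breaks down in layover and discontinuous terrain, which is why the paper limits spatial phase differences with baseline-combination interferograms and removes ambiguous arcs before integrating.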

Keywords: phase unwrapping, time series, InSAR, urban areas

Procedia PDF Downloads 144
381 A Retrospective Study on the Age of Onset for Type 2 Diabetes Diagnosis

Authors: Mohamed A. Hammad, Dzul Azri Mohamed Noor, Syed Azhar Syed Sulaiman, Majed Ahmed Al-Mansoub, Muhammad Qamar

Abstract:

There is a progressive increase in the prevalence of early-onset Type 2 diabetes mellitus. Early detection of Type 2 diabetes can lengthen and/or improve quality of life by reducing the severity and frequency of its long-term complications, or by preventing or delaying them. This study aims to determine the age of onset at first diagnosis of Type 2 diabetes mellitus. A retrospective study was conducted in the endocrine clinic at Hospital Pulau Pinang in Penang, Malaysia, from January to December 2016. Records of 519 patients with Type 2 diabetes mellitus were screened to collect demographic data and determine the age at first diagnosis of diabetes mellitus. Patients were classified according to age of diagnosis, gender, and ethnicity. The study included 519 patients aged 55.6 ± 13.7 years, 265 female (51.1%) and 254 male (48.9%). The ethnicity distribution was Malay 191 (36.8%), Chinese 189 (36.4%), and Indian 139 (26.8%). The age of Type 2 diabetes diagnosis was 42 ± 14.8 years. The onset of diabetes mellitus among females was at age 41.5 ± 13.7 years, and among males at 42.6 ± 13.7 years. The distribution of diabetic onset by ethnicity was Malay at age 40.7 ± 13.7 years, Chinese at 43.2 ± 13.7 years, and Indian at 42.3 ± 13.7 years. Diabetic onset classified by age was as follows: the ≤20 years cohort comprised 33 (6.4%) cases, the >20-≤40 years group 190 (36.6%) patients, and the >40-≤60 years group 270 (52%) subjects. The >60 years group comprised 22 (4.2%) patients. The age range at diagnosis was between 10 and 73 years. Conclusion: Malays and females have an earlier onset of diabetes than Indians, Chinese, and males. More than half of the patients developed diabetes between 40 and 60 years of age. Diabetes mellitus is becoming more common at younger ages (<40 years), and the age at diagnosis of Type 2 diabetes mellitus has decreased over time.

Keywords: age of onset, diabetes diagnosis, diabetes mellitus, Malaysia, outpatients, type 2 diabetes, retrospective study

Procedia PDF Downloads 406
380 Comparative Study of Tensile Properties of Cast and Hot Forged Alumina Nanoparticle Reinforced Composites

Authors: S. Ghanaraja, Subrata Ray, S. K. Nath

Abstract:

Particle-reinforced metal matrix composites (MMC) succeed in synergizing a metallic matrix with ceramic particle reinforcements to achieve improved strength, particularly at elevated temperatures, but agglomeration and porosity adversely affect the ductility of the matrix. The present study investigates the tensile properties of a cast and hot forged composite reinforced simultaneously with coarse and fine particles. Nano-sized alumina particles were generated by milling a mixture of aluminum and manganese dioxide powders. The milled particles, after drying, are added to molten metal, and the resulting slurry is cast. The microstructure of the composites shows good distribution of both size categories of particles without significant clustering. The presence of nanoparticles along with coarser particles in a composite improves both strength and ductility considerably. The delay in debonding of the coarser particles to higher stress is due to reduced mismatch in extension, caused by increased strain hardening in the presence of the nanoparticles. However, addition of the powder mix beyond a limit results in deterioration of mechanical properties, possibly due to clustering of nanoparticles. The porosity in the cast composite generally increases with increasing addition of the powder mix, and is reduced by forging. The base alloy and nanocomposites show improvement in flow stress, which can be attributed to the lowering of porosity and to grain refinement as a consequence of forging.

Keywords: aluminium, alumina, nano-particle reinforced composites, porosity

Procedia PDF Downloads 242
379 Antihyperglycemic Potential of Chrysin and Diosmin alone or in Combination against Streptozotocin-Induced Hyperglycemia in Rats: Anti-Inflammatory and Antioxidant Mechanisms

Authors: Sally A. El Awdan, Gehad A. Abdel Jaleel, Dalia O Saleh, Manal Badawi

Abstract:

Background: Diabetes is a metabolic disease that affects a wide range of people worldwide and results in serious complications. Streptozotocin (STZ) causes selective cytotoxicity in pancreatic β-cells and has been extensively used to induce diabetes mellitus in rats. The present study investigated the effects of diosmin and chrysin, alone or in combination, on glucose level and on the liver in STZ-diabetic rats. Methods: In this study, rats were divided into six experimental groups: normal, untreated STZ-diabetic (60 mg/kg B.W., IP), STZ-diabetic treated with glycazide (10 mg/kg B.W., oral), STZ-diabetic treated with diosmin (100 mg/kg B.W., oral), STZ-diabetic treated with chrysin (80 mg/kg B.W., oral), and STZ-diabetic treated with diosmin (50 mg/kg B.W., oral) + chrysin (40 mg/kg B.W., oral). After 2 weeks, blood samples were withdrawn and glucose was measured. Animals were anaesthetized with an intraperitoneal injection of sodium pentobarbital (60 mg/kg) and sacrificed for liver dissection. Results: Throughout the experimental period, all treatments significantly (P<0.05) lowered serum glucose, AST, ALT, triglycerides, cholesterol, IL-6, TNF-α and IL-1β. Moreover, the treated diabetic rats showed higher levels of reduced glutathione (P<0.05) in the liver compared to the diabetic control rats, and the treatments inhibited the diabetes-induced elevation of malondialdehyde levels in the liver. The results of this study clearly demonstrated that diosmin and chrysin possess several treatment-oriented properties, including the control of hyperglycemia, antioxidant effects and anti-inflammatory effects. Conclusion: Considering these observations, it appears that diosmin and chrysin may be useful supplements to delay the development of diabetes and its complications.

Keywords: diabetes, streptozocin, chrysin, rat, diosmin, cytokines

Procedia PDF Downloads 258
378 Study of Causes and Effects of Road Projects Abandonment in Nigeria

Authors: Monsuru Oyenola Popoola, Oladapo Samson Abiola, Wusamotu Alao Adeniji

Abstract:

The prevalent and incessant abandonment of road construction projects is alarming, creating negative social, economic and environmental effects. The purpose of this paper is to investigate and determine the various causes and effects of abandoning road construction projects in Nigeria. A Likert-scale questionnaire was designed to collect the data analysed for the study. 135 questionnaires were completed and retrieved from the respondents out of 200 sent out, a response rate of 67.5%. The analysis utilized the Relative Importance Index (RII) method, and the results are presented in tabular form. The findings confirm that at least 20 factors cause road project abandonment in Nigeria, the foremost including leadership instability, improper project planning, inconsistency in government policies and design, contractor incompetence, economic instability and inflation, delay in remittance of money, improper financial analysis, poor risk management, climatic conditions, and improper project estimates. The findings also show that at least eight effects were identified, including waste of financial resources, loss of economic value, environmental degradation, reduction in standard of living, and litigation and arbitration. The implication is that allocating reasonable finance, developing appropriate and effective implementation plans, and having key actors monitor, evaluate and report on development project activities should help resolve the problem of road project abandonment.
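The Relative Importance Index used in the analysis is conventionally computed as RII = ΣW / (A × N), where W are the Likert weights assigned by respondents, A is the highest weight on the scale, and N is the number of respondents. A minimal sketch, assuming a 5-point scale and made-up ratings (not the paper's data):

```python
def relative_importance_index(responses, max_scale=5):
    """RII = sum(weights) / (A * N); values lie between 0 and 1,
    with higher values indicating a more important factor."""
    return sum(responses) / (max_scale * len(responses))

# Hypothetical ratings for one abandonment factor on a 1-5 Likert scale
ratings = [5, 4, 4, 3, 5]
rii = relative_importance_index(ratings)  # 21 / 25 = 0.84
```

Ranking the factors by their RII values produces the tabular ordering the study reports.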

Keywords: road construction, abandonment of road projects, climatic condition, project planning, contractor

Procedia PDF Downloads 293
377 Heuristics for Optimizing Power Consumption in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Our increasing reliance on electricity, with inefficient consumption trends, has resulted in several economic and environmental threats, including billions of dollars wasted, limited resources drained, and an elevated impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing peak power consumption under a fixed delay requirement is a significant problem in the smart grid, and matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that the proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. We also propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
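The abstract does not specify the proposed heuristics; one common baseline for minimizing peak demand when jobs share a release date and deadline is a longest-processing-time-style greedy that places each job in the currently least-loaded time slot. A sketch under those assumptions (job powers and slot count are illustrative, not from the paper):

```python
def min_peak_schedule(job_powers, num_slots):
    """Greedy heuristic: assign each unit-length power job (uniform
    release date 0, common deadline = num_slots) to the currently
    least-loaded time slot, largest jobs first (LPT rule)."""
    load = [0.0] * num_slots
    schedule = {}
    for i, p in sorted(enumerate(job_powers), key=lambda x: -x[1]):
        slot = min(range(num_slots), key=lambda s: load[s])
        load[slot] += p
        schedule[i] = slot
    return schedule, max(load)

# Four appliance jobs spread over two time slots
sched, peak = min_peak_schedule([4, 3, 3, 2], num_slots=2)
```

Because the problem is NP-hard, such a greedy gives an approximation rather than a guaranteed optimum, which is the gap the paper's heuristics aim to narrow.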

Keywords: heuristics, optimization, smart grid, peak demand, power supply

Procedia PDF Downloads 82
376 Anti-Corruption, an Important Challenge for the Construction Industry!

Authors: Ahmed Stifi, Sascha Gentes, Fritz Gehbauer

Abstract:

The construction industry is perhaps one of the oldest industries of the world. Ancient monuments like the Egyptian pyramids, Greek and Roman temples such as the Parthenon and the Pantheon, robust bridges, old Roman theatres, citadels and many more are the best testament to that. The industry also has a symbiotic relationship with others: heavy engineering provides construction machinery, the chemical industry develops innovative construction materials, the finance sector provides funding solutions for complex construction projects, and so on. The construction industry is not only mammoth but also very complex in nature. Because of this complexity, it is prone to various tribulations which can hamper its growth. Comparative study of this industry with others shows that it is associated with tardiness and delay, especially with regard to managerial aspects and the triple constraint (time, cost and scope). While some institutes cite the industry's complexity as a major reason, others, such as the lean construction community, point to the waste produced across the construction process. This paper introduces corruption as one of the prime factors behind such delays. Many international reports and studies depict the construction industry as one of the most corrupt sectors worldwide, and corruption can take place throughout the project cycle, comprising project selection, planning, design, funding, pre-qualification, tendering, execution, operation and maintenance, and even the reconstruction phase. It takes many forms, such as bribery, fraud, extortion, collusion, embezzlement and conflict of interest. As a solution to cope with corruption in the construction industry, the paper introduces integrity as a key factor and builds a new integrity framework to develop and implement an integrity management system for construction companies and construction projects.

Keywords: corruption, construction industry, integrity, lean construction

Procedia PDF Downloads 371
375 Risk of Type 2 Diabetes among Female College Students in Saudi Arabia

Authors: Noor A. Hakim

Abstract:

Several studies in developed countries have investigated the prevalence of diabetes and obesity among individuals from different socioeconomic levels and suggested lower rates among higher socioeconomic groups. However, studies evaluating diabetes risk and the prevalence of obesity among middle- to high-income populations in developing countries are limited. The aim of this study is to evaluate the risk of developing type 2 diabetes mellitus (T2DM) and the weight status of female students in private universities in Jeddah City, Saudi Arabia. A cross-sectional study of 121 female students aged ≤ 25 years was conducted; participants were recruited from two private universities. Diabetes risk was evaluated using the Finnish Diabetes Risk Score. Anthropometric measurements were taken, and body mass index (BMI) was calculated. Diabetes risk scores indicated that 35.5% of the female students had a slightly elevated risk, and 10.8% a moderate to high risk, of developing T2DM. Nearly one-third of the participants (29.7%) were overweight or obese. The majority of the normal-weight and underweight groups were classified as being at low risk of diabetes, 22.2% of the overweight participants were classified as moderate to high risk, and over half of the obese participants (55.5%) fell into the moderate- to high-risk category. Conclusions: Given that diabetes risk is alarming among the population of Saudi Arabia, healthcare providers should utilize a simple screening tool to identify high-risk individuals and initiate diabetes preventive strategies to prevent, or delay, the onset of T2DM and improve quality of life.
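The Finnish Diabetes Risk Score maps a questionnaire total to risk bands; the banding below is one commonly published version, and the BMI helper is the standard formula, but neither is taken from this paper's materials:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def findrisc_category(score):
    """Map a FINDRISC total score to a risk band (one common banding,
    assumed here for illustration)."""
    if score < 7:
        return "low"
    elif score <= 11:
        return "slightly elevated"
    elif score <= 14:
        return "moderate"
    elif score <= 20:
        return "high"
    return "very high"
```

With such a mapping, the study's "slightly elevated" and "moderate to high" percentages follow directly from the distribution of total scores.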

Keywords: risk of type 2 diabetes, weight status, college students, socioeconomic status

Procedia PDF Downloads 172
374 The Psychometric Properties of an Instrument to Estimate Performance in Ball Tasks Objectively

Authors: Kougioumtzis Konstantin, Rylander Pär, Karlsteen Magnus

Abstract:

Ball skills, as a subset of fundamental motor skills, are predictors of performance in sports. Currently, most tools evaluate ball skills using subjective ratings. The aim of this study was to examine the psychometric properties of a newly developed instrument to measure ball-handling skills objectively (the BHS-test) using digital instruments. Participants were a convenience sample of 213 adolescents (age M = 17.1 years, SD = 3.6; 55% female, 45% male) recruited from upper secondary schools and invited to a sports hall for the assessment. The 8-item instrument incorporated both accuracy-based ball-skill tests and repetitive-performance tests with a ball. Testers counted performance manually in four tests (one throwing and three juggling tasks), while assessment in the other four (one balancing and three rolling tasks) was technologically enhanced using a ball machine, a Kinect camera and balls with motion sensors. 3D printing technology was used to construct equipment, and all results were administered digitally with smartphones/tablets, computers and a specially constructed application that sent data to a server. The instrument was deemed reliable (α = .77), and principal component analysis was applied to a random subset of 53 participants. Latent variable modeling was then employed to confirm the structure with the remaining 160 participants. The analysis showed good factorial validity, with one factor explaining 57.90% of the total variance. Four loadings were larger than .80, two more exceeded .76, and the other two were .65 and .49. The one-factor solution was confirmed by a first-order model with one general factor and an excellent fit between model and data (χ² = 16.12, DF = 20; RMSEA = .00, CI90 .00–.05; CFI = 1.00; SRMR = .02). The loadings on the general factor ranged between .65 and .83. Our findings indicate good reliability and construct validity for the BHS-test. To develop the instrument further, more studies are needed with various age groups, e.g., children. We suggest using the BHS-test for diagnostic or assessment purposes in talent development and sports participation interventions that focus on ball games.
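The reliability figure (α = .77) is a Cronbach's alpha, which can be computed from item-level scores as follows (a generic sketch using the textbook formula, not the study's code):

```python
import statistics

def cronbach_alpha(items):
    """items: list of k score lists, one per test item, same length N.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(statistics.pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))
```

Two perfectly correlated items give α = 1; in practice α above roughly .7, as here, is usually read as acceptable internal consistency.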

Keywords: ball-handling skills, ball-handling ability, technologically-enhanced measurements, assessment

Procedia PDF Downloads 85
373 Critical Success Factors Quality Requirement Change Management

Authors: Jamshed Ahmad, Abdul Wahid Khan, Javed Ali Khan

Abstract:

Managing software quality requirement change is a difficult task in software engineering. Rejecting incoming changes results in user dissatisfaction, while accommodating too many requirement changes may delay product delivery. Poor requirements management is widely considered a primary cause of software failure, and the task becomes even more challenging in global software outsourcing. Addressing success factors in quality requirement change management is needed today due to frequent change requests from end-users. In this research study, success factors are recognized and scrutinized with the help of a systematic literature review (SLR). In total, 16 success factors were identified that significantly impact software quality requirement change management. The findings show that proper requirement change management, rapid delivery, quality software product, access to market, project management, skills and methodologies, low cost/effort estimation, clear plan and road map, agile processes, low labor cost, user satisfaction, communication/close coordination, proper scheduling and time constraints, frequent technological changes, robust model, and geographical distribution/cultural differences are the key factors that influence software quality requirement change. The recognized success factors were validated with the help of various research methods, i.e., case studies, interviews, surveys and experiments. These factors were then scrutinized by continent, database, company size and time period. Based on these findings, requirement change can be implemented in a better way.

Keywords: global software development, requirement engineering, systematic literature review, success factors

Procedia PDF Downloads 192
372 Improving Fault Tolerance and Load Balancing in Heterogeneous Grid Computing Using Fractal Transform

Authors: Saad M. Darwish, Adel A. El-Zoghabi, Moustafa F. Ashry

Abstract:

The popularity of the Internet and the availability of powerful computers and high-speed networks as low-cost commodity components are changing the way we use computers today. These technical opportunities have made it possible to use geographically distributed, multi-owner resources to solve large-scale problems in science, engineering, and commerce. Recent research on these topics has led to the emergence of a new paradigm known as Grid computing. To realize the promising potential of these tremendous distributed resources, effective and efficient load balancing algorithms are fundamentally important. Unfortunately, load balancing algorithms in traditional parallel and distributed systems, which usually run on homogeneous and dedicated resources, do not work well in the new circumstances. In this paper, the concept of a fast fractal transform in heterogeneous grid computing, based on an R-tree and domain-range entropy, is proposed to improve fault tolerance and load balancing by improving connectivity, communication delay, network bandwidth, resource availability, and resource unpredictability. A novel two-dimensional figure of merit is suggested to describe the network effects on load-balance and fault-tolerance estimation. Fault tolerance is enhanced by adaptively decreasing replication time and message cost, while load balance is enhanced by adaptively decreasing mean job response time. Experimental results show that the proposed method yields superior performance over other methods.

Keywords: Grid computing, load balancing, fault tolerance, R-tree, heterogeneous systems

Procedia PDF Downloads 479
371 Performance Comparison of Microcontroller-Based Optimum Controller for Fruit Drying System

Authors: Umar Salisu

Abstract:

This research presents the development of a hot-air tomato drying system. To provide more efficient and continuous temperature control, a microcontroller-based optimal controller was developed. The system is based on a power control principle, achieving smooth power variations driven by a feedback temperature signal from the process. An LM35 temperature sensor and an LM399 differential comparator were used to measure the temperature. The mathematical model of the system was developed, and the optimal controller was designed, simulated, and compared with the transient response of a PID controller. A controlled environment suitable for fruit drying is created within a closed chamber in a three-step process. First, infrared light is used internally to preheat the fruit and speedily remove its water content for fast drying. Second, hot air of a specified temperature is blown into the chamber to keep the humidity below a specified level and exhaust the humid air from the chamber. Third, the microcontroller disconnects power to the chamber once the moisture content of the fruit has been reduced to a minimum. Experiments were conducted with 1 kg of fresh tomatoes at three different temperatures (40, 50 and 60 °C) at a constant relative humidity of 30% RH. The results indicate that the system significantly reduces drying time without affecting fruit quality. In terms of temperature control, the optimal controller's response has zero overshoot, whereas the PID controller's response overshoots to about 30% of the set-point. Another performance metric used is rise time: the optimal controller rose without any delay, while the PID controller was delayed by more than 50 s. It can be argued that the optimal controller's performance is preferable to that of the PID controller, since it does not overshoot and it starts promptly.
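For contrast with the optimal controller, a discrete PID temperature loop of the kind compared here can be sketched as follows; the gains and the first-order chamber model are illustrative assumptions, not the paper's identified plant:

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One discrete PID update; state carries (integral, previous error)."""
    integral, prev = state
    integral += error * dt
    derivative = (error - prev) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Illustrative plant: first-order chamber losing heat to a 25 degC ambient
temp, state = 25.0, (0.0, 0.0)
for _ in range(400):
    power, state = pid_step(50.0 - temp, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1)
    temp += (power - 0.5 * (temp - 25.0)) * 0.1
```

With these toy gains the loop settles near the 50 °C set-point; poorly tuned gains produce the overshoot and rise-time delay that motivate the optimal controller in the study.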

Keywords: drying, microcontroller, optimum controller, PID controller

Procedia PDF Downloads 290
370 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication

Authors: Vedant Janapaty

Abstract:

Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Also known as the "kidneys of our planet", they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and loss of native species are ailing our wetlands, and there is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project uses satellite data and correlates its metrics with in situ observations collected at five estuaries. Satellite images were processed in Python to calculate 7 spectral indices (SIs), and average SI values were calculated per month for 23 years. Publicly available data from 6 sites at ELK was used to obtain 10 in situ parameters (OPs), whose average values were likewise calculated per month for 23 years. Linear correlations between the 7 SIs and 10 OPs were found to be inadequate (correlations of 1 to 64%). Fourier transform analysis was then performed on the 7 SIs: dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the 10 OPs. Better correlations were observed between SIs and OPs at certain time delays (0, 3, 4, and 6 months), and ML was performed again. The OPs saw improved R² values in the range of 0.2 to 0.93. This approach can provide periodic analyses of overall wetland health from satellite indices. It shows that remote sensing can be correlated with the critical in situ parameters that measure eutrophication and can be used by practitioners to monitor wetland health easily.
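Extracting dominant frequencies and amplitudes from a monthly index series, as done here before training the ML model, can be sketched with a naive DFT; the synthetic annual-cycle series below is illustrative, not the project's data:

```python
import cmath, math

def dominant_frequency(signal):
    """Naive DFT: return (cycles per series length, amplitude) of the
    strongest non-DC component of a real time series."""
    n = len(signal)
    best_k, best_amp = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        amp = 2 * abs(coeff) / n          # real-signal amplitude at bin k
        if amp > best_amp:
            best_k, best_amp = k, amp
    return best_k, best_amp

# A satellite index with a clean 12-month (annual) cycle over 4 years
series = [math.sin(2 * math.pi * t / 12) for t in range(48)]
k, amp = dominant_frequency(series)       # 4 cycles across 48 samples
```

The extracted (frequency, amplitude) pairs per index become compact features for the downstream model, which is what lifted the R² values reported above.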

Keywords: estuary, remote sensing, machine learning, Fourier transform

Procedia PDF Downloads 94
369 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of greatest interest are commonly time-to-event, so-called survival, data. The importance of robust models in this context lies in comparing the effects of randomly controlled experimental groups in a way that supports causal interpretation. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received considerable attention for estimating causal effects when modeling left-truncated and right-censored survival data. Despite its wide application and popularity, maximum likelihood estimation is complex and burdensome for the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, we propose modified estimating equations. After outlining the estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant data. In summary, covariate bias is adjusted by estimating the density function of the truncation variable, which is also incorporated into the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments, after adjusting for the bias introduced into the model by the truncation variable.

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 119
368 Predicting Wearable Technology Readiness in a South African Government Department: Exploring the Influence of Wearable Technology Acceptance and Positive Attitude

Authors: Henda J Thomas, Cornelia PJ Harmse, Cecile Schultz

Abstract:

Wearables are one of the technologies that will flourish within the fourth industrial revolution and digital transformation arenas, allowing employers to integrate collected data into organisational information systems. The study aimed to investigate whether wearable technology readiness can predict employees’ acceptance of wearables in the workplace. The factors of technology readiness predisposition that predict acceptance of, and positive attitudes towards, wearable use in the workplace were examined. A quantitative research approach was used. The population consisted of 8 081 employees of the South African Department of Employment and Labour (DEL). Census sampling was used: data collection questionnaires were sent electronically to all 8 081 employees, and 351 were returned. The measuring instrument used in this study was the Technology Readiness and Acceptance Model (TRAM). Four hypotheses were formulated to investigate the relationship between readiness and acceptance of wearables in the workplace. The results showed consistent prediction of technology acceptance (TA) by the eagerness, optimism, and discomfort scales of technology readiness (TR). The TR scales of optimism and eagerness were consistent positive predictors of the TA scales, while discomfort was a negative predictor for two of the three TA scales. Insecurity was not found to be a predictor of TA. It was recommended that the digital transformation policy of the DEL be revised, and that wearables in the workplace be embraced from the viewpoint of convenience, automation, and seamless integration with the DEL information systems. The empirical contribution of this study is that positive attitude emerged as a factor that extends the TRAM: positive attitude is identified as a new dimension of the TRAM not found in the original TA model or in subsequent TRAM studies. Furthermore, this study found that Perceived Usefulness (PU) and Behavioural Intention to Use (BIU) could not be separated and formed a single factor. The methodological contribution of this study can lead to the development of a Wearable Readiness and Acceptance Model (WRAM). To the best of our knowledge, no author has yet introduced the WRAM into the body of knowledge.

Keywords: technology acceptance model, technology readiness index, technology readiness and acceptance model, wearable devices, wearable technology, fourth industrial revolution

Procedia PDF Downloads 79
367 Neuroimaging Markers for Screening Former NFL Players at Risk for Developing Alzheimer's Disease / Dementia Later in Life

Authors: Vijaykumar M. Baragi, Ramtilak Gattu, Gabriela Trifan, John L. Woodard, K. Meyers, Tim S. Halstead, Eric Hipple, Ewart Mark Haacke, Randall R. Benson

Abstract:

NFL players, by virtue of their exposure to repetitive head injury, are at least twice as likely as the general population to develop Alzheimer's disease (AD) and dementia. Early recognition and intervention prior to the onset of clinical symptoms could potentially avert or delay the long-term consequences of these diseases. Since AD is thought to have a long preclinical incubation period, the aim of the current research was to determine whether former NFL players referred to a depression center showed evidence of incipient dementia in their structural imaging prior to a diagnosis of dementia. To identify neuroimaging markers of AD against which the former NFL players could be compared, we conducted a comprehensive volumetric analysis using a cohort of early-stage AD patients (ADNI) to produce a set of brain regions demonstrating sensitivity to early AD pathology (the "AD fingerprint"). The brain MRIs of a cohort of 46 former NFL players were then interrogated using the AD fingerprint. Brain scans were acquired using a T1-weighted MPRAGE sequence, and the FreeSurfer image analysis suite (version 6.0) was used to obtain the volumetric and cortical-thickness data. A total of 55 brain regions demonstrated significant bilateral atrophy or ex vacuo dilatation in AD patients vs. healthy controls. Of the 46 former NFL players, 19 (41%) demonstrated a greater than expected number of atrophied/dilated AD regions when compared with age-matched controls, presumably reflecting AD pathology.

Keywords: Alzheimer's disease, neuroimaging biomarkers, traumatic brain injury, FreeSurfer, ADNI

Procedia PDF Downloads 149
366 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

Recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry-save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication implemented up to n bits. On this basis, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm, and the mathematical formulation of the FFA filter is derived. The proposed architecture has a significantly smaller area-delay product (ADP) than the existing block implementation, and by adopting the retiming technique the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and implemented using the Cadence EDA tool. The synthesis results show better performance for different word lengths and block sizes. The design achieves switching-activity reduction and low power consumption, evaluated with and without retiming for different circuit configurations, and the proposed structure achieves more than half of the power reduction relative to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power without and with the retiming technique, respectively, and the CSA method achieves 57% and 77% less power without and with retiming, compared to the previously proposed design.
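Karatsuba multiplication, the core of the proposed multiplier, replaces the four half-size products of schoolbook multiplication with three. A minimal integer-level sketch of the algorithm (not the paper's hardware design):

```python
def karatsuba(x, y):
    """Recursive Karatsuba multiplication: three half-size products
    instead of four, giving O(n^1.585) for n-bit operands."""
    if x < 16 or y < 16:                       # base case: small operands
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    xh, xl = x >> half, x & mask               # split each operand in two
    yh, yl = y >> half, y & mask
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b    # cross terms via one product
    return (a << (2 * half)) + (c << half) + b
```

In hardware, the same recursion tree is unrolled into the multiplier structure, which is where the reported area-delay and power savings arise.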

Keywords: carry save adder Karatsuba multiplication, mid range Karatsuba multiplication, modified FFA and transposed filter, retiming

Procedia PDF Downloads 227