Search results for: Complexity index
411 Stratigraphy and Identifying Boundaries of Mozduran Formation with Magnetite Method in East Kopet-Dagh Basin
Authors: Z. Kadivar, M. Vahidinia, A. Mousavinia
Abstract:
The Kopet-Dagh Mountain Range is located in the north and northeast of Iran. The Mozduran Formation in the eastern Kopet-Dagh is mainly composed of limestone and dolomite with interbedded shale and sandstone, and it is the reservoir rock of the Khangiran gas field. The study location was the east Kopet-Dagh basin (northeast Iran), where the measured thickness of the formation is 418 meters. In the present study, a total of 57 samples were gathered, and 100 thin sections were made from 52 of them. According to the thin-section study, 18 genera and nine species of foraminifera and algae were identified. Based on the index fossils, the age of the Mozduran Formation in the east of the Kopet-Dagh basin was identified as Upper Jurassic (Kimmeridgian-Tithonian). According to the magnetometric data (total intensity and RTP map), there is a disconformity (low intensity) between the Kashaf-Rood Formation and the Mozduran Formation. At the top, between the Mozduran Formation and the Shurijeh Formation, there is a widespread disconformity (high intensity).
Keywords: Upper Jurassic, magnetometer, Mozduran formation, stratigraphy.
410 Relationship of Sleep Duration with Obesity and Dietary Intake
Authors: Seyed Ahmad Hosseini, Makan Cheraghpour, Saeed Shirali, Roya Rafie, Matin Ghanavati, Arezoo Amjadi, Meysam Alipour
Abstract:
Background: There is a mutual relationship between sleep duration and obesity. We studied the relationship of sleep duration with obesity and dietary intake. Methods: This cross-sectional study was conducted on 444 male students at Ahvaz Jundishapur University of Medical Sciences. Dietary intake was analyzed by a food frequency questionnaire (FFQ), and anthropometric indices were measured. Participants were asked about their sleep duration and were categorized into three groups according to their responses (less than six hours, between six and eight hours, and more than eight hours). Results: Macronutrient, micronutrient, and antioxidant intakes did not differ significantly among the three groups. Moreover, we did not observe any significant difference in anthropometric indices (weight, body mass index, waist circumference, and percentage body fat). Conclusions: Our results show no significant relationship between sleep duration, nutrition pattern, and obesity. Further study is recommended.
Keywords: Sleep duration, obesity, dietary intake, cross-sectional.
409 Dynamic Threshold Adjustment Approach For Neural Networks
Authors: Hamza A. Ali, Waleed A. J. Rasheed
Abstract:
The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase; no adaptation is accommodated for input variations that influence the network parameters. In this work, we attempt to design a neural network that includes an additional mechanism adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns using predefined settings, while the second generates output dynamically, with tuning capability for any newly applied input, in the form of an adjustment to the threshold values. Two levels of supportive net were studied: the first implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net with a traditional architecture that performs dynamic adjustment of the threshold values of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs were quite satisfactory. The supportive-layer approach achieved a recognition rate of over 90%, while the multiple-network technique showed a more effective and acceptable level of recognition; however, this is achieved at the price of network complexity and computation time. Recognition generalization may also be improved by combining all the innate structures with intelligent capabilities, at the cost of further, more advanced learning phases.
Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.
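A minimal numpy sketch of the two-subnet idea described above: a main net with frozen weights and fixed thresholds, and a supportive net whose output shifts those thresholds for each input. The layer sizes, the tanh-shaped shift, and the random weights are illustrative assumptions, not the authors' trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 16 inputs, 4 output classes (assumed, not from the paper).
n_in, n_out = 16, 4
W_main = rng.normal(size=(n_out, n_in))       # main net weights, frozen after training
theta = np.zeros(n_out)                       # fixed thresholds of the main net
W_sup = 0.1 * rng.normal(size=(n_out, n_in))  # supportive net, tuned separately

def classify(x):
    z = W_main @ x                  # main-net activation with predefined settings
    delta = np.tanh(W_sup @ x)      # supportive net: input-dependent threshold shift
    return (z > theta + delta).astype(int)   # outputs fire against the adjusted thresholds

print(classify(rng.normal(size=n_in)))
```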
408 Mining Association Rules from Unstructured Documents
Authors: Hany Mahgoub
Abstract:
This paper presents a system for discovering association rules from collections of unstructured documents, called EART (Extract Association Rules from Text). The EART system treats text only, not images or figures. EART discovers association rules among the keywords labeling the collection of textual documents. The main characteristic of EART is that it integrates XML technology (to transform unstructured documents into structured documents) with an information retrieval scheme (TF-IDF) and a data mining technique for association rule extraction. EART relies on word features to extract association rules. It consists of four phases: a structure phase, an index phase, a text mining phase and a visualization phase. Our work depends on the analysis of the keywords in the extracted association rules, through the co-occurrence of the keywords in one sentence of the original text and the presence of the keywords in separate sentences without co-occurrence. Experiments were applied to a collection of scientific documents selected from MEDLINE related to the outbreak of the H5N1 avian influenza virus.
Keywords: Association rules, information retrieval, knowledge discovery in text, text mining.
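A small Python sketch of the rule-extraction step: keywords are taken per sentence, and pairwise rules are kept when their sentence-level support and confidence pass thresholds. The toy corpus, the thresholds, and the restriction to pairwise rules are assumptions for illustration; EART's TF-IDF weighting and XML structuring phases are not reproduced here.

```python
from itertools import combinations

# Toy corpus standing in for MEDLINE abstracts (illustrative only): keywords per sentence.
sentences = [
    {"h5n1", "avian", "influenza", "outbreak"},
    {"h5n1", "virus", "poultry"},
    {"avian", "influenza", "virus"},
]

min_support, min_conf = 0.3, 0.6
n = len(sentences)

def support(itemset):
    """Fraction of sentences containing every keyword in the itemset."""
    return sum(itemset <= s for s in sentences) / n

# Pairwise rules A -> B based on keyword co-occurrence within a sentence.
keywords = set().union(*sentences)
for a, b in combinations(sorted(keywords), 2):
    s_ab = support({a, b})
    if s_ab >= min_support:
        for x, y in ((a, b), (b, a)):
            conf = s_ab / support({x})
            if conf >= min_conf:
                print(f"{x} -> {y}  support={s_ab:.2f} confidence={conf:.2f}")
```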
407 Heteromolecular Structure Formation in Aqueous Solutions of Ethanol, Tetrahydrofuran and Dimethylformamide
Authors: Sh. Gofurov, O. Ismailova, U. Makhmanov, A. Kokhkharov
Abstract:
The refractometric method has been used to determine the optical properties and concentration features of aqueous solutions of ethanol, tetrahydrofuran and dimethylformamide at room temperature. Changes in the dielectric permittivity of aqueous solutions of ethanol, tetrahydrofuran and dimethylformamide over a wide range of concentrations (0-1.0 mole fraction) have been studied using the molecular dynamics method. The concentration-dependence curves of the experimental excess refractive indices and excess dielectric permittivities were compared. It has been shown that stable heteromolecular complexes in binary solutions are formed in the concentration range of 0.3-0.4 mole fraction. The real and imaginary parts of the dielectric permittivity were obtained from the dipole-dipole autocorrelation functions of the molecules. At concentrations of C = 0.3-0.4 mole fraction, heteromolecular structures with hydrogen bonds are formed. This is confirmed by the extremum values of the excess dielectric permittivity and excess refractive index of the aqueous solutions.
Keywords: Refractometric method, dielectric constant, molecular dynamics, aqueous solution.
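The comparison of excess properties rests on subtracting the ideal, mole-fraction-weighted value from the measured mixture value. A short sketch with assumed pure-component refractive indices and synthetic "measured" data, chosen only so the excess curve peaks in the reported 0.3-0.4 range; real refractometric data would replace the synthetic series.

```python
import numpy as np

# Pure-component refractive indices (typical literature values, used here as assumptions).
n_water, n_ethanol = 1.333, 1.361

def excess(x, y_mix, y1, y2):
    """Excess property: measured mixture value minus the ideal mole-fraction-weighted value."""
    return y_mix - (x * y2 + (1.0 - x) * y1)

x = np.linspace(0.0, 1.0, 21)                                   # ethanol mole fraction
ideal = x * n_ethanol + (1 - x) * n_water
n_mix = ideal + 0.010 * x * (1 - x) * (1.6 - 1.2 * x)           # synthetic data, peak near 0.35-0.40
nE = excess(x, n_mix, n_water, n_ethanol)
print("extremum of the excess refractive index at x =", x[np.argmax(np.abs(nE))])
```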
406 The Influence of Beta Shape Parameters in Project Planning
Authors: Alexios Kotsakis, Stefanos Katsavounis, Dimitra Alexiou
Abstract:
Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project, along with the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, but it nevertheless produces a relatively crude solution to planning problems. In this paper, based on the findings of the relevant literature, which strongly suggests that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and to measure the influence of skewness, an element inherent in the activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimations.
Keywords: Beta distribution, PERT, Monte Carlo Simulation, skewness, project completion time distribution.
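A compact Monte Carlo sketch of the approach: activity durations are drawn from scaled Beta distributions, the completion time distribution is accumulated, and a criticality index is counted for the competing branch. The four-activity network and its Beta parameters are illustrative assumptions, not the earthmoving data from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network (illustrative): A and B start together, C follows A, D follows both B and C.
# Each activity: (optimistic a, pessimistic b, Beta shape alpha, Beta shape beta).
acts = {"A": (2, 6, 2, 5), "B": (3, 9, 5, 2), "C": (1, 4, 2, 2), "D": (2, 5, 3, 3)}

def sample(a, b, alpha, beta):
    return a + (b - a) * rng.beta(alpha, beta)   # Beta-distributed duration on [a, b]

n_sim, completion, crit_counts = 10_000, [], {"A": 0, "B": 0}
for _ in range(n_sim):
    d = {k: sample(*v) for k, v in acts.items()}
    path_ac = d["A"] + d["C"]                    # finish time of the A-C branch
    start_d = max(path_ac, d["B"])               # D waits for both B and C
    completion.append(start_d + d["D"])
    crit_counts["A" if path_ac >= d["B"] else "B"] += 1

print("mean completion:", np.mean(completion), " p95:", np.percentile(completion, 95))
print("criticality index of the A-C branch:", crit_counts["A"] / n_sim)
```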
405 A Comparative Analysis Approach Based on Fuzzy AHP, TOPSIS and PROMETHEE for the Selection Problem of GSCM Solutions
Authors: Omar Boutkhoum, Mohamed Hanine, Abdessadek Bendarag
Abstract:
Sustainable economic growth is nowadays driving firms to adopt many green supply chain management (GSCM) solutions. However, the evaluation and selection of these solutions requires very serious decisions, involving complexity owing to the presence of various associated factors. To address this problem, a comparative analysis approach based on multi-criteria decision-making methods is proposed for adequate evaluation of sustainable supply chain management solutions. In the present paper, we propose an integrated decision-making model based on FAHP (Fuzzy Analytic Hierarchy Process), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) to contribute to a better understanding and development of new sustainable strategies for industrial organizations. Due to the varied importance of the selected criteria, FAHP is used to identify the evaluation criteria and assign the importance weight of each criterion, while the TOPSIS and PROMETHEE methods employ these weighted criteria as inputs to evaluate and rank the alternatives. The main objective is to provide a comparative analysis based on the TOPSIS and PROMETHEE processes to help make sound and reasoned decisions on the GSCM solution selection problem.
Keywords: GSCM solutions, multi-criteria analysis, FAHP, TOPSIS, PROMETHEE, decision support system.
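A minimal TOPSIS sketch of the ranking stage: criterion weights (standing in for FAHP output) are applied to a normalized decision matrix, and alternatives are ordered by relative closeness to the ideal solution. The matrix and weights are illustrative assumptions; the FAHP and PROMETHEE stages are omitted, although PROMETHEE would rank the same weighted matrix via pairwise preference flows.

```python
import numpy as np

# Illustrative decision matrix: 3 GSCM alternatives x 4 benefit-type criteria (assumed scores).
X = np.array([[7.0, 5.0, 8.0, 6.0],
              [6.0, 8.0, 5.0, 7.0],
              [8.0, 6.0, 7.0, 5.0]])
w = np.array([0.4, 0.3, 0.2, 0.1])   # criterion weights, e.g. obtained from FAHP

# TOPSIS: vector-normalize, weight, then measure distance to the ideal and anti-ideal solutions.
R = X / np.linalg.norm(X, axis=0)
V = R * w
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("ranking of alternatives (best first):", np.argsort(-closeness))
```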
404 An Evaluation Method of Accelerated Storage Life Test for Typical Mechanical and Electronic Products
Authors: Jinyong Yao, Hongzhi Li, Chao Du, Jiao Li
Abstract:
The reliability of long-term storage products is related to the availability of the whole system, and the evaluation of storage life is of great necessity. These products are usually highly reliable, and little failure information can be collected. In this paper, an analytical method based on data from an accelerated storage life test is proposed to evaluate the reliability index of long-term storage products. Firstly, singularities are eliminated by data normalization and residual analysis. Secondly, with the preprocessed data, a degradation path model is built to obtain pseudo life values. Then, by a life distribution hypothesis, we obtain the parameter estimates at the high stress levels and verify failure mechanism consistency. Finally, the life distribution under the normal stress level is extrapolated via the acceleration model, and an evaluation of the actual average life becomes available. An application example with a camera stabilization device is provided to illustrate the proposed methodology.
Keywords: Accelerated storage life test, failure mechanism consistency, life distribution, reliability.
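A condensed sketch of the evaluation chain under simplifying assumptions: linear degradation paths give pseudo life values at each elevated temperature, and an Arrhenius-type acceleration model extrapolates the mean life to normal storage conditions. The synthetic paths, the linear path model, and the Arrhenius choice are assumptions for illustration, not the paper's camera-stabilization data or fitted distributions.

```python
import numpy as np

# Synthetic degradation paths at two elevated temperatures (illustrative data).
t = np.array([0.0, 100.0, 200.0, 300.0, 400.0])                    # hours
paths = {
    333.0: [np.array([0, .11, .23, .33, .45]), np.array([0, .10, .21, .31, .42])],  # 60 C
    353.0: [np.array([0, .22, .44, .67, .90]), np.array([0, .20, .41, .62, .83])],  # 80 C
}
threshold = 1.0          # failure when degradation reaches this level

def pseudo_life(times, y):
    slope = np.polyfit(times, y, 1)[0]       # linear degradation path (assumption)
    return threshold / slope

# Mean pseudo life at each stress level, then Arrhenius-type extrapolation to 25 C.
T = np.array(sorted(paths))
L = np.array([np.mean([pseudo_life(t, y) for y in paths[temp]]) for temp in sorted(paths)])
b, a = np.polyfit(1.0 / T, np.log(L), 1)     # ln L = a + b / T
print("estimated mean storage life at 298 K: %.0f hours" % np.exp(a + b / 298.0))
```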
403 Application of Feed Forward Neural Networks in Modeling and Control of a Fed-Batch Crystallization Process
Authors: Petia Georgieva, Sebastião Feyo de Azevedo
Abstract:
This paper focuses on issues of nonlinear dynamic process modeling and model-based predictive control of a fed-batch sugar crystallization process, applying the concept of artificial neural networks as computational tools. The control objective is to force the operation to follow an optimal supersaturation trajectory. This is achieved by manipulating the feed flow rate of sugar liquor/syrup, considered as the control input. A feed forward neural network (FFNN) model of the process is first built as part of the controller structure to predict the process response over a specified (prediction) horizon. The predictions are supplied to an optimization procedure to determine the values of the control action over a specified (control) horizon that minimize a predefined performance index. The control task is rather challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. However, the simulation results demonstrated smooth behavior of the control actions and satisfactory reference tracking.
Keywords: Feed forward neural network, process modelling, model predictive control, crystallization process.
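A minimal sketch of the model-predictive loop described above: a one-step predictor (standing in for the trained FFNN) is rolled out over the prediction horizon, and the control moves over the control horizon are chosen by minimizing a quadratic performance index. The toy process function, the horizons, and the feed-rate bounds are assumptions for illustration, not the authors' plant model.

```python
import numpy as np
from scipy.optimize import minimize

# One-step predictor standing in for the trained FFNN process model;
# a fixed toy function keeps the sketch runnable (illustrative dynamics only).
def predict_next(s, u):
    return 0.9 * s + 0.08 * np.tanh(u) + 0.02

def cost(u_seq, s0, s_ref, horizon):
    """Sum of squared supersaturation tracking errors plus a small control-effort penalty."""
    s, J = s0, 0.0
    for k in range(horizon):
        u = u_seq[min(k, len(u_seq) - 1)]   # hold the last move beyond the control horizon
        s = predict_next(s, u)
        J += (s - s_ref) ** 2 + 0.01 * u ** 2
    return J

s0, s_ref, n_u, n_p = 0.2, 0.9, 3, 10       # control horizon 3, prediction horizon 10
res = minimize(cost, x0=np.zeros(n_u), args=(s0, s_ref, n_p),
               bounds=[(0.0, 5.0)] * n_u)   # feed flow rate limits (assumed)
print("first control move to apply:", round(float(res.x[0]), 3))
```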
402 Systems Engineering Management Using Transdisciplinary Quality System Development Lifecycle Model
Authors: Mohamed Asaad Abdelrazek, Amir Taher El-Sheikh, M. Zayan, A.M. Elhady
Abstract:
The successful realization of complex systems depends not only on the technology issues and the process for implementing them, but on the management issues as well. Managing the system development lifecycle requires technical management; systems engineering management is that technical management. Systems engineering management is accomplished by incorporating many activities, the three major ones being development phasing, the systems engineering process and lifecycle integration. Systems engineering management activities are performed across the system development lifecycle. Due to the ever-increasing complexity of systems, as well as the difficulty of managing and tracking the development activities, new ways to achieve systems engineering management activities are required. This paper presents a systematic approach used as a design management tool applied across systems engineering management roles. In this approach, the Transdisciplinary System Development Lifecycle (TSDL) Model has been modified and integrated with Quality Function Deployment (QFD). Hereinafter, the systematic approach is named the Transdisciplinary Quality System Development Lifecycle (TQSDL) Model. QFD translates the voice of the customer (VOC) into measurable technical characteristics. The modified TSDL model is based on Axiomatic Design developed by Suh, which is applicable to all designs: products, processes, systems and organizations. The TQSDL model aims to provide a robust structure and systematic thinking to support the implementation of systems engineering management roles. This approach ensures that the customer requirements are fulfilled and that all the systems engineering manager roles and activities are satisfied.
Keywords: Axiomatic design, quality function deployment, systems engineering management, system development lifecycle.
401 The Effect of Different Level Crop Load and Humic Substance Applications on Yield and Yield Components of Alphonse Lavallee Grape Cultivar
Authors: A. Sarıkaya, A. Akın
Abstract:
This study was carried out to investigate the effects of control (C), 18 bud/vine, 23 bud/vine, 28 bud/vine, 18 bud/vine + TKI-Humas (soil), 23 bud/vine + TKI-Humas (soil) and 28 bud/vine + TKI-Humas (soil) applications on the yield and yield components of the Alphonse Lavallee grape cultivar. The highest cluster weight (302.31 g) was obtained with the 18 bud/vine application; the highest berry weights (6.31 g and 6.79 g) with the 23 bud/vine + TKI-Humas (soil) and 28 bud/vine + TKI-Humas (soil) applications, respectively; the highest maturity index (36.95) and the highest L* color intensity (33.99) with the 18 bud/vine + TKI-Humas (soil) application; and the highest a* color intensity (1.53) with the 23 bud/vine + TKI-Humas (soil) application. The effects of the applications on fresh grape yield, grape juice yield and b* color intensity were not statistically significant.
Keywords: Alphonse Lavallee grape cultivar, crop load, TKI-Humas substances (soil), yield, quality.
400 Analysis of Genetic Variations in Camel Breeds (Camelus dromedarius)
Authors: Yasser M. Saad, Amr A. El Hanafy, Saleh A. Alkarim, Hussein A. Almehdar, Elrashdy M. Redwan
Abstract:
Camels are substantial providers of transport, milk, sport, meat, shelter, security and capital in many countries, particularly in Saudi Arabia. The inter simple sequence repeat (ISSR) technique was used to detect genetic variations among some camel breeds (Majaheim, Safra, Wadah, and Hamara). The actual number of alleles, effective number of alleles, gene diversity, Shannon's information index and polymorphic bands were calculated for each evaluated camel breed. The neighbor-joining tree reconstructed for these camel breeds showed that the Hamara breed is distantly related to the other evaluated camels. In addition, the polymorphic sites, haplotypes and nucleotide diversity were identified for some camelid cox1 gene sequences (obtained from NCBI). The distance value between C. bactrianus and C. dromedarius (0.072) was relatively low. Analysis of genetic diversity is an important way of conserving Camelus dromedarius genetic resources.
Keywords: Camel, genetics, ISSR, cox1, neighbor-joining.
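For dominant ISSR bands, the diversity statistics mentioned above reduce to simple functions of allele frequencies per locus. A short sketch with assumed band frequencies for one breed (not the study's data):

```python
import numpy as np

# Illustrative ISSR band (allele) frequencies at four loci for one breed (assumed values).
p = np.array([0.6, 0.35, 0.8, 0.5])    # frequency of the "band present" allele per locus
q = 1.0 - p

# Nei's gene diversity and Shannon's information index, averaged over loci.
gene_diversity = np.mean(1.0 - (p**2 + q**2))
shannon = np.mean(-(p * np.log(p) + q * np.log(q)))
print(f"gene diversity h = {gene_diversity:.3f}, Shannon index I = {shannon:.3f}")
```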
399 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level
Authors: Pedro M. Abreu, Bruno R. Mendes
Abstract:
The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied over the past few years in healthcare. Co-payments lead to rationing and rationalization of users' access to healthcare services and products and, simultaneously, to a qualification and improvement of the services and products for the end-user. This analysis, of hospital practices in particular and co-payment strategies in general, was carried out across all the European regions and identified four reference countries that apply this tool repeatedly and with different approaches. The structure, content and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, and the results were expressed in a scorecard, allowing the conclusion that the German models (total scores of 68.2% and 63.6% for the two elected co-payments) can achieve more compliance and effectiveness, the English models (total score of 50%) can be more accessible, and the French models (total score of 50%) can be more adequate to the socio-economic and legal framework. Other European models did not show the same quality and/or performance, so they were not taken as a standard in the future design of co-payment strategies. In this sense, co-payments can be seen as a strategy not only to moderate the consumption of healthcare products and services, but especially to improve them, as well as a strategy to increase the value that the end-user assigns to these services and products, such as medicines.
Keywords: Clinical pharmacy, co-payments, healthcare, medicines.
398 Digital Automatic Gain Control Integrated on WLAN Platform
Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel
Abstract:
In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we have used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control over those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is done in a few steps: measuring the power levels of baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting the gain factor of the accumulation, and applying the adjusted gain factor to the baseband values. Based on the measurement results of the RSSI signal dependence on input power, we have concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but also effective, and it reduces the complexity and power consumption of the DAGC. This DAGC is implemented and tested both in FPGA and in ASIC as a part of our WLAN baseband processor. Finally, we have integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips that we designed.
Keywords: WLAN, AGC, RSSI, baseband processor.
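A toy Python loop illustrating the DAGC steps listed above: measure the power of a block of baseband samples, accumulate the difference from the set point, and apply the adjusted gain to the next block. The set point, loop step size, and signal statistics are illustrative assumptions, not the register values of the implemented hardware block.

```python
import numpy as np

rng = np.random.default_rng(2)
target_dbfs = -12.0        # desired average baseband power (assumed set point)
gain_db, step = 0.0, 0.25  # accumulated gain correction and loop step size

for _ in range(200):
    # Measure the power of a block of baseband samples at the current gain setting.
    x = 10 ** (gain_db / 20.0) * 0.05 * rng.standard_normal(64)
    power_dbfs = 10.0 * np.log10(np.mean(x ** 2))
    # Accumulate the difference between the measured power and the set point,
    # then apply the adjusted gain to the VGA/LNA setting for the next block.
    gain_db += step * (target_dbfs - power_dbfs)

print("converged gain setting: %.1f dB" % gain_db)
```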
397 Coastal Ecological Sensitivity and Risk Assessment: A Case Study of Sea Level Change in Apodi River (Atlantic Ocean), Northeast Brazil
Authors: Mukesh Singh Boori, Venerando Eustáquio Amaro, Helenice Vital
Abstract:
The present study has been carried out with a view to calculating the coastal vulnerability index (CVI), identifying areas of high and low sensitivity, and mapping the area of inundation due to future sea-level rise (SLR). Both conventional and remotely sensed data were used and analyzed through a modelling technique. Of the total study area, 8.26% falls in the very high risk category, 14.21% high, 9.36% medium, 22.46% low and 7.35% very low, due to coastal components. Results of the inundation analysis indicate that 225.2 km² and 397 km² of the land area will be submerged by flooding at the 1 m and 10 m inundation levels. The most severely affected sectors are expected to be the residential, industrial and recreational areas. As this coast is planned for future coastal development activities, measures such as industrialization, building regulation, urban growth planning and agriculture, the development of integrated coastal zone management, strict enforcement of the Coastal Regulation Zone (CRZ) Act, monitoring of impacts and further research are recommended for the study area.
Keywords: Coastal planning, land use, satellite data, vulnerability.
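The CVI is commonly computed as the square root of the product of ranked variables divided by their number. A brief sketch of that standard form with assumed 1-5 ranks for six variables; the study's own variables and GIS-derived ranks are not reproduced here, and its exact formulation may differ.

```python
import numpy as np

def cvi(ranks):
    """Coastal Vulnerability Index in the square-root-of-product form."""
    ranks = np.asarray(ranks, dtype=float)
    return np.sqrt(np.prod(ranks) / ranks.size)

# Illustrative shoreline segments with assumed 1-5 ranks for six coastal variables.
segments = {"estuary mouth": [5, 4, 4, 3, 5, 4], "upland sector": [2, 1, 3, 2, 1, 2]}
for name, r in segments.items():
    print(f"{name}: CVI = {cvi(r):.1f}")
```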
396 Robust Fractional-Order PI Controller with Ziegler-Nichols Rules
Authors: Mazidah Tajjudin, Mohd Hezri Fazalul Rahiman, Norhashim Mohd Arshad, Ramli Adnan
Abstract:
In process control applications, over 90% of the controllers are of the PID type. This paper proposes a robust PI controller with a fractional-order integrator. The PI parameters were obtained using the classical Ziegler-Nichols rules, enhanced with an error filter cascaded to the fractional-order PI. The controller was applied to a steam temperature process described by a FOPDT transfer function. The process can be classified as a lag-dominant process with a very small relative dead time. The proposed control scheme was compared with other PI controllers tuned using the Ziegler-Nichols and AMIGO rules. Another PI controller with a fractional-order integrator, known as F-MIGO, was also considered. All the controllers were subjected to set-point change and load disturbance tests. Performance was measured using the Integral of Squared Error (ISE) and the Integral of Control Signal (ICO). The proposed controller produced the best performance in all the tests, with the lowest ISE index.
Keywords: PID controller, fractional-order PID controller, PI control tuning, steam temperature control, Ziegler-Nichols tuning.
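A small sketch of the tuning and evaluation steps under stated assumptions: classical open-loop Ziegler-Nichols PI rules applied to an assumed FOPDT steam-temperature model, with ISE and ICO computed from a simple Euler simulation. The fractional-order integrator and error filter of the proposed scheme are replaced here by a plain integer-order PI, so this only illustrates the benchmark side of the comparison.

```python
import numpy as np

# Assumed FOPDT steam-temperature model: gain K, time constant tau, dead time L (seconds).
K, tau, L, dt = 2.0, 100.0, 5.0, 0.5

# Classical Ziegler-Nichols open-loop PI rules: Kp = 0.9*tau/(K*L), Ti = L/0.3.
Kp, Ti = 0.9 * tau / (K * L), L / 0.3

def simulate(Kp, Ti, t_end=600.0, sp=1.0):
    """Euler simulation of the closed loop; returns ISE and ICO for a set-point step."""
    n, nd = int(t_end / dt), int(L / dt)
    y, I, u_hist, ise, ico = 0.0, 0.0, [0.0] * nd, 0.0, 0.0
    for _ in range(n):
        e = sp - y
        I += e * dt
        u = Kp * (e + I / Ti)              # integer-order PI (stand-in for the fractional one)
        u_hist.append(u)
        y += dt * (-y + K * u_hist.pop(0)) / tau   # FOPDT plant with a delay buffer
        ise += e * e * dt
        ico += abs(u) * dt
    return ise, ico

print("ISE = %.2f, ICO = %.1f" % simulate(Kp, Ti))
```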
395 Tipover Stability Enhancement of Wheeled Mobile Manipulators Using an Adaptive Neuro-Fuzzy Inference Controller System
Authors: A. Ghaffari, A. Meghdari, D. Naderi, S. Eslami
Abstract:
In this paper, an algorithm based on an adaptive neuro-fuzzy controller is provided to enhance the tipover stability of mobile manipulators when they are subjected to predefined trajectories for the end-effector and the vehicle. The controller creates proper configurations for the manipulator to prevent the robot from being overturned. The optimal configuration, and thus the most favorable control, is obtained through soft computing approaches including a combination of genetic algorithms, neural networks, and fuzzy logic. The proposed algorithm is as follows: a look-up table is designed by employing the values obtained from the genetic algorithm in order to minimize the performance index, and using this database, rule bases are designed for the ANFIS controller and exerted on the actuators to enhance the tipover stability of the mobile manipulator. A numerical example is presented to demonstrate the effectiveness of the proposed algorithm.
Keywords: Mobile manipulator, tipover stability enhancement, adaptive neuro-fuzzy inference controller system, soft computing.
394 Performance Analysis of Digital Signal Processors Using SMV Benchmark
Authors: Erh-Wen Hu, Cyril S. Ku, Andrew T. Russo, Bogong Su, Jian Wang
Abstract:
Unlike general-purpose processors, digital signal processors (DSP processors) are strongly application-dependent. To meet the needs of diverse applications, a wide variety of DSP processors based on different architectures, ranging from the traditional to VLIW, have been introduced to the market over the years. The functionality, performance, and cost of these processors vary over a wide range. In order to select a processor that meets the design criteria for an application, processor performance is usually the major concern for digital signal processing (DSP) application developers. Performance data are also essential for the designers of DSP processors to improve their designs. Consequently, several DSP performance benchmarks have been proposed over the past decade or so. However, none of these benchmarks seem to have included recent new DSP applications. In this paper, we use a new benchmark that we recently developed to compare the performance of popular DSP processors from Texas Instruments and StarCore. The new benchmark is based on the Selectable Mode Vocoder (SMV), a speech-coding program from recent third-generation (3G) wireless voice applications. All benchmark kernels are compiled by the compilers of the respective DSP processors and run on their simulators. The weighted arithmetic mean of clock cycles and the arithmetic mean of code size are used to compare the performance of five DSP processors. In addition, we studied how the performance of a processor is affected by code structure, features of the processor architecture and compiler optimization. The extensive experimental data gathered, analyzed, and presented in this paper should be helpful for DSP processor and compiler designers in meeting their specific design goals.
Keywords: Digital signal processors, DSP benchmark, instruction level parallelism, modified cyclomatic complexity, performance analysis.
393 Closed-Form Delay Model for On-Chip VLSI RLCG Interconnects for Ramp Input for Different Damping Conditions
Authors: Susmita Sahoo, Madhumanti Datta, Rajib Kar
Abstract:
Fast delay estimation methods, as opposed to simulation techniques, are needed for incremental performance-driven layout synthesis. On-chip inductive effects are becoming predominant in deep submicron interconnects due to increasing clock speeds and circuit complexity. Inductance causes noise in signal waveforms, which can adversely affect the performance of the circuit and signal integrity. Several approaches have been put forward that consider inductance for on-chip interconnect modelling. But at even higher frequencies, of the order of a few GHz, the shunt dielectric lossy component becomes comparable to the other electrical parameters in high-speed VLSI design. In order to cope with this effect, the on-chip interconnect has to be modelled as a distributed RLCG line. Elmore-delay-based methods, although efficient, cannot accurately estimate the delay for RLCG interconnect lines. In this paper, an accurate analytical delay model has been derived, based on the first and second moments of RLCG interconnection lines. The proposed model considers the effects of both the inductance and conductance matrices. We have performed the simulation at the 0.18 μm technology node, and an error as low as less than 5% has been achieved with the proposed model when compared to SPICE. The importance of the conductance matrices in interconnect modelling has also been discussed, and it is shown that if G is neglected in interconnect line modelling, the result is a delay error as high as 6% when compared to SPICE.
Keywords: Delay modelling, on-chip interconnect, RLCG interconnect, ramp input, damping, VLSI.
392 The Situation in the Public Procurement Market in Post-Communist Countries: The Case of the Czech Republic
Authors: Jan Pavel
Abstract:
Public procurement is one of the most important areas in the public sector that opens a possibility for corruption. Due to the volume of the funds that are allocated through this institution (in the EU countries between 10 and 15% of GDP), it has very serious implications for the efficiency of public expenditures and the overall economic efficiency as well. Indicators that are usually used for the measurement of corruption (such as the Corruption Perceptions Index, CPI) show that the worst situation is in the post-communist and Mediterranean countries. The presented paper uses the Czech Republic as an example of a post-communist country and analyses the factors which influence the scope of corruption in public procurement. Moreover, the paper discusses indicators that could point to public procurement market inefficiency. The presented results show that post-communist states use the institution of public contracts significantly more than the old member countries of continental Europe. This has a very important implication because it gives more space for corruption. Furthermore, it appears that the inefficient functioning of the public procurement market is clearly manifested in a low number of bids, a low level of market transparency and an ineffective control system. Some of the observed indicators are statistically significantly correlated with the CPI.
Keywords: Czech Republic, corruption, public procurement, post-communist countries.
391 A Study to Design a Survey to Encourage the University-Industry Relation
Authors: Lizbeth Puerta, Enselmina Marín
Abstract:
The purpose of this research is to present a survey, to be applied to professors of public universities, to identify the factors that benefit or hinder the university-industry relation. Hence, this research studies some elements that integrate the variables knowledge management, technology management, and technology transfer, to determine whether a relation exists between these variables and the innovation needs of industry. This study is exploratory, descriptive and non-experimental. The research question is: What is the impact of the knowledge management, the technology management, and the technology transfer carried out by the administrative support areas of public universities on industrial innovation? Thus, a literature review was conducted to identify the elements that should be considered to design a survey that yields valid information on the study variables. After this, the survey was developed, and a content validity analysis was made using the Lawshe model. The analysis indicated that the Content Validity Index (CVI) was 0.80. Hence, it was determined that this survey presents acceptable psychometric properties to be used as an evaluation tool.
Keywords: Innovation, knowledge management, technology management, technology transfer.
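A brief sketch of the Lawshe computation: each item's content validity ratio (CVR) comes from the number of experts rating it essential, and the CVI is the mean over items. The panel size and vote counts below are illustrative assumptions, not the study's expert data.

```python
# Lawshe content validity: CVR per item, CVI as the mean over retained items.
def cvr(n_essential, n_experts):
    """CVR = (n_e - N/2) / (N/2), where n_e experts rated the item 'essential'."""
    return (n_essential - n_experts / 2.0) / (n_experts / 2.0)

n_experts = 10
essential_counts = [9, 8, 10, 7, 9]          # "essential" votes per survey item (assumed)
cvrs = [cvr(c, n_experts) for c in essential_counts]
cvi = sum(cvrs) / len(cvrs)
print("item CVRs:", [round(v, 2) for v in cvrs], " CVI =", round(cvi, 2))
```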
390 Hydraulic Unbalance in Oil Injected Twin Rotary Screw Compressor Vibration Analysis (A Case History Related to Iran Oil Industries)
Authors: Omid A. Zargar
Abstract:
Vibration analysis of screw compressors is one of the most challenging cases in preventive maintenance; this kind of equipment is considered a vibration bad actor among industrial plant facilities. Online condition monitoring systems have developed considerably in recent years. The high-frequency vibration of the ball bearings, gears, and male and female rotors produces complex fast Fourier transform (FFT) spectra and time waveforms (TWF) in screw compressors. The male and female rotors are occasionally sent to the balance shop for balancing. This operation usually causes some bending of the rotors during the process, which can require further machining of such equipment. Such machining increases the complexity of the vibration analysis, besides some process characteristic abnormalities like inlet and outlet pressure and temperature. In this paper, the mechanical principles and different types of screw compressors are explained. In addition, some new condition monitoring systems and techniques for screw compressors are discussed. Finally, a common behavior of oil injected twin rotary screw compressors called hydraulic unbalance, which usually occurs after machining of the male or female rotor and has some specific characteristics in the FFT and TWF, is discussed in detail through a case history related to Iran oil industries.
Keywords: Vibration analysis, twin screw compressor, oil injected screw compressor, time wave form (TWF), fast Fourier transform (FFT), hydraulic unbalance, rotor unbalance.
389 Hierarchies Based On the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes
Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani
Abstract:
In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. Later, M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have investigated the properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19],[21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape on which these finite automata work independently (in parallel). Those finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the same cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of cooperating systems of eight- or seven-way four-dimensional finite automata. The seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.
Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.
388 Effect of Dietary α-Cellulose Levels on the Growth Parameters of Nile Tilapia Oreochromis niloticus Fingerlings
Authors: Keri Alhadi Ighwela, Aziz Bin Ahmad, A. B. Abol-Munafi
Abstract:
Three purified diets were formulated using fish meal, soya bean, wheat flour, palm oil, minerals and maltose. The carbohydrate content of the diets was increased from 5 to 15% by changing the cellulose content, to study the effect of dietary carbohydrate level on the growth parameters of Nile tilapia Oreochromis niloticus. The protein and lipid contents were kept constant in all the diets. The results showed that the weight gain, protein efficiency ratio, net protein utilisation and hepatosomatic index of fish fed the diet containing 15% cellulose were the lowest among all groups. In addition, the fish fed the diet containing 5% cellulose had the best specific growth rate and food conversion ratio, while the dietary cellulose level had no effect on condition factor and survival rate. These results indicate that Nile tilapia fingerlings are able to utilize dietary cellulose for optimum growth, provided it does not exceed 10% of their feed.
Keywords: Dietary cellulose, growth parameters, Nile tilapia Oreochromis niloticus, purified diets.
387 Cumulative Learning Based on Dynamic Clustering of Hierarchical Production Rules (HPRs)
Authors: Kamal K. Bharadwaj, Rekha Kandwal
Abstract:
An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing the intraclass similarity and minimizing the interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that group similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <generality information> Specificity <specificity information>.
Keywords: Cumulative learning, clustering, data mining, hierarchical production rules.
386 MHD Natural Convection Flow of Tangent Hyperbolic Nanofluid Past a Vertical Permeable Cone
Authors: A. Mahdy
Abstract:
In this paper, a non-similarity analysis is presented to exhibit the two-dimensional boundary layer flow of magnetohydrodynamic (MHD) natural convection of a tangent hyperbolic nanofluid near a vertical permeable cone in the presence of a variable wall temperature. The transformed boundary layer nonlinear governing equations are solved numerically by an efficient implicit finite difference procedure. For both the nanofluid effective viscosity and the nanofluid thermal conductivity, a number of experimental relations have been recognized. For characterizing the nanofluid, the compatible nanoparticle volume fraction model has been used. The Nusselt number and skin friction coefficient are calculated for some values of the Weissenberg number W, surface temperature exponent n, magnetic field parameter Mg, power law index m and Prandtl number Pr as functions of the suction parameter. The rate of heat transfer from a vertical permeable cone in a regular fluid is less than that in nanofluids. The best convection was exhibited by the copper nanoparticle among all the nanoparticles used.
Keywords: Tangent hyperbolic nanofluid, finite difference, non-similarity, isothermal cone.
385 Optimization Approaches for a Complex Dairy Farm Simulation Model
Authors: Jagannath Aryal, Don Kulasiri, Dishi Liu
Abstract:
This paper describes the optimization of a complex dairy farm simulation model using two quite different methods of optimization, the genetic algorithm (GA) and the Lipschitz branch-and-bound (LBB) algorithm. These techniques have been used to improve an agricultural system model developed by Dexcel Limited, New Zealand, which provides a detailed representation of pastoral dairying scenarios and contains an 8-dimensional parameter space. The model incorporates sub-models of pasture growth and animal metabolism, which are themselves complex in many cases. Each evaluation of the objective function, a composite 'Farm Performance Index (FPI)', requires simulation of at least a one-year period of farm operation with a daily time step, and is therefore computationally expensive. The problem of visualizing the objective function (response surface) in high-dimensional spaces is also considered in the context of the farm optimization problem. Adaptations of the Sammon mapping and parallel coordinates visualization are described which help visualize some important properties of the model's output topography. From this study, it is found that the GA requires fewer function evaluations in optimization than the LBB algorithm.
Keywords: Genetic algorithm, Linux cluster, Lipschitz branch-and-bound, optimization.
384 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning
Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar
Abstract:
As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing by collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. However, resource scheduling is usually an NP-hard problem, so no general solution exists, although some optimization algorithms do, such as genetic algorithms and ant colony optimization. The large scale of distributed systems makes these traditional optimization algorithms challenging to apply, so heuristic and machine learning algorithms are usually used in this situation to ease the computing load. Accordingly, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using this machine learning method, we try to find important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling. The research proposes a deep reinforcement learning method that uses a recurrent neural network to optimize the resource scheduling. The paper concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.
Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.
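As a deliberately tiny stand-in for the recurrent deep RL agent discussed above, the sketch below uses tabular Q-learning to assign jobs to two machines, with the negative makespan as the terminal reward. The job list, state encoding, and hyperparameters are illustrative assumptions, not the proposed method's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
jobs = [3, 2, 4, 1, 2]                 # processing times (illustrative)
n_machines = 2

def state(i, loads):
    """State: index of the next job plus the current load gap between the two machines."""
    return (i, loads[0] - loads[1])

Q, alpha, gamma, eps = {}, 0.3, 0.95, 0.2
for episode in range(2000):
    loads = [0, 0]
    for i, p in enumerate(jobs):
        s = state(i, loads)
        Q.setdefault(s, np.zeros(n_machines))
        a = int(rng.integers(n_machines)) if rng.random() < eps else int(np.argmax(Q[s]))
        loads[a] += p
        r = -max(loads) if i == len(jobs) - 1 else 0.0   # reward: negative makespan at the end
        s2 = state(i + 1, loads)
        Q.setdefault(s2, np.zeros(n_machines))
        Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s][a])

# Greedy rollout of the learned policy.
loads = [0, 0]
for i, p in enumerate(jobs):
    a = int(np.argmax(Q.get(state(i, loads), np.zeros(n_machines))))
    loads[a] += p
print("greedy makespan:", max(loads))
```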
383 Biplot Analysis for Evaluation of Tolerance in Some Bean (Phaseolus vulgaris L.) Genotypes to Bean Common Mosaic Virus (BCMV)
Authors: S. Ghasemi, M. M. Kamelmanesh, A. Namayandeh, R. Biabanikhankahdani
Abstract:
The common bean is the most important grain legume for direct human consumption in the world, and BCMV is one of the world's most serious bean diseases, capable of reducing the yield and quality of the harvested product. To determine the best tolerance index to BCMV and recognize tolerant genotypes, two experiments were conducted under field conditions. Twenty-five common bean genotypes were sown in two separate randomized complete block (RCB) designs with three replications, under contamination and non-contamination conditions. On the basis of the correlations among the indices, GMP, MP and HARM were determined to be the most suitable tolerance indices. The results of principal components analysis indicated that the first two components together explained 98.52% of the variation among the data. The first and second components were named potential yield and stress susceptibility, respectively. Based on the results of the BCMV tolerance index assessment and biplot analysis, WA8563-4, WA8563-2 and Cardinal were the genotypes that exhibited potential seed yield under both contamination and non-contamination conditions.
Keywords: Phaseolus vulgaris, BCMV, principal components analysis, biplot analysis, tolerance.
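The three indices singled out above have standard closed forms in terms of yield under non-stress (Yp) and stress (Ys) conditions. A short sketch with placeholder yields (not the trial data):

```python
import numpy as np

# Seed yield of a few genotypes under non-contaminated (Yp) and BCMV-contaminated (Ys)
# conditions; the numbers are illustrative placeholders only.
Yp = np.array([310.0, 280.0, 295.0])
Ys = np.array([250.0, 240.0, 230.0])

MP   = (Yp + Ys) / 2.0                 # mean productivity
GMP  = np.sqrt(Yp * Ys)                # geometric mean productivity
HARM = 2.0 * Yp * Ys / (Yp + Ys)       # harmonic mean

for name, idx in (("MP", MP), ("GMP", GMP), ("HARM", HARM)):
    print(name, np.round(idx, 1))
```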
382 Simulation Aided Life Cycle Sustainability Assessment Framework for Manufacturing Design and Management
Authors: Mijoh A. Gbededo, Kapila Liyanage, Ilias Oraifige
Abstract:
Decision making for sustainable manufacturing design and management requires critical consideration due to the complexity and partly conflicting issues of economic, social and environmental factors. Although there are tools capable of assessing a combination of one or two of the sustainability factors, existing frameworks have not adequately integrated all three factors. A case study and a review of existing simulation applications also show that the approach lacks integration of the sustainability factors. In this paper we discuss the development of a simulation-based framework to support a holistic assessment of sustainable manufacturing design and management. To achieve this, a strategic approach is introduced to investigate the strengths and weaknesses of the existing decision-supporting tools. The investigation reveals that Discrete Event Simulation (DES) can serve as a rock base for other life cycle analysis frameworks. The Simio DES application optimizes systems for both economic and competitive advantage; Granta CES EduPack and SimaPro collate data for material flow analysis and environmental life cycle assessment, while social and stakeholder analysis is supported by the Analytic Hierarchy Process, a multi-criteria decision analysis method. Such a common and integrated framework creates a platform for companies to build a computer simulation model of a real system and assess the impact of alternative solutions before implementing a chosen solution.
Keywords: Discrete event simulation, life cycle sustainability analysis, manufacturing, sustainability.