Search results for: performance criteria
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14702

8972 Evaluation of the Benefit of Anti-Endomysial IgA and Anti-Tissue Transglutaminase IgA Antibodies for the Diagnosis of Coeliac Disease in a University Hospital, 2010-2016

Authors: Recep Keşli, Onur Türkyılmaz, Hayriye Tokay, Kasım Demir

Abstract:

Objective: Coeliac disease (CD) is a primary small intestine disorder caused by hypersensitivity to gluten, which is present in cereal crops, and is characterized by inflammation of the small intestine mucosa. The goal of this study was to determine and compare the sensitivity and specificity of anti-endomysial IgA (EMA IgA) (IFA) and anti-tissue transglutaminase IgA (anti-tTG IgA) (ELISA) antibodies in the diagnosis of patients with suspected CD. Methods: One thousand two hundred and seventy-three patients who presented to the gastroenterology and paediatric outpatient clinics of Afyon Kocatepe University ANS Research and Practice Hospital between 23.09.2010 and 30.05.2016 were included in the study. Serum samples were investigated for EMA positivity by the immunofluorescence method (Euroimmun, Luebeck, Germany). Quantitative anti-tTG IgA values (EIA) (Orgentec, Mainz, Germany) were determined with a fully automated ELISA device (Alisei, Seac, Firenze, Italy). Results: Out of 1273 patients, 160 were diagnosed with coeliac disease according to the ESPGHAN 2012 diagnostic criteria. Of the 160 CD patients, 120 were female and 40 were male. The specificity and sensitivity of EMA were calculated as 98% and 80%, respectively. The specificity and sensitivity of anti-tTG IgA were determined as 99% and 96%, respectively. Conclusion: The specificity of EMA for CD was excellent because all EMA-positive patients (n = 144) were diagnosed with CD. The presence of human anti-tTG IgA was found to be a reliable marker for the diagnosis and follow-up of CD. The diagnosis of CD should be established on the clinical and serologic profiles together.
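
As a quick illustration of the diagnostic metrics reported above, the following Python sketch computes sensitivity and specificity from a 2x2 confusion table; the counts used are hypothetical placeholders chosen only to reproduce the reported EMA percentages, not the study's raw data.

```python
# Minimal sketch: sensitivity and specificity from a 2x2 confusion table.
# The counts below are hypothetical placeholders, not the study's raw data.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positive rate among confirmed CD patients
    specificity = tn / (tn + fp)   # true negative rate among non-CD patients
    return sensitivity, specificity

# Hypothetical split of the 1273 patients into CD and non-CD groups
sens, spec = sensitivity_specificity(tp=128, fn=32, tn=1091, fp=22)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```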

Keywords: anti-endomysial antibody, anti-tTG IgA, coeliac disease, immunofluorescence assay (IFA)

Procedia PDF Downloads 246
8971 Substantial Fatigue Similarity of a New Small-Scale Test Rig to Actual Wheel-Rail System

Authors: Meysam Naeimi, Zili Li, Roumen Petrov, Rolf Dollevoet, Jilt Sietsma, Jun Wu

Abstract:

The substantial similarity of the fatigue mechanism in a new test rig for rolling contact fatigue (RCF) has been investigated. A new reduced-scale test rig is designed to perform controlled RCF tests on wheel-rail materials. The fatigue mechanism of the rig is evaluated in this study using a combined finite element-fatigue prediction approach. The influences of loading conditions on fatigue crack initiation have been studied. Furthermore, the effects of some artificial defects (squat-shaped) on fatigue lives are examined. To simulate the vehicle-track interaction by means of the test rig, a three-dimensional finite element (FE) model is built up. The nonlinear material behaviour of the rail steel is modelled in the contact interface. The results of the FE simulations are combined with the critical plane concept to determine the material points with the greatest possibility of fatigue failure. Based on the stress-strain responses, and by employing previously postulated criteria for fatigue crack initiation (plastic shakedown and ratchetting), fatigue life analysis is carried out. The results are reported for various loading conditions and different defect sizes. Afterward, the cyclic mechanism of the test rig is evaluated from the operational viewpoint. The results of the fatigue life predictions are compared with the number of cycles the test rig is expected to deliver, given its cyclic nature. Finally, the estimated duration of the experiments until fatigue crack initiation is roughly determined.

Keywords: fatigue, test rig, crack initiation, life, rail, squats

Procedia PDF Downloads 500
8970 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach

Authors: Eric Mxolisi Mkhondo

Abstract:

Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly due to imperfections of components, which are invisible to the naked eye. They also tend to cause unsatisfactory performance during operation due to deformation caused by environmental conditions. One of the effective tools to manage the deviations and interaction of parts in a system is tolerance analysis. This is a quantitative tool for predicting the tolerance variations which are defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that the deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts due to environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system's specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower limits) that vary component geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to the effects of operating temperatures. The method is used to evaluate the nominal condition and the worst-case conditions at the maximum and minimum dimensions of assembled components. These three conditions are evaluated under specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system is used to illustrate the effectiveness of the methodology.
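
A minimal sketch of the worst-case evaluation described above is given below, assuming a simple linear stack-up of three parts and a uniform thermal expansion term; the part dimensions, tolerances, and expansion coefficient are illustrative assumptions, not values from the zoom lens case study.

```python
# Worst-case tolerance stack with a thermal expansion term (illustrative values only).
ALPHA = 23e-6  # 1/degC, assumed coefficient of thermal expansion (aluminium-like)

parts = [  # (nominal_mm, tolerance_mm)
    (12.00, 0.020),
    (8.50, 0.015),
    (5.25, 0.010),
]

def worst_case_stack(parts, alpha, delta_t):
    nominal = sum(n for n, _ in parts)
    tol = sum(t for _, t in parts)          # worst-case: tolerances add linearly
    thermal = nominal * alpha * delta_t     # uniform expansion of the whole stack
    return nominal + thermal - tol, nominal + thermal + tol

for temp in (-40, -18, 4, 26, 48, 70):
    lo, hi = worst_case_stack(parts, ALPHA, delta_t=temp - 20)  # 20 degC reference
    print(f"{temp:>4} degC: stack length between {lo:.3f} and {hi:.3f} mm")
```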

Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism

Procedia PDF Downloads 154
8969 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can easily be found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
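
To make the quiescent weight vector and blocking matrix mentioned above concrete, the sketch below builds both from a presumed steering vector of a uniform linear array; the array size, spacing, and presumed DOA are illustrative assumptions, and the construction is the textbook GSC decomposition rather than the authors' exact formulation.

```python
import numpy as np
from scipy.linalg import null_space

def gsc_components(steering):
    """Quiescent weight vector and signal blocking matrix for a presumed steering vector."""
    a = steering.reshape(-1, 1)
    w_q = a / (a.conj().T @ a)          # quiescent (non-adaptive) branch
    B = null_space(a.conj().T)          # columns span the subspace orthogonal to a
    return w_q, B

# Uniform linear array, half-wavelength spacing, presumed DOA of 10 degrees (assumed values)
M = 8
theta = np.deg2rad(10.0)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
w_q, B = gsc_components(a)
print(w_q.shape, B.shape)   # (8, 1) and (8, 7): M - 1 adaptive degrees of freedom
```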

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 99
8968 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey shows that the reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples which can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast high-throughput sample analysis with low analysis cost for analyzing aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
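
The linearity check reported above amounts to fitting a calibration line over the 0.5-40.0 µg/mL range and confirming the coefficient of determination; the sketch below does this in Python with made-up peak-area ratios (the calibration levels follow the stated range, but the response values are assumptions).

```python
import numpy as np

# Calibration levels within the stated 0.5-40.0 ug/mL range; area ratios are illustrative.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])               # ug/mL
ratio = np.array([0.031, 0.062, 0.125, 0.312, 0.618, 1.242, 2.471])   # analyte/IS peak-area ratio (assumed)

slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r2:.4f}")
```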

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 206
8967 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana

Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet

Abstract:

The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to the integration of such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to ensure more reliable and profitable engagements related to their future power delivery. Even though a diversity of probabilistic approaches has been proposed in the literature with promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and probabilistic forecasts generated with two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the utility and the capability of probabilistic methods to model the progressively increasing uncertainty. Even though probabilistic approaches have unquestionably matured in the recent literature, the results obtained through a case study show that deterministic forecasts still provide the best performance if accurate, ensuring a gain of 14% on final profits compared to the average performance of probabilistic models conditioned on the same forecasts. When the accuracy of deterministic forecasts progressively decreases, probabilistic approaches start to become competitive options, until they completely outperform deterministic forecasts when these are very inaccurate, generating 73% more profits in the case considered compared to the deterministic approach.

Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems

Procedia PDF Downloads 66
8966 The Communication Between Visual Aesthetic Criteria of Product with User Experience and Social Sustainability: A Study of Street Furniture

Authors: Hassan Sadeghi Naeini, Mozhgan Sabzehparvar, Mahdiye Jafarnezhad, Neda Brumandi, Mohammad Parsa Sabzehparvar

Abstract:

This study aims to discover the relationship between the factors of aesthetics, user experience, and social sustainability in the design of street furniture, and the impact of these factors on the emotional arousal of citizens, encouraging them to prefer to use street furniture. The method used in this research involved extracting indicators related to each of the factors of aesthetics, user experience, and social sustainability from the literature and then selecting the indicators related to the purpose of the research in consultation with industrial design experts and architects. Finally, 9 variables for aesthetics, 7 variables for user experience, and 5 variables for evaluating social sustainability were selected to identify the effect of each of these factors on street furniture and to recognize their relationships with each other. A 10-point prioritization questionnaire, from 1 (least important) to 10 (most important), was answered by architects and industrial designers on the “Pors Line” online platform for three consecutive weeks, and a total of 82 people answered the questionnaire. The results showed that, by using aesthetic factors in the design of street furniture and thereby having a positive impact on users’ experience of the product, we could expect behavioral outcomes such as constructive interaction and product acceptance, so that user satisfaction in the use of street furniture and optimal interaction in the urban environment are achieved and, following that, the requirements of social sustainability are met.

Keywords: visual aesthetic, user experience, social sustainability, street furniture

Procedia PDF Downloads 75
8965 Seaworthiness and Liability Risks Involving Technology and Cybersecurity in Transport and Logistics

Authors: Eugene Wong, Felix Chan, Linsey Chen, Joey Cheung

Abstract:

The widespread use of technologies and cyber/digital means for complex maritime operations has led to a sharp rise in global cyber-attacks. These attacks have generated an increasing number of liability disputes, insurance claims, and legal proceedings. An array of antiquated case law, regulations, international conventions, and obsolete contractual clauses drafted in the pre-technology era has become grossly inadequate in addressing the contemporary challenges. This paper offers a critique of the ambiguity of cybersecurity liabilities under the obligation of seaworthiness entailed in the Hague-Visby Rules, which apply either by law in a large number of jurisdictions or by express incorporation into shipping documents. This paper also evaluates the legal and technological criteria for assessing whether a vessel is properly equipped with the latest offshore technologies for navigation and cargo delivery operations. Examples include computer applications, networks and servers, enterprise systems, global positioning systems, and data centers. A critical analysis of the carriers’ obligation to exercise due diligence in preventing or mitigating cyber-attacks is also conducted in this paper. It is hoped that the present study will offer original and crucial insights to policymakers, regulators, carriers, cargo interests, and insurance underwriters closely involved in dispute prevention and resolution arising from cybersecurity liabilities.

Keywords: seaworthiness, cybersecurity, liabilities, risks, maritime, transport

Procedia PDF Downloads 120
8964 Effect of Solvents in the Extraction and Stability of Anthocyanin from the Petals of Caesalpinia pulcherrima for Natural Dye-Sensitized Solar Cell

Authors: N. Prabavathy, R. Balasundaraprabhu, S. Shalini, Dhayalan Velauthapillai, S. Prasanna, N. Muthukumarasamy

Abstract:

The dye-sensitized solar cell (DSSC) has become a significant research area due to its fundamental and scientific importance in the area of energy conversion. Synthetic dyes as sensitizers in DSSCs are efficient and durable, but they are costly, toxic and have a tendency to degrade. Natural sensitizers contain plant pigments such as anthocyanin, carotenoid, flavonoid, and chlorophyll, which promote light absorption as well as the injection of charges into the conduction band of TiO2 through the sensitizer. However, the efficiency of natural dyes is not up to the mark, mainly due to the instability of pigments such as anthocyanin. The stability issues in vitro are mainly due to the effect of the solvents used for extraction of the anthocyanins and their respective pH. Taking this factor into consideration, in the present work, anthocyanins were extracted from the flower Caesalpinia pulcherrima (C. pulcherrima) with various solvents, and their respective stability and pH values are discussed. The use of citric acid as a solvent to extract anthocyanin has shown better stability than the other solvents. It also helps in enhancing the sensitization properties of anthocyanins with titanium dioxide (TiO2) nanorods. The IPCE spectra show higher photovoltaic performance for dye-sensitized TiO2 nanorods when citric acid is used as the solvent. The natural DSSC using citric acid as the solvent shows a higher efficiency compared to the other solvents. Hence, citric acid proves to be a safe solvent for natural DSSCs, boosting the photovoltaic performance while maintaining the stability of anthocyanins.

Keywords: Caesalpinia pulcherrima, citric acid, dye sensitized solar cells, TiO₂ nanorods

Procedia PDF Downloads 275
8963 Avoiding Gas Hydrate Problems in Qatar Oil and Gas Industry: Environmentally Friendly Solvents for Gas Hydrate Inhibition

Authors: Nabila Mohamed, Santiago Aparicio, Bahman Tohidi, Mert Atilhan

Abstract:

One of Qatar's biggest problems in processing its main natural resource, natural gas, is the frequently occurring blockage of pipelines caused by uncontrolled gas hydrate formation. Several million dollars are being spent at the process site to clear such blockages safely by using chemical inhibitors. We aim to establish a national database which addresses the physical conditions that promote Qatari natural gas to form gas hydrates in the pipelines. Moreover, we aim to design and test novel hydrate inhibitors that are suitable for Qatari natural gas and its processing facilities. From these perspectives, we aim to provide more effective and sustainable reservoir utilization and processing of Qatari natural gas. In this work, we present the initial findings of a QNRF-funded project, which deals with the natural gas hydrate formation characteristics of Qatari-type gas using both experimental (PVTx) and computational (molecular simulation) methods. We present data from two fully automated apparatuses: a gas hydrate autoclave and a rocking cell. Hydrate equilibrium curves, including growth/dissociation conditions, are reported for multi-component systems for several gas mixtures that represent Qatari-type natural gas, with and without the presence of well-known kinetic and thermodynamic hydrate inhibitors. Ionic liquids were designed and used for testing their inhibition performance, and their DFT and molecular modeling simulation results were also obtained and compared with the experimental results. Results showed significant performance of ionic liquids at concentrations of up to 0.5% by volume, with 2 to 4 °C of inhibition at high pressures.

Keywords: gas hydrates, natural gas, ionic liquids, inhibition, thermodynamic inhibitors, kinetic inhibitors

Procedia PDF Downloads 1300
8962 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection

Authors: Devadrita Dey Sarkar

Abstract:

Regardless of the many technologic advances in the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. A computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms is presented in this abstract, which employs features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to that of physicians. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammographs, radiologists’ accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for breast CAD may include the computerized detection of breast nodules as well as the computerized classification of benign and malignant nodules. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.
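
A minimal sketch of the pipeline outlined above (independent component analysis for feature extraction followed by a small neural network classifier) is shown below; the data are randomly generated stand-ins for ROS feature vectors, and the component counts and network size are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for ROS patches: 500 regions, 64 pixel features each,
# labels 0 = benign, 1 = suspicious. Real mammogram data would replace this.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)

ica = FastICA(n_components=10, random_state=0)   # ICA-based feature extraction step
X_ica = ica.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_ica, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```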

Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)

Procedia PDF Downloads 449
8961 Discriminant Analysis of Pacing Behavior on Mass Start Speed Skating

Authors: Feng Li, Qian Peng

Abstract:

The mass start speed skating (MSSS) is a new event introduced at the 2018 PyeongChang Winter Olympics and will be an official race at the 2022 Beijing Winter Olympics. Considering that the event rankings are based on points gained on laps, it is worthwhile to investigate the pacing behavior on each lap that directly influences the ranking of the race. The aim of this study was to examine pacing behavior and performance in MSSS with regard to skaters’ level (SL), competition stage (semi-final/final) (CS) and gender (G). All the men's and women's races in the World Cup and World Championships in the 2018-2019 and 2019-2020 seasons were analyzed. As a result, a total of 601 skaters from 36 races were observed. ANOVA for repeated measures was applied to compare the pacing behavior on each lap, and three-way ANOVA for repeated measures was used to identify the influence of SL, CS, and G on pacing behavior and total time spent. In general, the results showed that the pacing behavior from fast to slow was: cluster 1 (laps 4, 8, 12, 15, 16), cluster 2 (laps 5, 9, 13, 14), cluster 3 (laps 3, 6, 7, 10, 11), and cluster 4 (laps 1 and 2) (p=0.000). For CS, the total time spent in the final was less than in the semi-final (p=0.000). For SL, top-level skaters spent less total time than middle-level and low-level skaters (p≤0.002), while there was no significant difference between the middle-level and low-level (p=0.214). For G, the men’s skaters spent less total time than the women on all laps (p≤0.048). This study could help coaching staff better understand pacing behavior with regard to SL, CS, and G, providing a reference for improving pacing strategy and decision-making before and during the race.
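
The lap-by-lap comparison described above is a repeated-measures design, with each skater contributing one time per lap; the sketch below shows how such an analysis could be run in Python with simulated lap times (the data and effect sizes are invented, not the race results).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated long-format data: each skater (subject) has one lap time per lap.
rng = np.random.default_rng(1)
n_skaters, n_laps = 20, 16
df = pd.DataFrame({
    "skater": np.repeat(np.arange(n_skaters), n_laps),
    "lap": np.tile(np.arange(1, n_laps + 1), n_skaters),
})
df["lap_time"] = 32.0 - 0.2 * df["lap"] + rng.normal(0, 0.5, len(df))  # laps get faster

# Repeated-measures ANOVA with lap as the within-subject factor
res = AnovaRM(df, depvar="lap_time", subject="skater", within=["lap"]).fit()
print(res)
```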

Keywords: performance analysis, pacing strategy, winning strategy, winter Olympics

Procedia PDF Downloads 186
8960 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Earlier regarded as a support function in an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, which was unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. The presence of IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value from its day-to-day operations. Proper synchronization between the operational data system and the information technology system is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is also supported by real-time data and an evaluation of the data for a given oil production asset on an application tool, SAS. The reason for using SAS as the application for our analysis is that SAS provides an analytics-based framework to improve uptime, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, we can predict maintenance problems before they happen and determine root causes in order to update processes for future prevention.

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 339
8959 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with the rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries they are adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. Also, it is observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes.
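
For reference, the conventional per-bin Weight of Evidence that the Hybrid Model approximates from an ML score distribution can be computed as in the sketch below; the binning scheme, simulated scores, and labels are assumptions for illustration, not Dun and Bradstreet data.

```python
import numpy as np
import pandas as pd

# Simulated scores and good/bad labels standing in for a model output and observed outcomes.
rng = np.random.default_rng(42)
df = pd.DataFrame({"score": rng.uniform(0.0, 1.0, 5000)})
df["bad"] = rng.binomial(1, 0.4 * df["score"])        # higher score -> higher bad rate

# Conventional WoE per score bin: ln(% of goods in bin / % of bads in bin)
df["bin"] = pd.qcut(df["score"], q=5, labels=False)
g = df.groupby("bin")["bad"].agg(bads="sum", total="count")
g["goods"] = g["total"] - g["bads"]
g["woe"] = np.log((g["goods"] / g["goods"].sum()) / (g["bads"] / g["bads"].sum()))
print(g[["goods", "bads", "woe"]])
```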

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 120
8958 Verification of the Necessity of Maintenance Anesthesia with Isoflurane after Induction with Tiletamine-Zolazepam in Dogs Using the Dixon's up-and-down Method

Authors: Sonia Lachowska, Agnieszka Antonczyk, Joanna Tunikowska, Pawel Kucharski, Bartlomiej Liszka

Abstract:

Isoflurane is one of the most commonly used anaesthetic gases in veterinary medicine. Due to its numerous side effects, intravenous anaesthesia is increasingly used instead. The combination of tiletamine with zolazepam has proved to be safe and pharmacologically beneficial. An analgesic effect, fast induction time, effective myorelaxation, and smooth recovery are the main advantages of this combination of drugs. In the following study, the authors verified the necessity of isoflurane to maintain anaesthesia in dogs after the use of tiletamine-zolazepam for induction. Twelve dogs were selected according to the inclusion criteria of ASA (American Society of Anesthesiologists) grade I or II. Each dog received intramuscular premedication with medetomidine-butorphanol (10 μg/kg and 0.1 mg/kg, respectively). Fifteen minutes after premedication, preoxygenation lasting 5 minutes was started. Anaesthesia was induced with tiletamine-zolazepam at a dose of 5 mg/kg. The dogs were then intubated, and anaesthesia was maintained with isoflurane. Initially, the MAC (Minimum Alveolar Concentration) was set to 0.7 vol.%. After 15 minutes of equilibration, the MAC was determined using Dixon’s up-and-down method. Painful stimulation included compression of the paw pad, a phalanx and the groin area, and clamping a Backhaus towel clamp on the skin. Hemodynamic and ventilation parameters were measured and noted at 2-minute intervals. In this method, the positive or negative response to the noxious stimulus is assessed and then used to determine the concentration of isoflurane for the next patient. The response is only assessed once in each patient. The results show that isoflurane is not necessary to maintain anaesthesia after tiletamine-zolazepam induction. This is clinically important because the side effects resulting from the use of isoflurane are eliminated.
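
The up-and-down rule described above (raise the concentration for the next dog after a positive response, lower it after a negative one) can be illustrated with the small simulation below; the step size, response model, and MAC estimate are illustrative assumptions, with only the 0.7 vol.% starting level taken from the stated protocol.

```python
import numpy as np

rng = np.random.default_rng(7)
step = 0.1          # vol% adjustment between consecutive dogs (assumed step size)
conc = 0.7          # starting isoflurane concentration, as in the protocol
levels, responses = [], []

def moves(c):
    """Hypothetical probability that a dog responds to the noxious stimulus at concentration c."""
    p = 1.0 / (1.0 + np.exp((c - 0.5) / 0.1))
    return rng.random() < p

for _ in range(12):                   # 12 dogs, each assessed once
    r = moves(conc)
    levels.append(round(conc, 2))
    responses.append(r)
    conc += step if r else -step      # up after a positive response, down after a negative one
    conc = max(conc, 0.0)

# Crude MAC estimate: mean concentration at the response crossovers
cross = [c for c, prev, cur in zip(levels[1:], responses[:-1], responses[1:]) if prev != cur]
print("tested levels:", levels)
if cross:
    print(f"estimated MAC ~ {np.mean(cross):.2f} vol%")
```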

Keywords: anaesthesia, dog, isoflurane, Dixon's up-and-down method, tiletamine, zolazepam

Procedia PDF Downloads 168
8957 Bracing Applications for Improving the Earthquake Performance of Reinforced Concrete Structures

Authors: Diyar Yousif Ali

Abstract:

Braced frames, alongside other structural systems such as shear walls or moment-resisting frames, have been a valuable and effective technique for strengthening structures against seismic loads. Under wind or seismic excitation, the diagonal members act as truss web elements that carry tension or compression stresses. This study considers the effect of the bracing diagonal configuration on the base shear and displacement of the building. Two models were created, and nonlinear pushover analysis was implemented. Results show that bracing members enhance the lateral load performance of the Concentric Braced Frame (CBF) considerably. The purpose of this article is to study the nonlinear response of reinforced concrete structures which contain hollow-pipe steel braces as the major structural elements against earthquake loads. A five-storey reinforced concrete structure was selected in this study; two different reinforced concrete frames were considered. The first system was an un-braced frame, while the second was a braced frame with diagonal bracing. Analytical models of the bare frame and the braced frame were built in SAP 2000. The performance of all structures was evaluated using nonlinear static analyses. From these analyses, the base shear and displacements were compared. The results are plotted in diagrams and discussed extensively, and they show that the braced frame is capable of carrying more lateral load and has higher stiffness and lower roof displacement in comparison with the bare frame.

Keywords: reinforced concrete structures, pushover analysis, base shear, steel bracing

Procedia PDF Downloads 78
8956 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine EPB TBM: A Case Study L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops and maintaining the optimum stock of spare parts during the evolution of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years makes it possible to analyze the key and most critical machinery parameters in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using Specific Energy versus data science applied to parameters such as torque, penetration, and contact force, among others, is developed to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed during the excavation, in order to determine their effectiveness regarding predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for the analysis of cutting head wear, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
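
Specific energy as referred to above is commonly taken, following Teale's rotary-excavation definition, as the sum of a thrust term and a torque term per unit of excavated volume; whether the project uses exactly this expression is an assumption, and the operating values below are illustrative, not machine data.

```python
import math

def specific_energy(thrust_kn, torque_knm, rpm, advance_m_per_min, face_area_m2):
    """Teale-type specific energy in kPa (equivalently kJ/m^3): SE = F/A + 2*pi*N*T/(A*u)."""
    u = advance_m_per_min / 60.0        # advance rate, m/s
    n = rpm / 60.0                      # cutterhead speed, rev/s
    return thrust_kn / face_area_m2 + (2 * math.pi * n * torque_knm) / (face_area_m2 * u)

# Illustrative EPB TBM operating point (assumed values, not project data)
se = specific_energy(thrust_kn=15000, torque_knm=3500, rpm=2.0,
                     advance_m_per_min=0.04, face_area_m2=63.6)
print(f"specific energy ~ {se / 1000:.1f} MJ/m^3")
```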

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 34
8955 A Multi-Release Software Reliability Growth Model Incorporating Imperfect Debugging and Change-Point under the Simulated Testing Environment and Software Release Time

Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar

Abstract:

The testing process of software during the development period is a crucial step, as it makes the software more efficient and dependable. To estimate software reliability through the mean value function, many software reliability growth models (SRGMs) have been developed under the assumption that operating and testing environments are the same. In practice, this is not true, because when the software operates in the natural field environment, its reliability differs. This article discusses an SRGM comprising a change-point and imperfect debugging in a simulated testing environment and then extends it in a multi-release direction. Initially, the software is released to the market with few features; according to the market's demand, the software company upgrades the current version by adding new features as time passes. Therefore, we have proposed a generalized multi-release SRGM in which the change-point and imperfect debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept has been adopted to determine the change point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets. The results demonstrate that the proposed model fits the datasets better. We have also discussed the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
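
To make the mean value function and change-point idea concrete, the sketch below evaluates a Goel-Okumoto-type NHPP mean value function whose fault detection rate switches at a change-point; the functional form and parameter values are illustrative assumptions, not the model actually fitted to the two datasets.

```python
import numpy as np

def mean_value(t, a=100.0, b1=0.05, b2=0.12, tau=20.0):
    """Expected cumulative number of detected faults by time t for a
    Goel-Okumoto-type NHPP whose detection rate switches from b1 to b2 at tau."""
    t = np.asarray(t, dtype=float)
    before = a * (1.0 - np.exp(-b1 * t))
    after = a * (1.0 - np.exp(-b1 * tau - b2 * (t - tau)))   # continuous at t = tau
    return np.where(t <= tau, before, after)

for t in (10, 20, 30, 50):
    print(f"m({t}) = {float(mean_value(t)):.1f} expected faults")
```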

Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors

Procedia PDF Downloads 62
8954 Physical and Physiological Characteristics of Young Soccer Players in Republic of Macedonia

Authors: Sanja Manchevska, Vaska Antevska, Lidija Todorovska, Beti Dejanova, Sunchica Petrovska, Ivanka Karagjozova, Elizabeta Sivevska, Jasmina Pluncevic Gligoroska

Abstract:

Introduction: A number of positive effects on the players’ physical status, including the body mass components, are attributed to the training process. As young soccer players grow up, qualitative and quantitative changes appear and contribute to better performance. Players’ anthropometric and physiological characteristics are recognized as important determinants of performance. Material: A sample of 52 soccer players with an age span from 9 to 14 years was divided into two groups differentiated by age. The younger group consisted of 25 boys under 11 years (mean age 10.2), and the second group consisted of 27 boys with a mean age of 12.64. Method: The set of basic anthropometric parameters analyzed comprised height, weight, BMI (Body Mass Index) and body mass components. Maximal oxygen uptake was tested using the Bruce treadmill protocol. Results: The group aged under 11 years showed the following anthropometric and physiological features: average height = 143.39 cm, average weight = 44.27 kg, BMI = 18.77, Err = 5.04, Hb = 13.78 g/l, VO2 = 37.72 mlO2/kg. For the participants aged 12 to 14 years, the average values of the analyzed parameters were: height = 163.7 cm, weight = 56.3 kg, BMI = 19.6, VO2 = 39.52 ml/kg, Err = 5.01, Hb = 14.3 g/l. Conclusion: The physiological parameters (maximal oxygen uptake, erythrocytes and Hb) were insignificantly higher in the older group compared to the younger group. There were no statistically significant differences in the analyzed anthropometric parameters between the two groups except for the basic measurements (height and weight).

Keywords: body composition, young soccer players, BMI, physical status

Procedia PDF Downloads 390
8953 Improving the Safety Performance of Workers by Assessing the Impact of Safety Culture on Workers’ Safety Behaviour in Nigeria Oil and Gas Industry: A Pilot Study in the Niger Delta Region

Authors: Efua Ehiaguina, Haruna Moda

Abstract:

Interest in the development of an appropriate safety culture in the oil and gas industry has taken centre stage among stakeholders in the industry. Human behaviour has been identified as a major contributor to occupational accidents, where abnormal activities associated with safety management are taken as normal behaviour. Poor safety culture is one of the major factors that influence employees’ safety behaviour at work, which may consequently result in injuries and accidents, and strengthening such a culture can improve workers’ safety performance. The Nigerian oil and gas industry has contributed to the growth and development of the country in diverse ways. However, in terms of the safety and health of workers, this industry is a dangerous place to work, as workers are often exposed to occupational safety and health hazards. To ascertain the state of employees’ safety culture and how it impacts health and safety compliance within the local industry, an online safety culture survey targeting frontline workers was administered, covering major subjects that include: perception of management commitment and style of leadership; safety communication methods and their resultant impact on employees’ behaviour; employee safety commitment; and training needs. The preliminary results revealed that 54% of the participants feel that there is a lack of motivation from management to work safely. In addition, 55% of participants revealed that employers place more emphasis on work delivery than on employees’ safety on the installation. It is expected that the study outcome will provide measures aimed at strengthening and sustaining safety culture in the Nigerian oil and gas industry.

Keywords: oil and gas safety, safety behaviour, safety culture, safety compliance

Procedia PDF Downloads 127
8952 Public Squares and Their Potential for Social Interactions: A Case Study of Historical Public Squares in Tehran

Authors: Asma Mehan

Abstract:

Under the thrust of technological changes, population growth and vehicular traffic, Iranian historical squares have lost their significance, and they are no longer the main social nodes of society. This research focuses on how historical public squares can inspire designers to enhance social interactions among citizens in the Iranian urban context. Moreover, the recent master plan of Tehran demonstrates a lack of public spaces designed for the purpose of people’s social gatherings. To fill this gap, the current situation of 7 selected primary historical public squares in Tehran, including Sabze Meydan, Arg, Topkhaneh, Baherstan, Mokhber-al-dole, Rah Ahan and Hassan Abad, is first compared. Then, the elements influencing social interactions in the public squares, such as subjective factors (human relationships and memories) and objective factors (natural and built environment), are investigated. In conclusion, some strategies are proposed for improving social interactions in historical public squares, such as: holding cultural, national, athletic and religious events; defining different and new functions in the squares’ surroundings; increasing pedestrian routes; reviving the collective memory; demonstrating the historical importance of the square; eliminating visual obstacles across the square; organizing the natural elements of the square; and providing appropriate pavement for social activities. Finally, it is argued that the combination of all the influencing factors, namely human interactions, natural elements and built environment criteria, will enhance the historical public squares’ potential for social interaction.

Keywords: historical square, Iranian public square, social interaction, Tehran

Procedia PDF Downloads 385
8951 The Feasibility of Anaerobic Digestion at 45°C

Authors: Nuruol S. Mohd, Safia Ahmed, Rumana Riffat, Baoqiang Li

Abstract:

Anaerobic digestion at mesophilic and thermophilic temperatures has been widely studied and evaluated by numerous researchers. Little extensive research has been conducted on anaerobic digestion in the intermediate zone of 45°C, mainly due to the notion that limited microbial activity occurs within this zone. The objectives of this research were to evaluate the performance and the capability of anaerobic digestion at 45°C in producing Class A biosolids, in comparison to mesophilic and thermophilic anaerobic digestion systems operated at 35°C and 55°C, respectively. In addition, the possible inhibition factors affecting the performance of the digestion system at this temperature were investigated. The 45°C anaerobic digestion systems were not able to achieve methane yield and effluent quality comparable to the mesophilic system, even though they produced biogas with about 62-67% methane. The 45°C digesters suffered from high acetate accumulation, but sufficient buffering capacity was observed, as the pH, alkalinity and volatile fatty acid (VFA)-to-alkalinity ratio were within recommended values. The acetate accumulation observed in the 45°C systems was presumably due to the high temperature, which contributed to a high hydrolysis rate. Consequently, a large amount of toxic salts was produced that combined with the substrate, making it not readily available for consumption by methanogens. Acetate accumulation, even though it contributed to a 52-71% reduction in the acetate degradation process, could not be considered completely inhibitory. Additionally, at 45°C, no ammonia inhibition was observed, and the digesters were able to achieve a volatile solids (VS) reduction of 47.94 ± 4.17%. The pathogen counts were less than 1,000 MPN/g total solids, thus producing Class A biosolids.

Keywords: 45°C anaerobic digestion, acetate accumulation, class A biosolids, salt toxicity

Procedia PDF Downloads 292
8950 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of implementing semi-active devices relative to passive viscous damping in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time-history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is considered alongside passive linear viscous damping and an optimal non-causal semi-active control strategy. The control strategy requires optimization, and Euler-Lagrange equations are solved numerically during this procedure. The optimal closed-loop performance is evaluated for an idealized controllable dashpot. A simplified single-degree-of-freedom model of an isolated bridge is used as a numerical example. Two bridge cases are investigated: a bridge deck without the isolation bearing and a bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity and displacement response transmissibility ratios Ta(w), Tv(w), and Td(w) are defined. To fully investigate the behavior of the structure subjected to the sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under the effect of external excitation, the bridge deck with semi-active control showed better structural performance than the passive bridge deck case.
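
For orientation, the classical displacement transmissibility of a base-excited single-degree-of-freedom oscillator can be evaluated at several damping levels as sketched below; the paper's Ta(w), Tv(w), and Td(w) definitions and the semi-active case may differ, so this is only the passive linear reference under assumed parameters.

```python
import numpy as np

def displacement_transmissibility(r, zeta):
    """|X/Y| of a base-excited SDOF oscillator, with r = w/wn and damping ratio zeta."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

r = np.linspace(0.2, 3.0, 8)                 # frequency ratios around resonance
for zeta in (0.05, 0.2, 0.5):                # different damping levels, as in the study
    print(f"zeta={zeta}:", np.round(displacement_transmissibility(r, zeta), 2))
```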

Keywords: bridge structures, passive control, seismic, semi-active control, viscous damping

Procedia PDF Downloads 230
8949 Cognitive Functioning and Cortisol Suppression in Major Depression in a Long-Term Perspective

Authors: Pia Berner Hansson, Robert Murison, Anders Lund, Åsa Hammar

Abstract:

Major Depressive Disorder (MDD) is often associated with high levels of stress and disturbances in the Hypothalamic-Pituitary-Adrenal (HPA) system, yielding high levels of cortisol, in addition to cognitive dysfunction. Previous studies in this patient group have shown a relationship between cortisol profile and cognitive functioning in the acute phase of MDD, and that the patients had significantly less suppression after dexamethasone administration. However, few studies have investigated this relationship over time and in phases of symptom reduction. The aim of the present study was to examine the relationship between cortisol levels after the Dexamethasone Suppression Test (DST) and cognitive function in a long-term perspective in MDD patients. Patients meeting the DSM-IV criteria for MDD were included in the study and tested in a phase of symptom reduction. A control group was included. Cortisol was measured in saliva collected with Salivette sampling devices. Saliva samples were collected 4 times during a 24-hour period over two consecutive days: at awakening, after 45 minutes, after 7 hours and at 11 pm. Dexamethasone (1.0 mg) was given on Day 1 at 11 pm. The neuropsychological test battery consisted of standardized tests measuring memory and Executive Functioning (EF). Cortisol levels did not differ significantly between patients and controls on Day 1 or Day 2. Both groups showed significant suppression after dexamethasone. There were no correlations between cortisol levels or suppression after dexamethasone and the cognitive measures. The results indicate that HPA-axis functioning normalizes in phases of symptom reduction in MDD patients and that there is no relation between cortisol profile and cognitive functioning in memory or EF.

Keywords: depression, MDD, cortisol, suppression, cognitive functioning

Procedia PDF Downloads 317
8948 The Nexus between Manpower Training and Corporate Compliance

Authors: Timothy Wale Olaosebikan

Abstract:

The most active resource in any organization is its manpower. Every other resource remains inactive unless there is competent manpower to handle it. Manpower training is needed to enhance productivity and the overall performance of organizations. This is due to the recognition of the important role of manpower training in the attainment of organizational goals. Corporate compliance conjures visions of an incomprehensible matrix of laws and regulations that defy logic and control by even the most seasoned manpower training professionals. Similarly, corporate compliance can be viewed as one of the most significant problems faced in the manpower training process for any organization and therefore commands relevant attention and comprehension. Consequently, this study investigated the nexus between manpower training and corporate compliance. Data for the study were collected through a questionnaire with a sample size of 265 drawn by stratified random sampling. The data were analyzed using descriptive and inferential statistics. The findings of the study show that about 75% of the respondents agree that there is a strong relationship between manpower training and corporate compliance, which brings out the organizational benefit of any training process. The findings further show that most organisations do not totally comply with the rules guiding the manpower training process, thereby making the process less effective on organizational performance, which may affect overall profitability. The study concludes that the formulation of, and compliance with, adequate rules and guidelines for manpower training will produce effective results for both employees and the organization at large. The study recommends that leaders of organizations, industries, and institutions must ensure total compliance by both the employees and the organization with manpower training rules. Organizations and stakeholders should also ensure that strict policies on corporate compliance with manpower training form the heart of their cardinal mission.

Keywords: corporate compliance, manpower training, nexus, rules and guidelines

Procedia PDF Downloads 125
8947 Improving Vocabulary and Listening Comprehension via Watching French Films without Subtitles: Positive Results

Authors: Yelena Mazour-Matusevich, Jean-Robert Ancheta

Abstract:

This study is based on more than fifteen years of experience of teaching a foreign language, in my case French, to English-speaking students. It represents qualitative research on foreign language learners’ reactions and their gains in vocabulary and listening comprehension through repeatedly viewing foreign feature films with the original soundtrack but without English subtitles. The initial idea emerged upon realizing that the first challenge faced by my students when they find themselves in a francophone environment has been their lack of listening comprehension. Their inability to understand colloquial speech affects not only their academic performance, but their psychological health as well. To remedy this problem, I have designed and applied for many years my own teaching method based on one particular French film, exceptionally suited, for the reasons described in detail in the paper, to intermediate-advanced level foreign language learners. This project, conducted together with my undergraduate assistant and mentee J-R Ancheta, aims at showing how paralinguistic features, such as characters’ facial expressions, settings, music, historical background, images provided before the actual viewing, etc., offer crucial support and enhance students’ listening comprehension. The study, based on students’ interviews, also offers special pedagogical techniques, such as ‘anticipatory’ vocabulary lists and exercises, drills, quizzes and composition topics, that have proven to boost students’ performance. For this study, only the listening proficiency and vocabulary gains of the interviewed participants were assessed.

Keywords: comprehension, film, listening, subtitles, vocabulary

Procedia PDF Downloads 606
8946 Response of Local Cowpea to Intra Row Spacing and Weeding Regimes in Yobe State, Nigeria

Authors: A. G. Gashua, T. T. Bello, I. Alhassan, K. K. Gwiokura

Abstract:

Weeds are known to interfere seriously with crop growth, thereby affecting the productivity and quality of crops. Crops are also known to compete for natural growth resources if they are not adequately spaced, which also affects the performance of the growing crop. Farmers grow cowpea in mixtures with cereals, and this is known to affect its yield. For this reason, a field experiment was conducted at the Yobe State College of Agriculture Gujba, Damaturu station, in the 2014 and 2015 rainy seasons to determine the appropriate intra-row spacing and weeding regime for optimum growth and yield of cowpea (Vigna unguiculata L.) in pure stand in a Sudan Savanna ecology. The treatments consisted of three levels of spacing within rows (20 cm, 30 cm and 40 cm) and four weeding regimes (none; once at 3 weeks after sowing (WAS); twice at 3 and 6 WAS; thrice at 3, 6 and 9 WAS), arranged in a Randomized Complete Block Design (RCBD) and replicated three times. The variety used was the local cowpea variety (white, early and spreading) commonly grown by farmers. The growth and yield data were collected and subjected to analysis of variance using SAS software, and the significant means were ranked by the Student-Newman-Keuls (SNK) test. The findings of this study revealed better crop performance in 2015 than in 2014 despite poor soil conditions. Intra-row spacing significantly influenced vegetative growth, especially the number of main branches, number of leaves and canopy spread at 6 WAS and 9 WAS, with the highest values obtained at the wider spacing (40 cm). The values obtained in 2015 doubled those obtained in 2014 in most cases. Spacing also significantly affected the number of pods in 2015, seed weight in both years and grain yield in 2014, with the highest values obtained when the crop was spaced at 30-40 cm. Similarly, weeding regime significantly influenced almost all the growth attributes of cowpea, with higher values obtained where cowpea was weeded three times at 3-week intervals, though statistically similar results were obtained even where cowpea was weeded twice. Weeding also affected the entire yield and yield components in 2015, with the highest values obtained with increased weeding. Based on these findings, it is recommended that spreading cowpea varieties should be grown at 40 cm (or wider) spacing within rows and weeded twice at three-week intervals for better crop performance in related ecologies.

Keywords: intra-row spacing, local cowpea, Nigeria, weeding

Procedia PDF Downloads 201
8945 Study on Mitigation Measures of Gumti Hydro Power Plant Using Analytic Hierarchy Process and Concordance Analysis Techniques

Authors: K. Majumdar, S. Datta

Abstract:

Electricity is recognized as fundamental to industrialization and to improving the quality of life of the people. Harnessing the immense untapped hydropower potential in the Tripura region opens avenues for growth and provides an opportunity to improve the well-being of the people of the region, while making a substantial contribution to the national economy. The Gumti hydro power plant generates power to mitigate the power crisis in Tripura, India. The first unit of the hydro power plant (5 MW) was commissioned in June 1976, and another two units of 5 MW each were commissioned simultaneously. However, out of the 15 MW capacity, at present only 8-9 MW of power is produced from the Gumti hydro power plant during the rainy season, and during the lean season production falls to 0.5 MW due to a shortage of water. It is therefore essential to implement mitigation measures so that further deterioration can be prevented and the original performance can be restored. The decision-making capability of the Analytic Hierarchy Process (AHP) and Concordance Analysis Techniques (CAT) is utilized to identify the better decision or solution to the present problem. Related attributes were identified by surveying experts and the available reports and literature. Similar criteria were removed, and ultimately seven relevant ones were identified. All the attributes are compared with each other and rated according to their importance relative to one another with the help of a pairwise comparison matrix. In the present investigation, different mitigation measures are identified and compared to find the most suitable alternative that can resolve the present uncertainties surrounding the existence of the Gumti Hydro Power Plant.
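
The AHP step described above derives priority weights from the pairwise comparison matrix and checks its consistency; the sketch below shows the standard eigenvector calculation for a small illustrative 3x3 matrix (the study itself compares seven criteria, and the judgments shown are invented).

```python
import numpy as np

# Illustrative 3x3 pairwise comparison matrix (Saaty 1-9 scale judgments, invented).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # priority weights of the criteria

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)            # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty's random index
print(f"weights = {np.round(w, 3)}, consistency ratio = {ci / ri:.3f}")  # CR < 0.10 acceptable
```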

Keywords: concordance analysis techniques, analytic hierarchy process, hydro power

Procedia PDF Downloads 338
8944 Titanium Nitride @ Nitrogen-doped Carbon Nanocage as High-performance Cathodes for Aqueous Zn-ion Hybrid Supercapacitors

Authors: Ye Ling, Ruan Haihui

Abstract:

Aqueous Zn-ion hybrid supercapacitors (AZHSCs) are a new type of electrochemical energy storage device that has received considerable attention. They integrate the advantages of high-energy Zn-ion batteries and high-power supercapacitors to meet the demand for low cost, long-term durability, and high safety. Nevertheless, the finite ion adsorption/desorption capacity of carbon electrodes gravely limits their energy densities. This work describes titanium nitride@nitrogen-doped carbon nanocage (TiN@NCNC) composite cathodes for AZHSCs that achieve a greatly improved energy density; the composites can be synthesized by a facile route based on the calcination of a mixture of tetrabutyl titanate and zeolitic imidazolate framework-8 in an argon atmosphere. The resulting composites feature ultra-fine TiN particles dispersed uniformly on the NCNC surfaces, enhancing the Zn²⁺ storage capability. Using TiN@NCNC cathodes, the AZHSCs can operate stably with a high energy density of 154 Wh kg⁻¹ at a specific power of 270 W kg⁻¹ and achieve a remarkable capacity retention of 88.9% after 10⁴ cycles at 5 A g⁻¹. At an extreme specific power of 8.7 kW kg⁻¹, the AZHSCs can retain an energy density of 97.2 Wh kg⁻¹. With these results, we stress that the TiN@NCNC cathodes render high-performance AZHSCs, and the facile one-pot method can easily be scaled up, which makes AZHSCs a new energy-storage option for managing intermittent renewable energy sources.

Keywords: Zn-ion hybrid supercapacitors, ion absorption/desorption reactions, titanium nitride, zeolitic imidazolate framework-8

Procedia PDF Downloads 23
8943 Partnership Oriented Innovation Alliance Strategy Based on Market Feedback

Authors: Victor Romanov, Daria Efimenko

Abstract:

The focus on innovation in the modern economy is the main factor in business survival in a competitive environment. Innovations are based on the search for and use of knowledge in a global context. This leads to building a business as a system with feedback, promptly restructuring production and innovation implementation in response to market demands. In the modern knowledge economy, because of the speed of technical progress, product lifecycles have become much shorter, which imposes more stringent requirements for innovation implementation on enterprises; therefore, the possibility for an enterprise to receive extra income is decreasing. This circumstance imposes additional requirements for the replacement of obsolete products and the prompt release of innovative products to the market. The development of information technologies has meant that only under conditions of partnership and knowledge sharing with partners is it possible to update products quickly. Many companies pay attention to renewing innovation through the search for new partners, but the task of finding new partners presents some difficulties. The search for a suitable partner includes several stages, such as determining the innovation-critical moment, initiating the search, identifying search criteria, and justifying and deciding on the choice of a partner. No less important is the question of how to manage an innovative product in response to a changing market. The article considers the problems of information support for the search for the source of innovation and partnership in order to decrease the time for implementation of novel products.

Keywords: partnership, novelty, market feedback, alliance

Procedia PDF Downloads 182