Search results for: threshold estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2559

909 Engineering Seismological Studies in and around Zagazig City, Sharkia, Egypt

Authors: M. El-Eraki, A. A. Mohamed, A. A. El-Kenawy, M. S. Toni, S. I. Mustafa

Abstract:

The aim of this paper is to study ground vibrations using the Nakamura technique to evaluate the relation between the ground conditions and the earthquake characteristics. Microtremor measurements were carried out at 55 sites in and around Zagazig city. The signals were processed using the horizontal-to-vertical spectral ratio (HVSR) technique to estimate the fundamental frequencies of the soil deposits and their corresponding H/V amplitudes. Seismic measurements were acquired at nine sites for recording the surface waves. The recorded waveforms were processed using the multi-channel analysis of surface waves (MASW) method to infer the shear wave velocity profile. The obtained fundamental frequencies were found to range from 0.7 to 1.7 Hz, and the maximum H/V amplitude reached 6.4. These results, together with the average shear wave velocity in the surface layers, were used to estimate the thickness of the uppermost soft cover layers (depth to bedrock). The sediment thickness generally increases at the northeastern and southwestern parts of the area, which is in good agreement with the local geological structure. The results of this work show the zones of higher potential damage in the event of an earthquake in the study area.
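A common way to turn the measured fundamental frequency and average shear wave velocity into a depth-to-bedrock estimate is the quarter-wavelength relation h = Vs/(4·f0); whether the authors used exactly this formula is an assumption. A minimal sketch:

```python
def depth_to_bedrock(vs_avg, f0):
    """Thickness h of the soft cover from the quarter-wavelength
    relation h = Vs / (4 * f0), where Vs is the average shear wave
    velocity (m/s) of the surface layer and f0 the fundamental
    frequency (Hz) obtained from the H/V spectral ratio."""
    return vs_avg / (4.0 * f0)

# Illustrative values only (not taken from the study): at the lower
# end of the reported frequency range, f0 = 0.7 Hz, and an assumed
# Vs = 200 m/s, the soft cover is roughly 71 m thick.
print(depth_to_bedrock(200.0, 0.7))
```

Higher fundamental frequencies map to a shallower cover, which is why the frequency map doubles as a relative bedrock-depth map.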

Keywords: ambient vibrations, fundamental frequency, surface waves, Zagazig

Procedia PDF Downloads 264
908 Evaluation of Deteriorated Fired Clay Bricks Based on Schmidt Hammer Tests

Authors: Laurent Debailleux

Abstract:

Although past research has focused on the parameters influencing the vulnerability of brick and its decay, in practice ancient fired clay bricks are usually replaced without any particular assessment of their characteristics. This paper presents results of non-destructive Schmidt hammer tests performed on ancient fired clay bricks sampled from historic masonry. The samples under study were manufactured between the 18th and 20th century and came from facades and interior walls. Tests were performed on three distinct brick surfaces, depending on their position within the masonry unit. Schmidt hammer tests were carried out in order to measure the mean rebound value (Rn), which reflects the resistance of the surface to successive impacts of the hammer plunger tip. Results indicate that rebound values increased with successive impacts at the same point. Therefore, restricting the mean Schmidt hammer rebound value (Rn) to the first impact on a surface avoids overestimating the compressive strength. In addition, the results illustrate that this technique is sensitive enough to measure weathering differences, even between different surfaces of a particular sample. Finally, the paper also highlights the relevance of considering the position of the brick within the masonry when conducting assessments of the material’s strength.
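Since rebound values rise with successive impacts at the same point, restricting the mean Rn to first impacts is straightforward to express. A minimal sketch, assuming readings are grouped per test point (the readings below are hypothetical):

```python
def mean_first_impact_rn(readings_per_point):
    """Mean rebound number Rn using only the first impact at each
    test point; later impacts at the same point read higher because
    the surface compacts, which would inflate the strength estimate."""
    first_impacts = [readings[0] for readings in readings_per_point]
    return sum(first_impacts) / len(first_impacts)

# Hypothetical readings: three test points, successive impacts rising.
points = [[32, 35, 37], [28, 31, 33], [30, 33, 36]]
print(mean_first_impact_rn(points))  # 30.0
```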

Keywords: brick, non-destructive tests, rebound number, Schmidt hammer, weathering grade

Procedia PDF Downloads 142
907 Effects of Different Thermal Processing Routes and Their Parameters on the Formation of Voids in PA6 Bonded Aluminum Joints

Authors: Muhammad Irfan, Guillermo Requena, Jan Haubrich

Abstract:

Adhesively bonded aluminum joints are common in the automotive and aircraft industries and are one of the enablers of lightweight construction to minimize carbon emissions during transportation. This study is focused on the effects of two thermal processing routes, i.e., direct and induction heating, and their parameters on void formation in PA6 bonded aluminum EN AW-6082 joints. The joints were characterized microanalytically as well as by lap shear experiments. The aging resistance of the joints was studied by accelerated aging tests in 80 °C hot water. It was found that processing single lap joints by direct heating in a convection oven causes the formation of a large number of voids in the bond line. The formation of voids in the convection oven was due to the longer processing times and was independent of any surface pretreatment of the metal as well as of the processing temperature. However, when processing at low temperatures, a large number of small voids were observed under the optical microscope; at higher temperatures, the voids were larger in size but fewer in number. An induction heating process was developed, which not only successfully reduced or eliminated the voids in PA6 bonded joints but also significantly reduced the processing times for joining. Consistent with the trend in direct heating, longer processing times and higher temperatures in induction heating also led to an increased formation of voids in the bond line. Subsequent single lap shear tests revealed that the increasing void contents led to a 21% reduction in lap shear strength (i.e., from ~47 MPa for induction heating to ~37 MPa for direct heating). Also, there was a 17% reduction in lap shear strength when the consolidation temperature was raised from 220 °C to 300 °C during induction heating. However, below a certain threshold of void content, there was no observable effect on the lap shear strength or on the hydrothermal aging resistance of the joints consolidated by the induction heating process.

Keywords: adhesive, aluminium, convection oven, induction heating, mechanical properties, nylon6 (PA6), pretreatment, void

Procedia PDF Downloads 99
906 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering

Authors: Emiel Caron

Abstract:

Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about the scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g. to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e. they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e. the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that score above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average a 99% precision and a 95% recall. The method is therefore accurate but careful, i.e. it weighs precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g. in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as Web of Science or Scopus.
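The clustering step described above (pairs scoring above a threshold joined into connected components by single linkage) can be sketched with a union-find structure; the record IDs, scores, and threshold below are hypothetical, and the rule-based scoring itself is not reproduced:

```python
def connected_components(n_records, scored_pairs, threshold):
    """Single-linkage clustering: records joined by any pair whose
    rule-based score reaches the threshold end up in one cluster
    (a connected component), implemented with union-find."""
    parent = list(range(n_records))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b, score in scored_pairs:
        if score >= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for i in range(n_records):
        clusters.setdefault(find(i), []).append(i)
    return sorted(sorted(c) for c in clusters.values())

# Hypothetical similarity scores between five reference records:
pairs = [(0, 1, 0.92), (1, 2, 0.88), (3, 4, 0.95), (0, 3, 0.40)]
print(connected_components(5, pairs, threshold=0.8))
# [[0, 1, 2], [3, 4]]
```

Records 0 and 2 are never compared directly, yet land in the same cluster through record 1; this transitivity is exactly what single linkage buys.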

Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics

Procedia PDF Downloads 172
905 Antioxidant Effects of Withania Somnifera (Ashwagandha) on Brain

Authors: Manju Lata Sharma

Abstract:

Damage to cells caused by free radicals is believed to play a central role in the ageing process and in disease progression. Withania somnifera is widely used in ayurvedic medicine and is one of the ingredients in many formulations intended to increase energy, improve overall health and longevity, and prevent disease. Withania somnifera possesses antioxidative properties; its antioxidant activity is attributed to an equimolar mixture of the active principles sitoindosides VII-X and withaferin A. The antioxidant effect of Withania somnifera extract was investigated on lipid peroxidation (LPO), superoxide dismutase (SOD) and catalase (CAT) activity in mice. Aim: To study the antioxidant activity of an extract of Withania somnifera leaf in a mouse model of chronic stress. Healthy Swiss albino mice (3-4 months old) selected from an inbred colony were divided into 6 groups. Biochemical estimation revealed that stress induced significant changes in SOD, LPO, CAT and GPx. These stress-induced perturbations were attenuated by Withania somnifera (50 and 100 mg/kg body weight). Result: Withania somnifera tended to normalize the augmented SOD and LPO activities and enhanced the activities of CAT and GPx. The results indicate that treatment with an alcoholic extract of Withania somnifera produced a significant decrease in LPO and an increase in both SOD and CAT in the mouse brain. This indicates that Withania somnifera extract possesses free radical scavenging activity.

Keywords: Withania somnifera, antioxidant, lipid peroxidation, brain

Procedia PDF Downloads 347
904 Urban Areas Management in Developing Countries: Analysis of the Urban Areas Crossed with Risk of Storm Water Drains, Aswan-Egypt

Authors: Omar Hamdy, Schichen Zhao, Hussein Abd El-Atty, Ayman Ragab, Muhammad Salem

Abstract:

One of the riskiest areas in Aswan is Abouelreesh, which suffers from flood disasters, as heavy deluges inundate urban areas causing considerable damage to buildings and infrastructure. Moreover, the main problem is urban sprawl towards this risky area. This paper aims to identify the urban areas located in the zones at risk of flash floods. Analyzing this phenomenon needs a lot of data to ensure satisfactory results; however, in this case the official data and field data were limited, and therefore free sources of satellite data were used. This paper used ArcGIS tools to obtain the storm water drain network by analyzing DEM files. Additionally, historical imagery in Google Earth was studied to determine the age of each building. The last step was to overlay the urban area layer and the storm water drain layer to identify the vulnerable areas. The results of this study would be helpful to urban planners and government officials for estimating disaster risk and developing preliminary plans to recover the risky area, especially urban areas located in torrent paths.

Keywords: risk area, DEM, storm water drains, GIS

Procedia PDF Downloads 434
903 Mathematical Model for Flow and Sediment Yield Estimation on Tel River Basin, India

Authors: Santosh Kumar Biswal, Ramakar Jha

Abstract:

Soil erosion is a slow and continuous process and one of the prominent problems across the world, leading to many serious issues such as loss of soil fertility, loss of soil structure, poor internal drainage and sedimentation deposits. In this paper, remote sensing and GIS based methods have been applied for the determination of soil erosion and sediment yield. The Tel River basin, the second largest tributary of the river Mahanadi, lying between latitudes 19° 15' 32.4"N and 20° 45' 0"N and longitudes 82° 3' 36"E and 84° 18' 18"E, was chosen for the present study. The catchment was discretized into approximately homogeneous sub-areas (grid cells) to overcome the catchment heterogeneity. The gross soil erosion in each cell was computed using the Universal Soil Loss Equation (USLE). The various parameters of the USLE were determined as functions of land topography, soil texture, land use/land cover, rainfall erosivity, and crop management and practice in the watershed. The concept of transport-limited accumulation was formulated and the transport capacity maps were generated. The gross soil erosion was routed to the catchment outlet. This study can help in recognizing the critical erosion-prone areas of the study basin so that suitable control measures can be implemented.
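The per-cell gross erosion computation follows the USLE product A = R·K·LS·C·P. A minimal sketch with illustrative factor values (not taken from the study):

```python
def usle_soil_loss(r, k, ls, c, p):
    """Gross soil loss A (t/ha/yr) from the Universal Soil Loss
    Equation: A = R * K * LS * C * P, where R is rainfall erosivity,
    K soil erodibility, LS the slope length-steepness factor, C the
    cover-management factor and P the support-practice factor."""
    return r * k * ls * c * p

# Illustrative values for one grid cell (conventional USLE units);
# in the study these factors come from rainfall, soil, DEM and
# land use/land cover layers respectively.
print(usle_soil_loss(r=550.0, k=0.30, ls=1.2, c=0.35, p=1.0))
```

Summing the routed per-cell losses, capped by the transport capacity along the flow path, gives the sediment yield at the outlet.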

Keywords: Universal Soil Loss Equation (USLE), GIS, land use, sediment yield

Procedia PDF Downloads 289
902 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
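The "precision within +/- 10 runs" figure above is the share of predictions falling within a fixed threshold of the actual total. A minimal sketch of that metric (the scores below are hypothetical, not from the study):

```python
def within_threshold_accuracy(predicted, actual, threshold=10):
    """Fraction of score predictions that land within +/- threshold
    runs of the actual team total."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= threshold)
    return hits / len(predicted)

# Hypothetical predicted vs actual innings totals:
pred = [165, 172, 148, 190, 154]
act = [170, 160, 150, 198, 141]
print(within_threshold_accuracy(pred, act))  # 0.6
```

Reporting such a tolerance-based metric alongside raw regression error makes the practical usefulness of a score predictor easier to judge.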

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 27
901 Precise Identification of Clustered Regularly Interspaced Short Palindromic Repeats-Induced Mutations via Hidden Markov Model-Based Sequence Alignment

Authors: Jingyuan Hu, Zhandong Liu

Abstract:

CRISPR genome editing technology has transformed molecular biology by accurately targeting and altering an organism’s DNA. Despite the state-of-the-art precision of CRISPR genome editing, imprecise mutation outcomes and off-target effects present considerable risk, potentially leading to unintended genetic changes. Targeted deep sequencing, combined with bioinformatics sequence alignment, can detect such unwanted mutations. Nevertheless, the classical method, the Needleman-Wunsch (NW) algorithm, may produce false alignment outcomes, resulting in inaccurate mutation identification. The key to precisely identifying CRISPR-induced mutations lies in determining optimal parameters for the sequence alignment algorithm. Hidden Markov models (HMM) are ideally suited for this task, offering flexibility across CRISPR systems by leveraging forward-backward algorithms for parameter estimation. In this study, we introduce CRISPR-HMM, a statistical software package to precisely call CRISPR-induced mutations. We demonstrate that the software significantly improves precision in identifying CRISPR-induced mutations compared to NW-based alignment, thereby enhancing the overall understanding of the CRISPR gene-editing process.
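The forward algorithm that such HMM-based methods lean on (and that, together with the backward pass, drives parameter estimation) can be sketched generically; this is a toy discrete HMM, not the CRISPR-HMM alignment model itself:

```python
def forward_likelihood(obs, init, trans, emit):
    """Likelihood P(obs) of an observation sequence under a discrete
    HMM, computed with the forward algorithm.  init[s] is the start
    probability of state s, trans[s][t] the transition probability
    s -> t, and emit[s][o] the probability of emitting symbol o."""
    n_states = len(init)
    # alpha[s] = P(obs so far, current state = s)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * trans[s][t] for s in range(n_states)) * emit[t][o]
            for t in range(n_states)
        ]
    return sum(alpha)

# Toy 2-state model with a binary alphabet (all numbers hypothetical):
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
print(forward_likelihood([0, 1, 0], init, trans, emit))
```

The forward recursion computes in O(T·S²) what brute-force summation over all state paths would take O(Sᵀ) to do, which is what makes likelihood-based parameter fitting tractable.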

Keywords: CRISPR, HMM, sequence alignment, gene editing

Procedia PDF Downloads 26
900 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation

Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri

Abstract:

In recent years, non-invasive Focused Ultrasound (FU) has been utilized for generating bubbles (cavities) to ablate target tissue by mechanical fractionation. Intensities >10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation and the process is called Histotripsy. The ability to fractionate tissue from outside the body has many clinical applications including the destruction of the tumor mass. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied to particles at sub-micron size. The liquefied tissue will eventually be absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (< 1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low frequency signal modulates the amplitude of a higher frequency FU wave. Cavitation threshold is lower at low frequencies; the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The Amplitude Modulation technique can operate in both continuous wave (CW) and pulse wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0 % (thermal effect) to 100 % (cavitation effect), thus allowing a range of ablating effects from Hyperthermia to Histotripsy. Furthermore, changing the frequency of the modulating signal allows controlling the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (Chalk) material. The technique, when combined with MR or Ultrasound imaging, will present a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
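The amplitude-modulated drive signal described above can be sketched numerically; the carrier and modulation frequencies below are illustrative, matching only the orders of magnitude mentioned in the abstract (MHz carrier, kHz modulator):

```python
import numpy as np

def am_waveform(t, f_carrier, f_mod, m, amplitude=1.0):
    """Amplitude-modulated focused-ultrasound drive signal
    s(t) = A * (1 + m*sin(2*pi*f_mod*t)) * sin(2*pi*f_carrier*t),
    where m is the modulation index: m = 0 gives a pure carrier
    (thermal regime), m = 1 gives 100 % modulation, with the
    envelope swinging between 0 and 2A (cavitation regime)."""
    envelope = amplitude * (1.0 + m * np.sin(2 * np.pi * f_mod * t))
    return envelope * np.sin(2 * np.pi * f_carrier * t)

# Illustrative parameters: 1 MHz carrier modulated at 10 kHz,
# sampled over 100 microseconds.
t = np.linspace(0.0, 1e-4, 100_000)
s_cavitation = am_waveform(t, f_carrier=1e6, f_mod=1e4, m=1.0)
s_thermal = am_waveform(t, f_carrier=1e6, f_mod=1e4, m=0.0)
```

Sweeping m between 0 and 1 then interpolates between the purely thermal and the cavitation-dominated exposure, as the abstract describes.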

Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation

Procedia PDF Downloads 297
899 Cytology and Flow Cytometry of Three Japanese Drosera Species

Authors: Santhita Tungkajiwangkoon, Yoshikazu Hoshi

Abstract:

The three Japanese Drosera species are a good model for studying genome organization: they form a highly specialized morphological group adapted to insect trapping and have shown anti-inflammatory and antibacterial effects, which explains why botanists find these plants so appealing. Cytology and flow cytometry were used to investigate the genetic stability and to estimate the ploidy of the three related species Drosera rotundifolia L., Drosera spatulata Labill. and Drosera tokaiensis. The cytological studies by fluorescence staining (DAPI) showed that D. tokaiensis is an allopolyploid (2n = 6x = 60, hexaploid), a natural hybrid polyploid of D. rotundifolia and D. spatulata. D. rotundifolia is a diploid with middle-sized metaphase chromosomes (2n = 2x = 20) and is the paternal origin, while D. spatulata is a tetraploid with small metaphase chromosomes (2n = 4x = 40) and is the maternal origin. Flow cytometry analysis was used to confirm the ploidy level and DNA content of the plants. The 2C DNA value of D. rotundifolia was 2.8 pg, that of D. spatulata 1.6 pg and that of D. tokaiensis 3.9 pg. The 2C DNA value of D. tokaiensis would be expected to relate to its parents, but in the present study it deviated from the theoretical additive value of a hybrid combining the parental genomes. D. tokaiensis is thus possibly a natural hybrid in which hybridization during natural evolution caused genome reduction.
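The additive expectation mentioned at the end can be made concrete: if the allopolyploid had simply combined both full parental genomes, its 2C value would be the sum of the parental 2C values. A small worked check (an illustrative interpretation of the reported numbers, not the authors' own calculation):

```python
def genome_downsizing(parent_2c_values, observed_2c):
    """Additive expectation for a hybrid combining both full
    parental genomes (sum of parental 2C values, in pg) and the
    percent reduction of the observed hybrid 2C relative to it."""
    expected = sum(parent_2c_values)
    reduction_pct = 100.0 * (expected - observed_2c) / expected
    return expected, reduction_pct

# Reported 2C values: D. rotundifolia 2.8 pg, D. spatulata 1.6 pg,
# D. tokaiensis (hybrid) 3.9 pg.
expected, reduction = genome_downsizing([2.8, 1.6], 3.9)
print(round(expected, 1), round(reduction, 1))
```

On this reading the observed 3.9 pg falls about 11 % short of the 4.4 pg additive expectation, consistent with the genome reduction the abstract invokes.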

Keywords: drosera, hybrid, cytology, flow cytometry

Procedia PDF Downloads 362
898 Maturity Transformation Risk Factors in Islamic Banking: An Implication of Basel III Liquidity Regulations

Authors: Haroon Mahmood, Christopher Gan, Cuong Nguyen

Abstract:

Maturity transformation risk is highlighted as one of the major causes of the recent global financial crisis. Basel III has proposed new liquidity regulations for the transformation function of banks and hence to monitor this risk. Specifically, the net stable funding ratio (NSFR) is introduced to enhance medium- and long-term resilience against liquidity shocks. Islamic banking is widely accepted in many parts of the world and contributes a significant portion of the financial sector in many countries. Using a dataset of 68 fully fledged Islamic banks from 11 different countries over the period 2005-2014, this study attempts to analyze various factors that may significantly affect the maturity transformation risk in these banks. We utilize a 2-step system GMM estimation technique on an unbalanced panel and find that bank capital, credit risk, financing, size and market power are the most significant among the bank-specific factors. Also, gross domestic product and inflation are the significant macro-economic factors influencing this risk. However, bank profitability, asset efficiency, and income diversity are found to be insignificant in determining the maturity transformation risk in the Islamic banking model.

Keywords: Basel III, Islamic banking, maturity transformation risk, net stable funding ratio

Procedia PDF Downloads 392
897 Indian Road Traffic Flow Analysis Using Blob Tracking from Video Sequences

Authors: Balaji Ganesh Rajagopal, Subramanian Appavu alias Balamurugan, Ayyalraj Midhun Kumar, Krishnan Nallaperumal

Abstract:

Intelligent Transportation Systems (ITS) are an emerging area that addresses multiple transportation problems. Several forms of input are needed in order to solve ITS problems. The Advanced Traveler Information System (ATIS) is a core and important ITS area. It involves travel time forecasting, efficient road map analysis and cost-based path selection, detection of vehicles in dynamic conditions, and traffic congestion state forecasting. This article designs and provides an algorithm for traffic data generation which can be used for the above-mentioned ATIS applications. Taking the real-world traffic situation as input in the form of video sequences, the algorithm determines the traffic density in terms of congestion and the number of vehicles in a given path, which can be fed to various ATIS applications. The algorithm deduces the key frame from the video sequences and then performs blob detection, identification and tracking using a connected components algorithm to determine the correlation between the vehicles moving in the real road scene.
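Blob identification via connected components, as used in the algorithm, can be sketched as a flood fill over a binary foreground mask; the toy mask below is hypothetical, standing in for a background-subtracted video frame:

```python
def count_blobs(mask):
    """Count 4-connected components of 1-pixels in a binary mask,
    as used to identify candidate vehicle blobs in a frame."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]  # iterative flood fill
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] == 1 and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

# Toy foreground mask with two separate "vehicle" blobs:
frame = [[1, 1, 0, 0],
         [1, 0, 0, 1],
         [0, 0, 1, 1]]
print(count_blobs(frame))  # 2
```

Tracking then associates the blobs of consecutive frames, from which relative vehicle velocities and congestion measures follow.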

Keywords: traffic transportation, traffic density estimation, blob identification and tracking, relative velocity of vehicles, correlation between vehicles

Procedia PDF Downloads 490
896 Analyzing the Effects of Supply and Demand Shocks in the Spanish Economy

Authors: José M Martín-Moreno, Rafaela Pérez, Jesús Ruiz

Abstract:

In this paper we use a small open economy Dynamic Stochastic General Equilibrium Model (DSGE) for the Spanish economy to search for a deeper characterization of the determinants of Spain’s macroeconomic fluctuations throughout the period 1970-2008. In order to do this, we distinguish between tradable and non-tradable goods to take into account the fact that the presence of non-tradable goods in this economy is one of the largest in the world. We estimate a DSGE model with supply and demand shocks (sectorial productivity, public spending, international real interest rate and preferences) using Kalman Filter techniques. We find the following results. First of all, our variance decomposition analysis suggests that 1) the preference shock basically accounts for private consumption volatility, 2) the idiosyncratic productivity shock accounts for non-tradable output volatility, and 3) the sectorial productivity shock along with the international interest rate both greatly account for tradable output. Secondly, the model closely replicates the time path observed in the data for the Spanish economy and finally, the model captures the main cyclical qualitative features of this economy reasonably well.
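The Kalman filter at the heart of the estimation step can be sketched in its simplest scalar form; this is a generic illustration, not the authors' actual DSGE state space, and the parameters a, q, h, r below are placeholders:

```python
def kalman_step(x, p, z, a=1.0, q=0.01, h=1.0, r=0.1):
    """One predict/update cycle of a scalar Kalman filter for the
    state space  x_t = a*x_{t-1} + w_t,  z_t = h*x_t + v_t,
    with Var(w) = q and Var(v) = r.  Returns the posterior mean
    and variance of the state given the new observation z."""
    # Predict
    x_pred = a * x
    p_pred = a * a * p + q
    # Update
    k = p_pred * h / (h * h * p_pred + r)  # Kalman gain
    x_post = x_pred + k * (z - h * x_pred)
    p_post = (1.0 - k * h) * p_pred
    return x_post, p_post
```

Run recursively over the sample, the filter also delivers the prediction-error likelihood that DSGE estimation maximizes.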

Keywords: business cycle, DSGE models, Kalman filter estimation, small open economy

Procedia PDF Downloads 392
895 Sensor Monitoring of the Concentrations of Different Gases Present in Synthesis of Ammonia Based on Multi-Scale Entropy and Multivariate Statistics

Authors: S. Aouabdi, M. Taibi

Abstract:

The supervision of chemical processes is the subject of increased development because of increasing demands on reliability and safety. An important aspect of the safe operation of a chemical process is the earlier detection of process faults or other special events, and the location and removal of the factors causing such events, than is possible by conventional limit and trend checks. With the aid of process models, estimation and decision methods, it is possible to monitor hundreds of variables in a single operating unit, and these variables may be recorded hundreds or thousands of times per day. In the absence of an appropriate processing method, only limited information can be extracted from these data. Hence, a tool is required that can project the high-dimensional process space into a low-dimensional space amenable to direct visualization, and that can also identify key variables and important features of the data. Our contribution is the development of a new monitoring method based on multi-scale entropy (MSE) to characterize the behaviour of the concentrations of the different gases present in the synthesis; a soft sensor based on PCA is applied to estimate these variables.
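The first step of multi-scale entropy is coarse-graining the signal at each scale factor; sample entropy is then computed on each coarse-grained series. A minimal sketch of the coarse-graining step (the signal below is hypothetical):

```python
def coarse_grain(signal, scale):
    """Coarse-grained series used by multi-scale entropy (MSE):
    each point is the mean of `scale` consecutive, non-overlapping
    samples of the original signal.  Computing sample entropy on
    this series, once per scale factor, yields the MSE curve."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

# Hypothetical concentration readings:
x = [2, 4, 6, 8, 1, 3, 5, 7]
print(coarse_grain(x, 2))  # [3.0, 7.0, 2.0, 6.0]
```

Comparing entropy across scales is what distinguishes genuinely complex dynamics from uncorrelated noise, whose entropy drops quickly as the scale grows.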

Keywords: ammonia synthesis, concentrations of different gases, soft sensor, multi-scale entropy, multivariate statistics

Procedia PDF Downloads 314
894 Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform

Authors: Shih-Wen Hsiao, Yi-Cheng Tsao

Abstract:

In the fields of reverse engineering and the creative industries, applying a 3D scanning process to obtain the geometric form of objects is a mature and common technique. For instance, organic objects such as faces and non-organic objects such as products can be scanned to acquire geometric information for further application. However, although the data resolution of 3D scanning devices is increasing and complementary applications are more and more abundant, the penetration rate of 3D scanning among the public is still limited by the relatively high price of the devices. On the other hand, Kinect, released by Microsoft, is known for its powerful functions, considerably low price, and complete technology and database support. Therefore, related studies can be done with Kinect at acceptable cost and data precision. Because Kinect relies on an optical mechanism to extract depth information, limitations arise from the straight path of the light. Thus, when applying a single Kinect for 3D scanning, various angles are required sequentially to obtain the complete 3D information of the object. An integration process which combines the 3D data from the different angles by certain algorithms is also required. This sequential scanning process costs much time, and the complex integration process often encounters technical problems. Therefore, this paper aimed to apply multiple Kinects simultaneously to the development of a rapid 3D mannequin scan platform and proposed suggestions on the number and angles of the Kinects. In the content, a method of establishing the coordination based on the relation between the mannequin and the specifications of Kinect is proposed, and a suggestion for the angles and number of Kinects is also described. An experiment applying multiple Kinects to the scanning of a 3D mannequin was constructed with the Microsoft API, and the results show that the time required for scanning and the technical threshold can be reduced in the fashion and garment design industries.

Keywords: 3D scan, depth sensor, fashion and garment design, mannequin, multiple Kinect sensor

Procedia PDF Downloads 347
893 Simultaneous Extraction and Estimation of Steroidal Glycosides and Aglycone of Solanum

Authors: Karishma Chester, Sarvesh Paliwal, Sayeed Ahmad

Abstract:

Solanum nigrum L. (family: Solanaceae) is an important Indian medicinal plant and has been used in various traditional formulations for hepato-protection. It has been reported to contain significant amounts of steroidal glycosides, such as solamargine and solasonine, as well as their aglycone solasodine. These markers, pharmacologically active metabolites of several members of the Solanaceae, have been the subject of several attempts at extraction and quantification, but always separately for the glycoside and aglycone parts because of their opposite polarity. Here, we propose for the first time the simultaneous extraction and quantification of the aglycone (solasodine) and the glycosides (solamargine and solasonine) in leaves and berries of S. nigrum using solvent extraction followed by HPTLC analysis. Simultaneous extraction was carried out by sonication in a chloroform-methanol mixture as solvent. The quantification was done using silica gel 60 F254 HPTLC plates as stationary phase and chloroform: methanol: acetone: 0.5 % ammonia (7: 2.5: 1: 0.4 v/v/v/v) as mobile phase at 400 nm, after derivatization with anisaldehyde-sulfuric acid reagent. The method was validated as per the ICH guideline for calibration, linearity, precision, recovery, robustness, specificity, LOD, and LOQ. The statistical data obtained for validation showed that the method can be used routinely for quality control of the various solanaceous drugs reported to contain these markers, as well as of traditional formulations containing those plants as an ingredient.
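The LOD and LOQ figures required by the ICH validation are commonly derived from the calibration curve as LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation and S the slope. A sketch with hypothetical calibration data, not the study's measurements:

```python
import numpy as np

def lod_loq(concentrations, responses):
    """ICH Q2-style detection and quantification limits from a
    linear calibration curve: LOD = 3.3*sigma/S, LOQ = 10*sigma/S,
    where S is the fitted slope and sigma the standard deviation
    of the residuals about the fitted line."""
    slope, intercept = np.polyfit(concentrations, responses, 1)
    residuals = np.asarray(responses, dtype=float) - (slope * np.asarray(concentrations) + intercept)
    sigma = residuals.std(ddof=2)  # two fitted parameters
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical calibration points (amount per spot vs peak area):
conc = [100, 200, 300, 400, 500]
area = [1010, 2005, 2990, 4020, 4985]
lod, loq = lod_loq(conc, area)
```

By construction LOQ/LOD = 10/3.3, so a validated method reports both but they carry the same underlying noise estimate.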

Keywords: Solanum nigrum, solasodine, solamargine, solasonine, quantification

Procedia PDF Downloads 312
892 The Impact of Research and Development Cooperation Partner Diversity, Knowledge Source Diversity and Knowledge Source Network Embeddedness on Radical Innovation: Direct Relationships and Interaction with Non-Price Competition

Authors: Natalia Strobel, Jan Kratzer

Abstract:

In this paper, we test whether different types of research and development (R&D) alliances positively impact the radical innovation performance of firms. We differentiate between R&D alliances without external R&D orders and embeddedness in the knowledge source network, and we test the differences between R&D alliances diversified domestically and those diversified abroad. Moreover, we test how non-price competition influences the impact of domestically diversified R&D alliances and of R&D alliances diversified abroad on radical innovation performance. Our empirical analysis is based on the comprehensive Swiss innovation panel, which allowed us to study 3520 firms between 1996 and 2011 in three-year intervals. We analyzed the data with a linear estimation with the Swamy-Arora transformation using the plm package in R. Our results show, as hypothesized, a positive impact of R&D alliance diversity, both abroad and domestically, on radical innovation performance. The effect of non-price competition is, in contrast to our hypothesis, not significant. This suggests that the diversity of R&D alliances is highly advantageous independent of non-price competition.

Keywords: R&D alliances, partner diversity, knowledge source diversity, non-price competition, absorptive capacity

Procedia PDF Downloads 339
891 Feasibility Studies through Quantitative Methods: The Revamping of a Tourist Railway Line in Italy

Authors: Armando Cartenì, Ilaria Henke

Abstract:

Recently, the Italian government has approved a new law on public contracts and has been laying the groundwork for restarting a planning phase. The government has adopted the indications given by the European Commission regarding the estimation of external costs within Cost-Benefit Analysis and has approved the ‘Guidelines for the Assessment of Investment Projects’. In compliance with the new Italian law, the aim of this research was to perform a feasibility study applying quantitative methods to the revamping of an Italian tourist railway line. A Cost-Benefit Analysis was performed, starting from the quantification of the passenger demand potentially interested in using the revamped rail services. The benefits due to the reduction of external costs were also quantified as variations with respect to the no-project scenario: climate change, air pollution, noise, congestion, and accidents. The estimation results were expressed in terms of Measures of Effectiveness, showing a positive Net Present Value of about 27 million euros, an Internal Rate of Return much greater than the discount rate, a benefit/cost ratio equal to 2, and a payback period of 15 years.
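The Measures of Effectiveness quoted above (NPV, benefit/cost ratio, payback period) follow from discounting yearly net benefits. A minimal sketch with hypothetical cash flows and an assumed discount rate (none of the figures below are from the study):

```python
# Hedged cost-benefit sketch: NPV and payback period for an illustrative
# project. Cash flows (millions of euros) and the discount rate are
# assumptions, not the study's inputs.

def npv(rate, flows):
    """Discounted sum of yearly net flows; flows[0] is year 0."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def payback_period(flows):
    """First year in which the cumulative undiscounted flow turns non-negative."""
    total = 0.0
    for t, f in enumerate(flows):
        total += f
        if total >= 0:
            return t
    return None

rate = 0.03                    # assumed social discount rate
flows = [-30.0] + [3.0] * 30   # assumed investment, then yearly net benefits
value = npv(rate, flows)
pb = payback_period(flows)
print(f"NPV = {value:.1f} M EUR, payback = {pb} years")
```

With these assumed figures the project would be worthwhile (positive NPV); the study's own inputs are what yield the roughly 27 million euro value quoted above.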

Keywords: cost-benefit analysis, evaluation analysis, demand management, external cost, transport planning, quality

Procedia PDF Downloads 196
890 Associated Risks of Spontaneous Lung Collapse after Shoulder Surgery: A Literature Review

Authors: Fiona Bei Na Tan, Glen Wen Kiat Ho, Ee Leen Liow, Li Yin Tan, Sean Wei Loong Ho

Abstract:

Background: Shoulder arthroscopy is an increasingly common procedure. Pneumothorax post-shoulder arthroscopy is a rare complication. Objectives: Our aim is to present a case report of pneumothorax after shoulder arthroscopy and to conduct a literature review to evaluate the possible risk factors associated with developing a pneumothorax during or after shoulder arthroscopy. Case Report: We report the case of a 75-year-old male non-smoker who underwent left shoulder arthroscopy without regional anaesthesia and in the left lateral position. The general anaesthesia and surgery were uncomplicated. The patient desaturated postoperatively and was found to have a pneumothorax on examination and chest X-ray. A chest tube drain was promptly inserted into the right chest. He had an uncomplicated postoperative course. Methods: A PubMed, Medline, and Cochrane database search was carried out using the terms shoulder arthroplasty, pneumothorax, pneumomediastinum, and subcutaneous emphysema. We selected full-text articles written in English. Results: Thirty-two articles were identified and thoroughly reviewed. Based on our inclusion and exclusion criteria, 14 articles, which included 20 cases of pneumothorax during or after shoulder arthroscopy, were included. Eighty percent (16/20) of pneumothoraxes occurred postoperatively. In the articles that specify the side of the pneumothorax, 91% (10/11) occurred on the ipsilateral side of the arthroscopy. Eighty-eight percent (7/8) of pneumothoraxes occurred when subacromial decompression was performed. Fifty-six percent (9/16) occurred in patients placed in the lateral decubitus position. Only 30% (6/20) occurred in current or ex-smokers, and only 25% (5/20) had a pre-existing lung condition. Overall, of the articles that posit a mechanism, 75% (9/12) deem the pathogenesis to be multifactorial. Conclusion: The exact mechanism of pneumothorax is currently unknown. Awareness of this complication and timely recognition are important to prevent life-threatening sequelae. Surgeons should have a low threshold for obtaining diagnostic plain radiographs in the event of clinical suspicion.

Keywords: rotator cuff repair, decompression, pressure, complication

Procedia PDF Downloads 51
889 Determination of Economic and Ecological Potential of Bio Hydrogen Generated through Dark Photosynthesis Process

Authors: Johannes Full, Martin Reisinger, Alexander Sauer, Robert Miehe

Abstract:

The use of biogenic residues for the biotechnological production of chemical energy carriers for electricity and heat generation, as well as for mobile applications, is an important lever for the shift away from fossil fuels towards a carbon dioxide-neutral, post-fossil future. A multitude of promising biotechnological processes therefore needs to be compared against each other. For this purpose, a multi-objective target system and a corresponding methodology for the evaluation of the underlying key figures are presented in this paper, which can serve as a basis for decision-making by companies and for promotional policy measures. The methodology is applied here for the first time to the economic and ecological potential of bio-hydrogen production, using the example of hydrogen production from fruit and milk production waste with the purple bacterium R. rubrum (the so-called dark photosynthesis process). The substrate used in this cost-effective and scalable process is fructose from waste material and waste deposits. Based on an estimation of the biomass potential of such fructose residues, the new methodology is used to compare different scenarios for the production and usage of bio-hydrogen through the considered process. In conclusion, this paper presents, using the example of the promising dark photosynthesis process, a methodology to evaluate the ecological and economic potential of the biotechnological production of bio-hydrogen from residues and waste.

Keywords: biofuel, hydrogen, R. rubrum, bioenergy

Procedia PDF Downloads 174
888 A Range of Steel Production in Japan towards 2050

Authors: Reina Kawase

Abstract:

Japan set the goal of an 80% reduction in GHG emissions by 2050. To consider countermeasures for reducing GHG emissions, estimating the production of energy-intensive materials, such as steel, is essential. About 50% of the steel produced in Japan is exported, so it is necessary to consider steel production including exports. Steel production in Japan from 2005 to 2050 was estimated under various global assumptions based on combinations of scenarios, such as goods trade scenarios and steel-making process selection scenarios. Process selection scenarios determine the volume of steel production by process (basic oxygen furnace and electric arc furnace), considering the steel consumption projection, the supply-demand balance of steel, and the scrap surplus. The range of steel production by process was analyzed. Maximum steel production was estimated under the scenario that consumes scrap in domestic steel production at the maximum level. In 2035, steel production reaches 149 million tons because of the increase in electric arc furnace steel. However, it decreases towards 2050 and amounts to 120 million tons, which is almost the same as the current level. Minimum steel production occurs under the scenario that assumes technological progress in steel making and considers the supply-demand balance in each region. Steel production decreases from the base year and is 44 million tons in 2050.

Keywords: goods trade scenario, steel making process selection scenario, steel production, global warming

Procedia PDF Downloads 360
887 Degree of Bending in Axially Loaded Tubular KT-Joints of Offshore Structures: Parametric Study and Formulation

Authors: Hamid Ahmadi, Shadi Asoodeh

Abstract:

The fatigue life of the tubular joints commonly found in the offshore industry depends not only on the value of the hot-spot stress (HSS) but is also significantly influenced by the through-the-thickness stress distribution characterized by the degree of bending (DoB). The determination of DoB values in a tubular joint is essential for improving the accuracy of fatigue life estimation using the stress-life (S–N) method and particularly for predicting fatigue crack growth based on the fracture mechanics (FM) approach. In the present paper, data extracted from finite element (FE) analyses of tubular KT-joints, verified against experimental data and parametric equations, were used to investigate the effects of geometrical parameters on DoB values at the crown 0˚, saddle, and crown 180˚ positions along the weld toe of the central brace in tubular KT-joints subjected to axial loading. The parametric study was followed by a set of nonlinear regression analyses to derive DoB parametric formulas for the fatigue analysis of KT-joints under axial loads. The tubular KT-joint is quite a common joint type found in steel offshore structures. However, despite the crucial role of the DoB in evaluating the fatigue performance of tubular joints, this paper is the first attempt to study and formulate the DoB values in KT-joints.
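The degree of bending is conventionally defined as the ratio of the bending stress to the total (membrane plus bending) stress through the wall thickness. A small illustrative calculation with hypothetical surface stresses, not results of the FE analyses:

```python
# Hedged sketch: DoB from outer- and inner-surface stresses at a weld-toe
# position. The decomposition into membrane and bending components is the
# standard one; the stress values are illustrative assumptions.

def degree_of_bending(sigma_outer, sigma_inner):
    """DoB = sigma_b / (sigma_b + sigma_m), from the two surface stresses (MPa)."""
    sigma_m = (sigma_outer + sigma_inner) / 2.0  # membrane component
    sigma_b = (sigma_outer - sigma_inner) / 2.0  # bending component
    return sigma_b / (sigma_b + sigma_m)

dob = degree_of_bending(200.0, 40.0)  # hypothetical hot-spot stresses
print(f"DoB = {dob:.2f}")
```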

Keywords: tubular KT-joint, fatigue, degree of bending (DoB), axial loading, parametric formula

Procedia PDF Downloads 338
886 Estimation of Transition and Emission Probabilities

Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi

Abstract:

Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction. It is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters. Any constants appearing in the model are specified by these parameters, which also provide a mechanism for the efficient and accurate use of data. To estimate these model parameters, many algorithms exist, of which the most popular is the Expectation-Maximization (EM) algorithm. These model parameters are estimated with the use of protein datasets such as RS126 by means of the Bayesian probabilistic method (the data set being categorical). This work can then be extended to comparing the efficiency of the EM algorithm with that of other algorithms for estimating the model parameters, which will in turn lead to an efficient component for protein secondary structure prediction. Further, this paper provides scope for using these parameters to predict the secondary structure of proteins with machine learning techniques such as neural networks and fuzzy logic. The ultimate objective is to obtain greater accuracy than previously achieved.
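For the supervised case (labelled structure data such as RS126), the transition and emission probabilities can be estimated by simple maximum-likelihood counting; EM/Baum-Welch generalizes this to unlabelled sequences. A toy sketch with made-up sequences, not RS126 data:

```python
# Count-based ML estimation of HMM transition and emission probabilities from
# labelled sequences (states = secondary-structure classes such as H/E/C,
# symbols = residues). The two toy sequence pairs are illustrative.
from collections import Counter, defaultdict

def estimate_hmm(pairs):
    """pairs: list of (observation_sequence, state_sequence) strings."""
    trans, emit = defaultdict(Counter), defaultdict(Counter)
    for obs, states in pairs:
        for s_prev, s_next in zip(states, states[1:]):
            trans[s_prev][s_next] += 1      # state-to-state transitions
        for o, s in zip(obs, states):
            emit[s][o] += 1                 # symbol emissions per state
    norm = lambda c: {k: v / sum(c.values()) for k, v in c.items()}
    return ({s: norm(c) for s, c in trans.items()},
            {s: norm(c) for s, c in emit.items()})

data = [("AGAV", "HHCC"), ("GVAG", "CCHH")]   # made-up residue/state pairs
A, B = estimate_hmm(data)
print(A["H"], B["C"])
```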

Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics

Procedia PDF Downloads 452
885 Flood Predicting in Karkheh River Basin Using Stochastic ARIMA Model

Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh

Abstract:

Floods have huge environmental and economic impacts. Therefore, flood prediction is given a lot of attention due to its importance. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using an ARIMA model. For this purpose, we used the Box-Jenkins approach, which consists of a four-stage method: model identification, parameter estimation, diagnostic checking, and forecasting (prediction). The main tools used in ARIMA modelling were the SAS and SPSS software. Model identification was done by visual inspection of the ACF and PACF. The SAS software computed the model parameters using the ML, CLS, and ULS methods. The diagnostic checking tests (AIC criterion, RACF graph, and RPACF graph) were used for verification of the selected model. In this study, the best ARIMA model for the Annual Maximum Discharge (AMD) time series was (4,1,1), with an AIC value of 88.87. The RACF and RPACF showed the residuals’ independence. Forecasting AMD for 10 future years showed the ability of the model to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R²).
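As a hedged illustration of the differencing-plus-autoregression idea behind ARIMA (the study's actual fit is an ARIMA(4,1,1) in SAS/SPSS), the sketch below first-differences a synthetic annual-maximum series and fits a single AR coefficient by least squares:

```python
# Minimal Box-Jenkins-style sketch: apply the d = 1 differencing step, fit an
# AR(1) coefficient by least squares on the differenced series, and make a
# one-step forecast. A toy stand-in for the full ARIMA(4,1,1) fit; the AMD
# values below are synthetic, not Karkheh River data.

def fit_ar1(x):
    """Least-squares AR(1) coefficient phi for x_t = phi * x_{t-1} + e_t."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    return num / den

series = [420, 510, 480, 630, 590, 700, 660, 810]   # synthetic AMD values
diff = [b - a for a, b in zip(series, series[1:])]  # differencing, d = 1
phi = fit_ar1(diff)
forecast = series[-1] + phi * diff[-1]              # undo the differencing
print(f"phi = {phi:.2f}, next-year forecast = {forecast:.0f}")
```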

Keywords: time series modelling, stochastic processes, ARIMA model, Karkheh river

Procedia PDF Downloads 271
884 GIS-Based Identification of Overloaded Distribution Transformers and Calculation of Technical Electric Power Losses

Authors: Awais Ahmed, Javed Iqbal

Abstract:

Pakistan has for many years been facing extreme challenges from an energy deficit due to a shortage of power generation compared to increasing demand. A part of this energy deficit is also contributed by the power lost in the transmission and distribution network. Unfortunately, distribution companies are not equipped with modern technologies and methods to identify and eliminate these losses. According to estimates, total energy lost in the early 2000s was between 20 and 26 percent. To address this issue, the present research study was designed with the objective of developing a standalone GIS application for distribution companies, with the capability of loss calculation as well as identification of overloaded transformers. For this purpose, the Hilal Road feeder in the Faisalabad Electric Supply Company (FESCO) was selected as the study area. An extensive GPS survey was conducted to identify each consumer, linking it to the secondary pole of the transformer, geo-referencing equipment, and documenting conductor sizes. To identify overloaded transformers, the accumulated kWh reading of the consumers on a transformer was compared with a threshold kWh. Technical losses of the 11 kV and 220 V lines were calculated using the data from the substation and the resistance of the network calculated from the geo-database. To automate the process, a standalone GIS application was developed using ArcObjects with engineering analysis capabilities. The application uses the GIS database developed for the 11 kV and 220 V lines to display and query spatial data and presents the results in the form of graphs. The results show a technical loss of about 14% on both the high tension (HT) and low tension (LT) networks, while 4 out of 15 general-duty transformers were found to be overloaded. The study shows that GIS can be a very effective tool for distribution companies in the management and planning of their distribution networks.
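The two feeder checks described, I²R technical loss on a line segment and the kWh-threshold overload test for transformers, can be sketched as follows; the resistance, current, and threshold values are hypothetical, not FESCO data:

```python
# Hedged sketch of the two checks above. Conductor resistance, load current
# and the kWh threshold are illustrative assumptions.

def line_loss_kw(current_a, resistance_ohm, phases=3):
    """Technical loss of a line segment, I^2 * R per phase, in kW."""
    return phases * current_a ** 2 * resistance_ohm / 1000.0

def is_overloaded(consumer_kwh, threshold_kwh):
    """Compare accumulated consumer kWh on a transformer against a threshold."""
    return sum(consumer_kwh) > threshold_kwh

loss = line_loss_kw(120.0, 0.35)                    # assumed 11 kV segment
flag = is_overloaded([900, 1250, 780, 1100], 3500)  # assumed meter readings
print(f"segment loss = {loss:.2f} kW, overloaded = {flag}")
```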

Keywords: geographical information system, GIS, power distribution, distribution transformers, technical losses, GPS, SDSS, spatial decision support system

Procedia PDF Downloads 353
883 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation using PINN

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful for studying various optical phenomena.
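The weighted loss function mentioned above combines mean-squared errors on initial data, boundary data, and PDE residual collocation points with adjustable weights. A minimal sketch of that composition (the error values are placeholders; this is not an FLE solver):

```python
# Hedged sketch of a weighted PINN loss: MSE terms on initial-condition
# points, boundary points and residual collocation points, combined with
# adjustable weight coefficients. All error values below are placeholders.

def mse(errors):
    return sum(e * e for e in errors) / len(errors)

def pinn_loss(ic_err, bc_err, res_err, w_ic=1.0, w_bc=1.0, w_res=1.0):
    """L = w_ic * MSE_ic + w_bc * MSE_bc + w_res * MSE_res."""
    return w_ic * mse(ic_err) + w_bc * mse(bc_err) + w_res * mse(res_err)

# Placeholder per-point network errors; the residual term is down-weighted.
total = pinn_loss([0.1, -0.2], [0.05, 0.0], [0.3, -0.1], w_res=0.5)
print(total)
```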

Keywords: deep learning, optical soliton, neural network, partial differential equation

Procedia PDF Downloads 98
882 An Insight into the Probabilistic Assessment of Reserves in Conventional Reservoirs

Authors: Sai Sudarshan, Harsh Vyas, Riddhiman Sherlekar

Abstract:

The oil and gas industry has been unwilling to adopt a stochastic definition of reserves. Nevertheless, Monte Carlo simulation methods have gained acceptance among engineers, geoscientists, and other professionals who want to evaluate prospects or otherwise analyze problems that involve uncertainty. One of the common applications of Monte Carlo simulation is the estimation of recoverable hydrocarbons from a reservoir. Monte Carlo simulation makes use of random samples of parameters or inputs to explore the behavior of a complex system or process. It finds application whenever one needs to make an estimate, forecast, or decision where there is significant uncertainty. First, the project focuses on performing Monte Carlo simulation on a given data set using the U.S. Department of Energy's MonteCarlo software, a freeware E&P tool. Further, an algorithm for simulation has been developed in MATLAB; the program performs the simulation by prompting the user for input distributions and the parameters associated with each distribution (i.e., mean, standard deviation, min., max., most likely, etc.). It also prompts the user for the desired probability for which reserves are to be calculated. The algorithm so developed and tested in MATLAB was further implemented in Python, where existing libraries for statistics and graph plotting were imported to generate better outcomes. With the PyQt Designer, code for a simple graphical user interface was also written. The graphs so plotted were then validated against the results already available from the U.S. DOE MonteCarlo software.
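A volumetric Monte Carlo reserves estimate of the kind described can be sketched in a few lines: sample the uncertain inputs, compute recoverable oil per trial, and read off a percentile. The distributions below are illustrative assumptions, not the tool's defaults:

```python
# Hedged volumetric Monte Carlo sketch. The recoverable-oil formula is the
# standard 7758 * A * h * phi * (1 - Sw) * RF / Bo (stock-tank barrels);
# all input distributions and constants below are illustrative.
import random

def recoverable_stb(area_acres, thickness_ft, porosity, sw, rf, bo=1.2):
    """Volumetric recoverable oil in stock-tank barrels."""
    return 7758 * area_acres * thickness_ft * porosity * (1 - sw) * rf / bo

random.seed(42)  # reproducible sampling
trials = sorted(
    recoverable_stb(
        area_acres=random.uniform(800, 1200),
        thickness_ft=random.uniform(20, 40),
        porosity=random.uniform(0.15, 0.25),
        sw=random.uniform(0.25, 0.35),
        rf=random.uniform(0.20, 0.40),
    )
    for _ in range(10_000)
)
p90 = trials[int(0.10 * len(trials))]  # exceeded in 90% of trials
print(f"P90 = {p90 / 1e6:.1f} MMSTB")
```

Reading the sorted trials at the 10th, 50th, and 90th positions gives the conventional P90/P50/P10 reserve categories.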

Keywords: simulation, probability, confidence interval, sensitivity analysis

Procedia PDF Downloads 354
881 Measurement of the Dynamic Modulus of Elasticity of Cylindrical Concrete Specimens Used for the Cyclic Indirect Tensile Test

Authors: Paul G. Bolz, Paul G. Lindner, Frohmut Wellner, Christian Schulze, Joern Huebelt

Abstract:

Concrete, as a result of its use as a construction material, is not only subject to static loads but is also exposed to variable, time-variant, and oscillating stresses. In order to ensure the suitability of construction materials for resisting these cyclic stresses, different test methods are used for the systematic fatiguing of specimens, such as the cyclic indirect tensile test. A procedure is presented that allows the estimation of the degradation of cylindrical concrete specimens during the cyclic indirect tensile test by measuring the dynamic modulus of elasticity in different states of the specimens’ fatigue process. Two methods are used in addition to the cyclic indirect tensile test in order to examine the dynamic modulus of elasticity of cylindrical concrete specimens. One of the methods is based on the analysis of eigenfrequencies, whilst the other uses ultrasonic pulse measurements to estimate the material properties. A comparison between the dynamic moduli obtained using the three methods, which operate in different frequency ranges, shows good agreement. The concrete specimens’ fatigue process can therefore be monitored effectively and reliably.
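For the ultrasonic route, the dynamic modulus is commonly recovered from the longitudinal pulse velocity as E_dyn = ρ·v²·(1+ν)(1−2ν)/(1−ν). A small sketch with typical-concrete placeholder values (not measurements from the study):

```python
# Hedged sketch: dynamic modulus from the ultrasonic pulse velocity of a
# longitudinal wave, E_dyn = rho * v^2 * (1+nu)(1-2nu)/(1-nu). Density,
# velocity and Poisson's ratio are typical-concrete placeholders.

def dynamic_modulus_gpa(density_kg_m3, velocity_m_s, poisson):
    """Dynamic modulus of elasticity in GPa from pulse-velocity data."""
    e = density_kg_m3 * velocity_m_s ** 2 * \
        (1 + poisson) * (1 - 2 * poisson) / (1 - poisson)
    return e / 1e9

e_dyn = dynamic_modulus_gpa(2400, 4200, 0.2)  # assumed concrete properties
print(f"E_dyn = {e_dyn:.1f} GPa")
```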

Keywords: concrete, cyclic indirect tensile test, degradation, dynamic modulus of elasticity, eigenfrequency, fatigue, natural frequency, ultrasonic, ultrasound, Young’s modulus

Procedia PDF Downloads 152
880 Carbon-Encapsulated Iron Nanoparticles for Hydrogen Sulfide Removal

Authors: Meriem Abid, Erika Oliveria-Jardim, Andres Fullana, Joaquin Silvestre-Albero

Abstract:

The rapid industrial development associated with the increase of volatile organic compounds (VOCs) has seriously impacted the environment. Among these pollutants, hydrogen sulfide (H₂S) is known as a highly toxic, malodorous, flammable, and corrosive gas, which is emitted from diverse chemical processes, including industrial waste-gas streams, natural gas processing, and biogas purification. The high toxicity, corrosivity, and very low odor threshold of H₂S call for the urgent development of efficient desulfurization processes from the viewpoint of environmental protection and resource regeneration. In order to reduce H₂S emissions, effective technologies have been developed. The general methods of H₂S removal include amine aqueous solutions, adsorption processes, biological methods, and fixed-bed solid catalytic oxidation processes. Ecologically and economically, low-temperature direct oxidation of H₂S to elemental sulfur using catalytic oxidation is the preferred approach for treating H₂S-containing gas streams. A large number of catalysts made from carbon, metal oxides, clay, and other materials have been studied extensively for this application. In this sense, activated carbon (AC) is an attractive catalyst for H₂S removal because it features a high specific surface area, diverse functional groups, low cost, durability, and high efficiency. It is worth pointing out that AC is often modified with metal oxides to promote the efficiency of H₂S removal and to enhance the catalytic performance. Based on these premises, the main goal of the present study is the evaluation of the H₂S adsorption performance of carbon-encapsulated iron nanoparticles obtained from an olive mill, thermally treated at 600, 800, and 1000 ºC under anaerobic conditions. These results anticipate that carbon-encapsulated iron nanoparticles exhibit a promising performance for H₂S removal of up to 360 mg/g.

Keywords: H₂S removal, catalytic oxidation, carbon encapsulated iron, olive mill wastewater

Procedia PDF Downloads 64