Search results for: multiple linear models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5270


200 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids

Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit

Abstract:

Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious environmental hazard that causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles for representing the urban area topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation on the Reyran river valley as a consequence of the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with notable efficiency speedup due to parallelization.

Keywords: Flood modeling, dam-break, shallow water equations, Discontinuous Galerkin scheme, MUSCL scheme.
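
For reference, the depth-averaged 2D shallow water equations solved by the scheme above are commonly written in the conservative form below (a standard textbook formulation; the paper's well-balanced treatment of the source terms is not reproduced here):

\[
\frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x} + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S}(\mathbf{U}), \qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix}, \quad
\mathbf{F} = \begin{pmatrix} hu \\ hu^{2} + \tfrac{1}{2}gh^{2} \\ huv \end{pmatrix}, \quad
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^{2} + \tfrac{1}{2}gh^{2} \end{pmatrix},
\]
with source term \(\mathbf{S} = \bigl(0,\; -gh\,\partial_x z_b - ghS_{fx},\; -gh\,\partial_y z_b - ghS_{fy}\bigr)^{T}\), where \(h\) is the water depth, \((u,v)\) the depth-averaged velocities, \(g\) the gravitational acceleration, \(z_b\) the bed elevation and \(S_{fx}, S_{fy}\) the friction slopes.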

199 Dislocation Modelling of the 1997-2009 High-Precision Global Positioning System Displacements in Darjiling-Sikkim Himalaya, India

Authors: Kutubuddin Ansari, Malay Mukul, Sridevi Jade

Abstract:

We used high-precision Global Positioning System (GPS) measurements to geodetically constrain the motion of stations in the Darjiling-Sikkim Himalayan (DSH) wedge and examine the deformation at the Indian-Tibetan plate boundary using IGS (International GPS Service) fiducial stations. A high-precision GPS-based displacement and velocity field was measured in the DSH between 1997 and 2009. To obtain additional insight north of the Indo-Tibetan border and in the Darjiling-Sikkim-Tibet (DaSiT) wedge, published velocities from four stations J037, XIGA, J029 and YADO were also included in the analysis. India-fixed velocities, or the back-slip, were computed relative to the pole of rotation of the Indian Plate (Latitude 52.97 ± 0.22º, Longitude -0.30 ± 3.76º, and Angular Velocity 0.500 ± 0.008º/Myr) in the DaSiT wedge. Dislocation modelling was carried out with the back-slip, using a forward modelling approach based on dislocation theory, to find the finite rectangular dislocation (the causative fault) that best reproduces the observed back-slip. To find the best possible solution, three different models were attempted: first, slip along a single thrust fault; then two thrust faults; and finally three thrust faults were modelled to simulate the back-slip in the DaSiT wedge. The three-fault case best fits the measured displacements and is taken as the preferred solution.

Keywords: Global Positioning System, Darjiling-Sikkim Himalaya, Dislocation modelling.
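
The forward-modelling and model-selection strategy described above (trial 1-, 2- and 3-fault geometries whose predicted displacements are compared against the GPS-derived back-slip) can be sketched as follows. This is a generic outline, not the authors' code: forward_model stands for an Okada-type rectangular-dislocation solver to be supplied by the user, and the parameterisation is hypothetical.

import numpy as np
from scipy.optimize import minimize

def misfit(params, forward_model, station_xy, observed, n_faults):
    # Sum the surface displacements predicted for each trial fault and
    # compare them with the observed (GPS-derived) back-slip.
    per_fault = np.split(np.asarray(params, dtype=float), n_faults)
    predicted = sum(forward_model(p, station_xy) for p in per_fault)
    return float(np.sum((predicted - observed) ** 2))

def select_fault_model(forward_model, station_xy, observed, initial_guesses):
    """initial_guesses: {n_faults: flat initial parameter vector whose length
    is divisible by n_faults}. Returns (best_n_faults, optimisation result),
    mirroring the 1-, 2- and 3-fault comparison described in the abstract."""
    fits = {n: minimize(misfit, x0,
                        args=(forward_model, station_xy, observed, n),
                        method="Nelder-Mead")
            for n, x0 in initial_guesses.items()}
    best_n = min(fits, key=lambda n: fits[n].fun)
    return best_n, fits[best_n]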

198 Innovative Waste Management Practices in Remote Areas

Authors: Dolores Hidalgo, Jesús M. Martín-Marroquín, Francisco Corona

Abstract:

Municipal waste consists of a variety of items that are discarded every day by the population. It is usually collected by municipalities and includes waste generated by households, commercial activities (local shops) and public buildings. The composition of municipal waste varies greatly from place to place, being mostly related to levels and patterns of consumption, rates of urbanization, lifestyles, and local or national waste management practices. Each year, a huge amount of resources is consumed in the EU and, accordingly, a huge amount of waste is produced. The environmental problems derived from the management and processing of these waste streams are well known and include impacts on land, water and air. The situation in remote areas is even worse. Difficult access when climatic conditions are adverse, remoteness from centralized municipal treatment systems, and dispersion of the population are all factors that make remote areas a real municipal waste treatment challenge. Furthermore, the scope of the problem increases significantly because of the total lack of awareness of the existing risks in these areas, together with the poor implementation of a culture of waste minimization and responsible recycling. The aim of this work is to analyze the existing situation in remote areas with respect to the production of municipal waste and to evaluate the efficiency of different management alternatives. Ideas for improving waste management in remote areas include, for example: implementing self-management systems for the organic fraction; establishing door-to-door collection models; promoting small-scale treatment facilities; or adjusting the corresponding waste generation rates.

Keywords: Door to door collection, islands, isolated areas, municipal waste, remote areas, rural communities.

197 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technology, and specific procedures used in the digital investigation are not keeping up with criminal developments; criminals are therefore taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.
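
The case-based reasoning step mentioned above (matching a new investigation against previously solved cases) can be illustrated with a minimal nearest-case lookup. This is a toy sketch under assumed feature vectors and labels (all hypothetical), not the MADIK/ISA implementation.

import numpy as np

# Hypothetical feature vectors summarising past, solved investigations
# (e.g. counts of deleted files, executables found, network artefacts).
solved_cases = {
    "financial_fraud":   np.array([120.0, 3.0, 45.0]),
    "malware_incident":  np.array([10.0, 60.0, 5.0]),
    "data_exfiltration": np.array([30.0, 8.0, 200.0]),
}

def classify_case(new_case):
    """Return the label of the most similar solved case (cosine similarity)."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(solved_cases, key=lambda label: cosine(solved_cases[label], new_case))

print(classify_case(np.array([15.0, 55.0, 3.0])))  # -> malware_incident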

196 Toxicological and Histopathological Studies on the Effect of Tartrazine in Male Albino Rats

Authors: F. Alaa Ali, S. A. Sherein Abdelgayed, S. Osama. EL-Tawil, M. Adel Bakeer

Abstract:

Tartrazine is an organic azo dye food additive widely used in foods, drugs, and cosmetics. The present study aimed to investigate the toxic effects of tartrazine on kidney and liver biomarkers, in addition to investigating oxidative stress and histopathological changes in the liver and kidneys of 30 male rats. Tartrazine was orally administered daily at a dose of 200 mg/kg bw (1/10 LD50) for sixty days. Serum and tissue samples were collected at the end of the experiment to investigate the underlying mechanism of tartrazine through assessment of oxidative stress markers (glutathione (GSH), superoxide dismutase (SOD) and malondialdehyde (MDA)) and biochemical markers (alanine aminotransferase (ALT), aspartate aminotransferase (AST), total protein and urea). Liver and kidney tissues were collected and preserved in 10% formalin for histopathological examination. The obtained values were statistically analyzed by one-way analysis of variance (ANOVA) followed by a multiple comparison test. Biochemical analysis revealed that tartrazine induced a significant increase in serum ALT, AST, total protein and urea levels compared to the control group. Tartrazine caused a significant decrease in liver GSH and SOD compared to the control group, and an increase in liver MDA compared to the control group. Histopathology of the liver showed diffuse vacuolar degeneration in the hepatic parenchyma, and the portal area showed severe changes in the hepatoportal blood vessels and in the bile ducts. The kidneys showed degenerated tubules at the cortex together with mononuclear leucocyte inflammatory cell infiltration. There was perivascular edema with inflammatory cell infiltration surrounding the congested and hyalinized vascular walls of the blood vessels. The present study indicates that subchronic exposure to tartrazine has a toxic effect on the liver and kidneys, together with induction of oxidative stress through the formation of free radicals. Therefore, people should avoid the hazards of consuming tartrazine.

Keywords: Albino rats, tartrazine, toxicity, pathology
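
The statistical workflow named in the abstract (one-way ANOVA followed by a multiple comparison test) can be reproduced generically as below; the group labels and values are illustrative placeholders, not the study's data, and Tukey's HSD is used as one possible multiple comparison test.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative serum ALT values for a control group and two treated groups.
control = np.array([32.0, 35.1, 30.8, 33.4, 31.9])
treated_a = np.array([44.2, 47.5, 45.9, 43.8, 46.1])
treated_b = np.array([51.0, 49.3, 52.8, 50.7, 48.9])

# One-way ANOVA across the groups.
f_stat, p_value = f_oneway(control, treated_a, treated_b)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Follow-up multiple comparison (Tukey HSD) on the pooled observations.
values = np.concatenate([control, treated_a, treated_b])
groups = ["control"] * 5 + ["treated_a"] * 5 + ["treated_b"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))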

195 Exergy Based Performance Analysis of a Gas Turbine Unit at Various Ambient Conditions

Authors: Idris A. Elfeituri

Abstract:

This paper studies the effect of ambient conditions on the performance of a 285 MW gas turbine unit using the exergy concept. Based on the exergy balance models developed, a computer program has been constructed to investigate the performance of the power plant under varying ambient temperature and relative humidity conditions. The ambient temperature ranges from zero to 50 ºC and the relative humidity ranges from zero to 100%, while the unit load is kept constant at 100% of the design load. The exergy destruction ratio and exergy efficiency are determined for each component and for the entire plant. The results show a moderate increase in the total exergy destruction ratio of the plant from 62.05% to 65.20%, while the overall exergy efficiency decreases from 38.2% to 34.8% as the ambient temperature increases from zero to 50 ºC at all relative humidity values. Furthermore, an increase of 1 ºC in ambient temperature leads to a 0.063% increase in the total exergy destruction ratio and a 0.07% decrease in the overall exergy efficiency. The relative humidity has a remarkable influence at higher ambient temperatures on the exergy destruction ratio of the combustion chamber and on the exergy loss ratio of the exhaust gas, but almost no effect on the total exergy destruction ratio and overall exergy efficiency. At 50 ºC ambient temperature, the exergy destruction ratio of the combustion chamber increases from 30% to 52% while the exergy loss ratio of the exhaust gas decreases from 28% to 8% as the relative humidity increases from zero to 100%. In addition, the exergy analysis reveals that the combustion chamber and exhaust gas are the main sources of irreversibility in the gas turbine unit. It is also identified that the exergy efficiency and exergy destruction ratio depend considerably on the variations in ambient air temperature and relative humidity. Therefore, retrofitting the existing gas turbine plant with inlet air cooling and humidifier technologies should be considered seriously.

Keywords: Destruction, exergy, gas turbine, irreversibility, performance.
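
For clarity, the two indicators reported above are commonly defined as follows (one common convention; the paper may normalise the destruction ratio differently):

\[
y_{D,k} = \frac{\dot{E}x_{D,k}}{\dot{E}x_{F,\mathrm{tot}}}, \qquad
\eta_{ex} = \frac{\dot{E}x_{P}}{\dot{E}x_{F}} = 1 - \frac{\dot{E}x_{D,\mathrm{tot}} + \dot{E}x_{L}}{\dot{E}x_{F}},
\]
where \(\dot{E}x_{D,k}\) is the exergy destruction rate in component \(k\), \(\dot{E}x_{F}\) the fuel (input) exergy rate, \(\dot{E}x_{P}\) the product exergy rate and \(\dot{E}x_{L}\) the exergy loss rate (e.g. with the exhaust gas).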

194 Design Development of Floating Performance Structure for Coastal Areas in the Maltese Islands

Authors: Rebecca E. Dalli Gonzi, Joseph Falzon

Abstract:

Background: Islands in the Mediterranean region offer opportunities for various industries to take advantage of versatile floating structures in coastal areas. In the context of dense land use, marine structures can contribute to ensuring both terrestrial and marine resource sustainability. Objective: The aim of this paper is to present and critically discuss an array of issues that characterize the design process of a floating structure for coastal areas, and to present the challenges and opportunities of providing such multifunctional and versatile structures around the Maltese coastline. Research Design: A three-tier research design commenced with a systematic literature review. Semi-structured interviews were then conducted with stakeholders including a naval architect, a marine engineer and civil designers. This second stage preceded a focus group with stakeholders in the design and construction of marine lightweight structures. The three-tier research design ensured triangulation of issues. All phases of the study were governed by research ethics. Findings: Findings were grouped into three main themes: excellence, impact and implementation. These included design considerations, applications and potential impacts on local industry. The literature on the design and construction of marine structures in the Maltese Islands presented multiple gaps in the application of marine structures for local industries. Weather conditions, depth of the sea bed and wave action placed limitations on the design capabilities of the structure. Conclusion: Water structures offer great potential, and the conclusions demonstrate the applicability of such designs for Maltese waters. There is still no such provision within Maltese coastal areas for multi-purpose use. The introduction of such facilities presents a range of benefits for visiting tourists and locals, thereby offering a wide range of services to the tourism and marine industries. Costs for construction and adverse weather conditions were among the main limitations that shaped the design capacities of the water structures.

Keywords: Coastal areas, lightweight, marine structure, multipurpose, versatile, floating device.

193 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers’ needs in this strong market; this is especially true for customers who are looking to switch service providers. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish this. Churn prediction has become a very important topic for machine learning classification in the telecommunications industry, and understanding the factors of customer churn and how customers behave is very important to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers’ churn based on their past service usage history. To this end, the study makes use of feature selection, normalization, and feature engineering. The study then compared the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results. The results showed that Gradient Boosting with the feature selection technique performed best in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.

Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.
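
A compact scikit-learn sketch of the pipeline described above (normalization, feature selection, then comparison of the four classifiers by F1 and ROC-AUC) is given below. The file name, target column and number of selected features are hypothetical, the Orange dataset itself is not bundled here, and preprocessing of categorical columns is omitted.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("orange_churn.csv")             # hypothetical path
X, y = df.drop(columns=["churn"]), df["churn"]   # hypothetical target column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=42)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=300),
    "DecisionTree": DecisionTreeClassifier(),
    "GradientBoosting": GradientBoostingClassifier(),
}

for name, clf in models.items():
    pipe = Pipeline([
        ("scale", StandardScaler()),               # normalization
        ("select", SelectKBest(f_classif, k=10)),  # feature selection
        ("model", clf),
    ])
    pipe.fit(X_tr, y_tr)
    proba = pipe.predict_proba(X_te)[:, 1]
    print(name,
          "F1 =", round(f1_score(y_te, pipe.predict(X_te)), 3),
          "AUC =", round(roc_auc_score(y_te, proba), 3))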

192 Feature Point Reduction for Video Stabilization

Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin

Abstract:

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and should be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling. Corner detection is required only when the maintained feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly reducing the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, which also reduces the computational cost. In addition, the feature points remaining after reduction are sufficient for background object tracking, as demonstrated in the simple video stabilizer based on our proposed algorithm.

Keywords: background object tracking, feature point reduction, low cost tracking, video stabilization.
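
A condensed OpenCV/NumPy sketch of one iteration of the loop described above is shown below. It is a simplified reconstruction, not the authors' code: a full 6-parameter affine fit stands in for the simplified affine model, a plain z-score test on the residuals approximates the studentized-residual test, and MIN_POINTS is a hypothetical threshold.

import cv2
import numpy as np

MIN_POINTS = 50  # hypothetical threshold for re-running corner detection

def stabilize_step(prev_gray, cur_gray, maintained):
    # Re-detect corners only when the maintained set is too small.
    if maintained is None or len(maintained) < MIN_POINTS:
        maintained = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                             qualityLevel=0.01, minDistance=8)
    # Optical flow is computed for the maintained points only.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                              maintained, None)
    p0 = maintained[status.ravel() == 1].reshape(-1, 2)
    p1 = nxt[status.ravel() == 1].reshape(-1, 2)

    # Iteratively fit an affine motion model by least squares and discard
    # outliers (moving objects) until none are flagged.
    keep = np.ones(len(p0), dtype=bool)
    while True:
        A = np.hstack([p0[keep], np.ones((keep.sum(), 1))])   # [x y 1]
        M, *_ = np.linalg.lstsq(A, p1[keep], rcond=None)      # 3x2 affine
        resid = np.linalg.norm(A @ M - p1[keep], axis=1)
        z = (resid - resid.mean()) / (resid.std() + 1e-9)
        outliers = z > 2.5
        if not outliers.any() or keep.sum() - outliers.sum() < 3:
            break
        keep[np.flatnonzero(keep)[outliers]] = False
    # Surviving points are carried over as the next maintained set.
    return M.T, p1[keep].reshape(-1, 1, 2).astype(np.float32)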

191 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, and the resulting estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shaped beam; main-lobe deviation and high sidelobe levels also arise in the limited-snapshot case. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, mitigating their divergence and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better overall performance.

Keywords: Multi-carrier frequency diverse array, adaptive beamforming, correction index, limited snapshot, robust.
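
The three-step procedure listed above can be outlined numerically as follows. This is a generic sketch: the multi-carrier FDA steering vector is assumed to be supplied, a single unit-gain (MVDR-type) constraint represents the LCMV step, and the correction index alpha is only an illustrative stand-in for the authors' exponential correction.

import numpy as np

def corrected_mvdr_weights(snapshots, steering, n_interference, alpha=0.5):
    """snapshots: (N_sensors, K) complex array of K limited snapshots.
    steering: (angle, range) steering vector of the multi-carrier FDA.
    alpha: illustrative correction index applied to noise-subspace eigenvalues."""
    N, K = snapshots.shape
    R = snapshots @ snapshots.conj().T / K          # sample covariance

    # Eigendecomposition: large eigenvalues ~ interference subspace,
    # small eigenvalues ~ noise subspace (eigh returns ascending order).
    vals, vecs = np.linalg.eigh(R)
    noise_vals = vals[:N - n_interference]
    intf_vals = vals[N - n_interference:]

    # Exponential correction reduces the spread of the small (noise)
    # eigenvalues caused by the limited number of snapshots.
    noise_vals = noise_vals.mean() * (noise_vals / noise_vals.mean()) ** alpha
    R_corr = (vecs * np.concatenate([noise_vals, intf_vals])) @ vecs.conj().T

    # MVDR weights with a unit-gain constraint toward the target.
    Rinv_a = np.linalg.solve(R_corr, steering)
    return Rinv_a / (steering.conj() @ Rinv_a)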

190 Analysis of Career Support Programs for Olympic Athletes in Japan with Fifteen Conceptual Categories

Authors: Miyako Oulevey, Kaori Tsutsui, David Lavallee, Naohiko Kohtake

Abstract:

The Japan Sports Agency has made efforts to unify several career support programs for Olympic athletes prior to the 2020 Tokyo Olympics. One of these programs, the Japan Olympic Committee Career Academy (JCA), was established in 2008 to support Olympic athletes at their retirement. Research focusing on the service content of sport career support programs can help athletes experience a more positive transition. This study was designed to investigate the service content of the JCA program in relation to athletes’ career transition needs, including any differences in the reasons for retirement between Summer/Winter and male/female Olympic athletes, and to suggest directions for unifying the career support programs in Japan after hosting the Olympic Games, using sport career transition models. Semi-structured interviews were conducted with the JCA director, who started and has managed the program since its inception, and the analysis generated a total of 15 conceptual categories. Four conceptual categories concerned the “JCA situation”, four concerned “Athletes using the JCA”, and seven concerned “JCA current difficulties”. The analysis revealed that the JCA provided occupational support for both current and retired Olympic athletes; that other support, such as psychological support, was unclear due to the lack of psychological professionals in the JCA and the difficulty of collaborating with other sports organizations; and that there are differences in tendencies to visit the JCA, financial situations, and career choices depending on Summer/Winter and male/female athletes.

Keywords: Career support programs, causes of career termination, Olympic athlete, Olympic committee.

189 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok

Abstract:

The brain’s functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting state connectivity patterns hence provides opportunities for identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections will be useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built based on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size; an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module for other mathematical models.

Keywords: Functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity.
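
The hub-based construction described above is essentially a factor-model covariance: every ROI is regressed on a small set of hub time courses, and the model covariance combines the systemic (hub-driven) part with diagonal idiosyncratic noise. A minimal NumPy sketch under that interpretation (not the authors' code):

import numpy as np

def hub_model_correlation(roi_ts, hub_idx):
    """roi_ts: (T, N) matrix of time series for N cortical ROIs.
    hub_idx: indices of the pre-identified hub ROIs.
    Returns the model-implied ROI-by-ROI correlation matrix."""
    T, N = roi_ts.shape
    hubs = roi_ts[:, hub_idx]                      # (T, H) hub regressors
    X = np.column_stack([np.ones(T), hubs])        # intercept + hubs

    # Multivariate regression of every ROI on the hubs (systemic part).
    beta, *_ = np.linalg.lstsq(X, roi_ts, rcond=None)   # (H+1, N)
    B = beta[1:]                                   # loadings on the hubs
    resid = roi_ts - X @ beta                      # idiosyncratic part

    # Variance-covariance framework: systemic term + diagonal noise term.
    sigma_hub = np.cov(hubs, rowvar=False)         # (H, H)
    cov_model = B.T @ sigma_hub @ B + np.diag(resid.var(axis=0, ddof=1))
    d = np.sqrt(np.diag(cov_model))
    return cov_model / np.outer(d, d)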

188 Miniaturized PVC Sensors for Determination of Fe2+, Mn2+ and Zn2+ in Buffalo-Cows’ Cervical Mucus Samples

Authors: Ahmed S. Fayed, Umima M. Mansour

Abstract:

Three polyvinyl chloride (PVC) membrane sensors were developed for the electrochemical evaluation of ferrous, manganese and zinc ions. The sensors were used for assaying metal ions in the cervical mucus (CM) of Egyptian river buffalo-cows (Bubalus bubalis), as their levels vary depending on cyclical hormone variation during the different phases of the estrus cycle. The presented sensors are based on the ionophores β-cyclodextrin (β-CD), hydroxypropyl β-cyclodextrin (HP-β-CD) and sulfocalix-4-arene (SCAL) for sensors 1, 2 and 3 for Fe2+, Mn2+ and Zn2+, respectively. Dioctyl phthalate (DOP) was used as the plasticizer in a polymeric matrix of polyvinylchloride (PVC). To increase the selectivity and sensitivity of the sensors, each sensor was enriched with a suitable complexing agent, which enhanced the sensor’s response. For sensor 1, β-CD was mixed with bathophenanthroline; for sensor 2, porphyrin was incorporated with HP-β-CD; while for sensor 3, oxine was used as the complexing agent with SCAL. Linear responses over the range 10⁻⁷–10⁻² M, with cationic slopes of 53.46, 45.01 and 50.96 over the pH range 4-8, were obtained using coated graphite sensors for ferrous, manganese and zinc ionic solutions, respectively. The three sensors were validated according to the IUPAC guidelines. The results obtained by the presented potentiometric procedures were statistically analyzed and compared with those obtained by an atomic absorption spectrophotometric (AAS) method. No significant differences in either accuracy or precision were observed between the two techniques. Successful application to the determination of the three studied cations in CM, for the purpose of determining the proper time for artificial insemination (AI), was achieved, and the results were compared with those obtained upon analyzing the samples by AAS. Proper detection of estrus and the correct timing of AI are necessary to maximize the production of buffaloes. In this experiment, 30 multiparous buffalo-cows in their second to third lactation and weighing 415-530 kg were synchronized with the OVSynch protocol. Samples were taken at three times around ovulation: on day 8 of the OVSynch protocol, on day 9 (20 h before AI) and on day 10 (1 h before AI). Besides the analysis of trace elements (Fe2+, Mn2+ and Zn2+) in CM using the three sensors, the CM samples and blood samples were analyzed for the three cations and also Cu2+ by AAS. The results were correlated with hormonal analysis of serum samples and with ultrasonography for the purpose of determining the optimum time of AI. The results showed significant differences and a strong correlation between the Zn2+ content of CM during the heat phase and the ovulation time, indicating that this parameter could be used as a tool to decide the optimal time of AI in buffalo-cows.

Keywords: PVC sensors, buffalo-cows, cyclodextrins, atomic absorption spectrophotometry, artificial insemination, OVSynch protocol.

187 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett

Abstract:

Cochlear implantation (CI), which has become a routine procedure over the last decades, provides a sense of sound, through an electronic device, for patients who are severely or profoundly deaf. The optimal success of this implantation depends on the electrode technology and deep insertion techniques. However, the manual insertion procedure may cause mechanical trauma, which can lead to severe destruction of the delicate intracochlear structure. Accordingly, future improvement of cochlear electrode insertion requires a reduction of the excessive force applied during implantation, which causes tissue damage and trauma. This study examined the tool-tissue interaction of a large prototype-scale digit, embedded with a distributive tactile sensor and based upon a cochlear electrode, together with a large prototype-scale cochlea phantom simulating the human cochlea, which could inform the requirements of a small-scale digit. The digit, with distributive tactile sensors embedded on a silicon substrate, was inserted into the cochlea phantom to measure the digit/phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during cochlear electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation and multiple contacts. The tests demonstrated that even devices of such a relatively simple design and low cost have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thus controlling the insertion, through sensing and control of the tip of the implant during insertion. With this approach, the surgeon could minimize the tissue damage and the potential damage to delicate structures within the cochlea caused by the current manual electrode insertion. The approach can also be applied to other minimally invasive surgery applications as well as diagnosis and path navigation procedures.

Keywords: Cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction.

186 Space Telemetry Anomaly Detection Based on Statistical PCA Algorithm

Authors: B. Nassar, W. Hussein, M. Mokhtar

Abstract:

The critical concern of satellite operations is to ensure the health and safety of satellites. The worst case in this respect is probably the loss of a mission, but the more common interruption of satellite functionality can result in compromised mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e. a status or a measurement) to be checked. As a consequence, there is continuous improvement of TM monitoring systems to reduce the time required to respond to changes in a satellite's state of health. A rapid assessment of the current state of the satellite is thus very important for responding to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle the problem above coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, this paper presents a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between these operating conditions. Furthermore, the algorithm provides competent information for prediction as well as adding more insight and physical interpretation to the ADCS operation.

Keywords: Space telemetry monitoring, multivariate analysis, PCA algorithm, space operations.
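
The unsupervised scheme described above (fit a PCA model on telemetry from normal operation, then flag departures from it) is often monitored through the reconstruction error, i.e. the squared prediction error (SPE/Q) statistic. A generic scikit-learn sketch, with an empirical control limit standing in for a formal theoretical one:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_telemetry_monitor(X_normal, n_components=5, quantile=0.99):
    """Fit scaler + PCA on ADCS telemetry from normal operation and
    derive an SPE (squared prediction error) control limit."""
    scaler = StandardScaler().fit(X_normal)
    pca = PCA(n_components=n_components).fit(scaler.transform(X_normal))
    spe = _spe(X_normal, scaler, pca)
    limit = np.quantile(spe, quantile)     # empirical control limit
    return scaler, pca, limit

def _spe(X, scaler, pca):
    # Reconstruction error of each sample in the scaled space.
    Z = scaler.transform(X)
    Z_hat = pca.inverse_transform(pca.transform(Z))
    return np.sum((Z - Z_hat) ** 2, axis=1)

def flag_anomalies(X_new, scaler, pca, limit):
    # Samples whose reconstruction error exceeds the limit are anomalous.
    return _spe(X_new, scaler, pca) > limit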

185 On the Factors Affecting Computing Students’ Awareness of the Latest ICTs

Authors: O. D. Adegbehingbe, S. D. Eyono Obono

Abstract:

The education sector is constantly faced with rapid changes in technology, in terms of ensuring that the curriculum is up to date and in terms of making sure that students are aware of these technological changes. This challenge can be seen as the motivation for this study, which is to examine the factors affecting computing students’ awareness of the latest Information and Communication Technologies (ICTs). The aim of this study is divided into two sub-objectives: the selection of relevant theories and the design of a conceptual model supported by them, and the empirical testing of the designed model. The first objective is achieved by a review of existing literature on technology adoption theories and models. The second objective is achieved through a survey of computing students in the four universities of the KwaZulu-Natal province of South Africa. Data collected from this survey are analyzed with the Statistical Package for the Social Sciences (SPSS) using descriptive statistics, ANOVA and Pearson correlations. The main hypothesis of this study is that there is a relationship between the demographics and prior conditions of computing students and their awareness of general ICT trends and of the Digital Switch-Over (DSO), a new technology which involves the change from analog to digital television broadcasting in order to achieve improved spectrum efficiency. The prior conditions of the computing students considered in this study are students’ perceived exposure to career guidance and students’ perceived curriculum currency. The results of this study confirm that gender, ethnicity, and having taken a high school computing course affect students’ perceived curriculum currency, while high school location affects students’ awareness of the DSO. The results also confirm that there is a relationship between students’ prior conditions and their awareness of general ICT trends and the DSO in particular.

Keywords: Education, Information Technologies, IDT, awareness.

184 Modeling and FOS Feedback Based Control of SISO Intelligent Structures with Embedded Shear Sensors and Actuators

Authors: T. C. Manjunath, B. Bandyopadhyay

Abstract:

Active vibration control is an important problem in structures. The objective of active vibration control is to reduce the vibrations of a system by automatic modification of the system's structural response. In this paper, the modeling and design of a fast output sampling (FOS) feedback controller for a smart flexible beam system embedded with shear sensors and actuators, for the SISO case, using Timoshenko beam theory is proposed. FEM theory, Timoshenko beam theory and state space techniques are used to model the aluminium cantilever beam. For the SISO case, the beam is divided into 5 finite elements and the control actuator is placed at finite element position 1, whereas the sensor is varied from position 2 to 5, i.e., from near the fixed end to the free end. Controllers are designed using the FOS method, and the performance of the designed FOS controller is evaluated for vibration control for 4 SISO models of the same plant. The effect of placing the sensor at different locations on the beam is observed, and the performance of the controller is evaluated for vibration control. Some of the limitations of the Euler-Bernoulli theory, such as the neglect of shear and axial displacement, are addressed here, thus giving rise to an accurate beam model. Embedded shear sensors and actuators have been considered in this paper instead of surface-mounted sensors and actuators for vibration suppression because of their many advantages. In controlling the vibration modes, the first three dominant modes of vibration of the system are considered.

Keywords: Smart structure, Timoshenko beam theory, Fast output sampling feedback control, Finite Element Method, State space model, SISO, Vibration control, LMI
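
For readers unfamiliar with the state space form referred to above, the FEM-discretised beam equations of motion are typically cast as follows (standard notation; the paper's specific mass, damping and stiffness matrices are not reproduced):

\[
M\ddot{q} + C\dot{q} + Kq = B_p u, \qquad
x = \begin{pmatrix} q \\ \dot{q} \end{pmatrix}, \qquad
\dot{x} = Ax + Bu, \quad y = C_s x + Du,
\]
\[
A = \begin{pmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 \\ M^{-1}B_p \end{pmatrix},
\]
where \(q\) collects the nodal displacements and rotations of the Timoshenko beam elements, \(u\) is the actuator input and \(y\) the shear-sensor output.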

183 Acceptance of Health Information Application in Smart National Identity Card (SNIC) Using a New I-P Framework

Authors: Ismail Bile Hassan, Masrah Azrifah Azmi Murad

Abstract:

This study introduces a novel framework of individual-level technology adoption, known as I-P (Individual-Privacy), for the health information application in the Smart National Identity Card. Many countries have introduced smart national identity cards (SNIC) with various applications, such as a health information application, embedded inside them. However, the degree to which citizens accept and use some of the embedded applications in smart national identity cards remains unknown to many governments and application providers. Moreover, the factors of trust, perceived risk, privacy concern and perceived credibility need to be incorporated into more comprehensive models such as the extended Unified Theory of Acceptance and Use of Technology known as UTAUT2, which is currently a widespread and leading theory. This research identifies factors affecting citizens’ behavioural intention to use the health information application embedded in the SNIC and provides a better understanding of the relevant factors that governments and application providers would need to consider in predicting citizens’ acceptance of new technology in the future. We propose a conceptual framework by combining the UTAUT2 and Privacy Calculus Model constructs and also adding perceived credibility as a new variable. The proposed framework may provide assistance to any government planners, decision makers, and policy makers involved in e-government projects. An empirical study may be conducted in the future to provide proof and empirically validate this I-P framework.

Keywords: Unified Theory of Acceptance and Use of Technology (UTAUT) model, UTAUT2 model, Smart National Identity Card (SNIC), Health information application, Privacy Calculus Model (PCM).

182 Attitudes of Gratitude: An Analysis of 30 Cancer Narratives Published by Leading U.S. Cancer Care Centers

Authors: Maria L. McLeod

Abstract:

This study examines the ways in which cancer patient narratives are portrayed and framed on the websites of three leading U.S. cancer care centers – The University of Texas MD Anderson Cancer Center in Houston, Memorial Sloan Kettering Cancer Center in New York, and Seattle Cancer Care Alliance. Thirty patient stories, 10 from each cancer center website blog, were analyzed using qualitative and quantitative textual analysis of unstructured data, documenting common themes and other elements of story structure and content. Patient narratives were coded using grounded theory as the basis for conducting emergent qualitative research. As part of a systematic, inductive approach to collecting and analyzing data, recurrent and unique themes were examined and compared in terms of positive and negative framing, patient agency, and institutional praise. All three of these cancer care centers are teaching hospitals, with university affiliations, that emphasize an evidence-based scientific approach to treatment that utilizes the latest research and cutting-edge techniques and technology. The featured cancer stories suggest positive outcomes based on anecdotal narratives as opposed to the science-based treatment models employed by the cancer centers. An analysis of 30 sample stories found skewed representation of the “cancer experience” that emphasizes positive outcomes while minimizing or excluding more negative realities of cancer diagnosis and treatment. The stories also deemphasize patient agency, instead focusing on deference and gratitude toward the cancer care centers, which are cast in the role of savior.  

Keywords: Cancer framing, cancer narratives, survivor stories, patient narratives.

181 Improved Dynamic Bayesian Networks Applied to Arabic on Line Characters Recognition

Authors: Redouane Tlemsani, Abdelkader Benyettou

Abstract:

This work addresses online Arabic character recognition; the principal motivation is to study Arabic manuscripts with online technology.

The system is a Markovian system, which can be seen as a Dynamic Bayesian Network (DBN). One of the major interests of these systems resides in the complete training of the models (topology and parameters) starting from training data.

Our approach is based on the dynamic Bayesian network formalism. DBN theory is a generalization of Bayesian networks to dynamic processes. Among our objectives is finding better parameters, which represent the links (dependences) between the dynamic network variables.

In pattern recognition applications, the structure is fixed, which obliges us to accept some strong assumptions (for example, independence between some variables). Our application concerns online recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN.

The DBN score and DBN mixed results are respectively 70.24% and 62.50%, which suggests potential for further development; other approaches taking time into account were considered and implemented until a significant recognition rate of 94.79% was obtained.

Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition.

180 Evolutionary of Prostate Cancer Stem Cells in Prostate Duct

Authors: Zachariah Sinkala

Abstract:

A systems-approach model for prostate cancer in the prostate duct, as a sub-system of the organism, is developed. It is accomplished in two steps. First, this research work starts with a nonlinear system of coupled Fokker-Planck equations, which models continuous processes of the system such as the motion of cells. This is then extended to PDEs that include discontinuous processes such as cell mutation, proliferation and death. The discontinuous processes are modeled using Poisson intensity processes. The model incorporates the features of the prostate duct. The spatial coordinate of the system of PDEs runs along the proximal-distal axis, and its parameters depend on features of the prostate duct. The movement of cells is biased towards the distal region, and mutation of prostate cancer cells is localized in the proximal region. Numerical solutions of the full system of equations are provided and exhibit traveling wave front phenomena. This motivates the use of the standard transformation to derive a canonically related system of ODEs for traveling wave solutions. The results obtained show persistence of prostate cancer by showing that the non-negative cone for the traveling wave system is time invariant. It is also proved that the traveling waves have a unique global attractor. Biologically, the global attractor verifies that the evolution of prostate cancer stem cells exhibits avascular tumor growth. These numerical solutions show that altering prostate stem cell movement or mutation of prostate cancer cells leads to an avascular tumor. The paper concludes with comments on the clinical implications of the model.

Keywords: Fokker-Planck equations, global attractor, stem cell.
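
As background for the continuous part of the model, a coupled Fokker-Planck (drift-diffusion) system for the cell densities \(u_i(x,t)\) along the proximal-distal axis \(x\) can be written generically as (a generic form; the paper's specific drift, diffusion and jump-intensity terms are not reproduced):

\[
\frac{\partial u_i}{\partial t} = -\frac{\partial}{\partial x}\bigl(a_i(x)\,u_i\bigr) + \frac{\partial^2}{\partial x^2}\bigl(D_i(x)\,u_i\bigr) + R_i(u_1,\dots,u_n), \qquad i = 1,\dots,n,
\]
where \(a_i\) is the (distally biased) drift, \(D_i\) the diffusion coefficient, and \(R_i\) collects the discontinuous proliferation, mutation and death processes modelled through Poisson intensities.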

179 Progressive AAM Based Robust Face Alignment

Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim

Abstract:

AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. If the initial values are somewhat distant from the global optimum, there is a good possibility that AAM-based face alignment will converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face and then localizes the feature points of the whole face using this information. The proposed progressive AAM-based face alignment algorithm exploits the fact that the feature points of the inner part of the face are less variant and less affected by the background surrounding the face than those of the outer part (such as the chin contour). The proposed algorithm consists of two stages: a modeling and relation derivation stage and a fitting stage. The modeling and relation derivation stage first constructs two AAM models, the inner face AAM model and the whole face AAM model, and then derives the relation matrix between the inner face AAM parameter vector and the whole face AAM parameter vector. In the fitting stage, the proposed algorithm aligns the face progressively through two phases. In the first phase, the algorithm finds the feature parameter vector fitting the inner facial AAM model to a new input face image; in the second phase, it localizes the whole facial feature points of the new input face image based on the whole face AAM model, using the initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Through experiments, it is verified that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.

Keywords: Face Alignment, AAM, facial feature detection, model matching.
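
The relation matrix between the inner-face and whole-face AAM parameter vectors mentioned above can, for example, be estimated from training fits by ordinary least squares. A minimal sketch under that assumption (the AAM fitting itself is assumed to be available elsewhere):

import numpy as np

def learn_relation_matrix(inner_params, whole_params):
    """inner_params: (n_samples, d_inner) AAM parameter vectors of the inner face.
    whole_params: (n_samples, d_whole) parameter vectors of the whole-face AAM.
    Returns an affine map whole ~ inner @ W + b learned by least squares."""
    n = inner_params.shape[0]
    X = np.hstack([inner_params, np.ones((n, 1))])      # append bias term
    W_aug, *_ = np.linalg.lstsq(X, whole_params, rcond=None)
    return W_aug[:-1], W_aug[-1]                        # (W, b)

def predict_whole_params(inner_vec, W, b):
    # Used as the initial whole-face parameter vector in the second phase.
    return inner_vec @ W + b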

178 Social Work Practice to Labour Welfare: A Proposed Model of Field Work Practicum and Role of Social Worker in India

Authors: Naeem Ahmed

Abstract:

Social work is a professional activity based on the approach of “helping people to help themselves” (Stroup). Social work education and practice are both based on a humanitarian philosophy in which social workers try to increase the happiness of society and to reduce its problems. Labour welfare is a specialised field of social work which focuses especially on the welfare of organised and unorganised labour. In India, labour faces numerous problems in both the organised and unorganised sectors because of ignorance, illiteracy, a high rate of unemployment, etc. Most Indian social work institutions offer this specialisation under different names, such as Human Resource Management, Industrial Relations and Personnel Management, Industrial Relations and Labour Welfare, or Industrial Social Work. Field work practice is an integral part of the social work education curriculum in all specialised fields, and in India different field work practice models are followed in different institutions. The main objective of this paper is to prepare a universal field work practicum model in the field of labour welfare. The paper is exploratory in nature; the researcher used personal experience and secondary data (models of field work practice in different institutions such as Aligarh Muslim University, Pondicherry University, Central University of Karnataka, University of Lucknow, MJP Rohilkhand University Bareilly, etc.). The researcher found that there is an immediate need to upgrade the curriculum and field work practice in this particular field, as more than 40 percent of the total population is engaged in either the unorganised or the organised sector (NSSO 2011-12) and is not aware of its rights. In this way, a social worker can play an important role in existing labour welfare facilities by raising this awareness.

Keywords: Fieldwork, labour welfare, organised labour, social work practice, unorganised labour.

177 Controller Design for Euler-Bernoulli Smart Structures Using Robust Decentralized FOS via Reduced Order Modeling

Authors: T.C. Manjunath, B. Bandyopadhyay

Abstract:

This paper features the modeling and design of a Robust Decentralized Fast Output Sampling (RDFOS) feedback control technique for the active vibration control of smart flexible multimodel Euler-Bernoulli cantilever beams for the multivariable (MIMO) case, retaining the first 6 vibratory modes. The beam structure is modeled in state space form using piezoelectric theory, Euler-Bernoulli beam theory and the Finite Element Method (FEM) technique, dividing the beam into 4 finite elements and placing the piezoelectric sensor/actuator at two finite element locations (positions 2 and 4) as collocated pairs, i.e., as surface-mounted sensor/actuator, thus giving rise to a multivariable model of the smart structure plant with two inputs and two outputs. Five such multivariable models are obtained by varying the dimensions (aspect ratios) of the aluminium beam. Using a model order reduction technique, the reduced order model of the higher order system is obtained based on dominant eigenvalue retention and the Davison technique. RDFOS feedback controllers are designed for the above 5 multivariable multimodel plants. The closed loop responses with the RDFOS feedback gain and the magnitudes of the control input are obtained, and the performance of the proposed multimodel smart structure system is evaluated for vibration control.

Keywords: Smart structure, Euler-Bernoulli beam theory, Fast output sampling feedback control, Finite Element Method, State space model, Vibration control, LMI, Model order reduction.

176 Evaluation of the Weight-Based and Fat-Based Indices in Relation to Basal Metabolic Rate-to-Weight Ratio

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate is questioned as a risk factor for weight gain. The relations between basal metabolic rate and body composition have not yet been clarified, and the impact of fat mass on basal metabolic rate is also uncertain. Within this context, indices based upon total body mass as well as total body fat mass are available. In this study, the aim is to investigate the potential clinical utility of these indices in the adult population. 287 individuals, aged 18 to 79 years, were included in the study. Based upon body mass index values, 10 underweight, 88 normal, 88 overweight, 81 obese, and 20 morbidly obese individuals participated. Anthropometric measurements including height (m) and weight (kg) were performed. Body mass index, diagnostic obesity notation model assessment index I, diagnostic obesity notation model assessment index II, and basal metabolic rate-to-weight ratio were calculated. Total body fat mass (kg), fat percent (%), basal metabolic rate, metabolic age, visceral adiposity, fat mass of the upper and lower extremities and trunk, and obesity degree were measured by a TANITA body composition monitor using bioelectrical impedance analysis technology. Statistical evaluations were performed with the statistical package SPSS for Windows Version 16.0. Scatterplots of individual measurements for the correlated parameters were drawn and linear regression lines were displayed. The statistical significance level was accepted as p < 0.05. Strong correlations between body mass index and diagnostic obesity notation model assessment index I as well as diagnostic obesity notation model assessment index II were obtained (p < 0.001). A much stronger correlation was detected between basal metabolic rate and diagnostic obesity notation model assessment index I in comparison with that calculated for basal metabolic rate and body mass index (p < 0.001). Upon consideration of the associations between the basal metabolic rate-to-weight ratio and these three indices, the best association was observed between the basal metabolic rate-to-weight ratio and diagnostic obesity notation model assessment index II. In a similar manner, this index was highly correlated with fat percent (p < 0.001). Independently of the indices, a strong correlation was found between fat percent and the basal metabolic rate-to-weight ratio (p < 0.001). In conclusion, visceral adiposity was much more strongly correlated with metabolic age than with chronological age (p < 0.001), and all three indices were associated with metabolic age, but not with chronological age. Diagnostic obesity notation model assessment index II values were highly correlated with body mass index values throughout all ranges, from underweight to morbid obesity. This index is the best in terms of its association with the basal metabolic rate-to-weight ratio, which can be interpreted as the basal metabolic rate per unit body mass.

Keywords: Basal metabolic rate, body mass index, children, diagnostic obesity notation model assessment index, obesity.

175 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model

Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu

Abstract:

The wide range of industrial applications involving boiling flows promotes the necessity of establishing fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, were introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, the Fractal model, Force balance approach and Mechanistic frequency model are used for predicting the nucleation site density, bubble departure diameter, and bubble departure frequency. The presented wall heat flux partitioning closures were modified to consider the influence of bubbles sliding along the wall before lift-off, which usually happens in flow boiling. The simulation was performed based on the two-fluid model, and the standard k-ω SST model was selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and interfacial area concentration (IAC) are in good agreement with the experimental data. However, the predicted bubble velocity and Sauter mean diameter (SMD) are over-predicted. This over-prediction may be caused by considering only dispersed and spherical bubbles in the simulations. In future work, important physical mechanisms of bubbles, such as merging and shrinking during sliding on the heated wall, will be incorporated into this mechanistic model to enhance its capability for a wider range of flow predictions.

Keywords: CFD, mechanistic model, subcooled boiling flow, two-fluid model.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1228
174 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels

Authors: Foad Hassaninejadafarahani, Scott Ormiston

Abstract:

Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapour (or a gas-vapour mixture) and a downward flow of the liquid film. Understanding this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapour-gas mixture (or pure vapour), which retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapour core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This model is a significant step beyond current capabilities because it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation is based on the finite volume method with a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels. The results include velocity and gas mass fraction profiles, as well as axial variations of film thickness.
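
As a much simpler companion to the full elliptic model described above, the sketch below solves the classical laminar falling-film balance with an opposing interfacial shear for the film thickness; the fluid properties, shear level, and condensate flow rate are assumed values chosen only to illustrate how the upward core flow retards the film.

```python
from scipy.optimize import brentq

# Nusselt-type laminar film balance with an opposing interfacial shear tau_i; this is an
# illustrative hand calculation, NOT the paper's elliptic two-phase CFD model.
# Film mass flow rate per unit width for a film of thickness delta on a vertical wall:
#   Gamma(delta) = rho_l**2 * g * delta**3 / (3*mu_l) - rho_l * tau_i * delta**2 / (2*mu_l)
# The second term is the retardation by shear from the upward gas-vapour core.

g     = 9.81     # m/s^2
rho_l = 998.0    # liquid density, kg/m^3 (water-like, assumed)
mu_l  = 1.0e-3   # liquid viscosity, Pa s (assumed)
tau_i = 0.5      # interfacial shear from the upward core flow, Pa (assumed)
Gamma = 0.02     # condensate mass flow rate per unit width, kg/(m s) (assumed)

def flow_rate(delta):
    return rho_l**2 * g * delta**3 / (3.0 * mu_l) - rho_l * tau_i * delta**2 / (2.0 * mu_l)

# Solve flow_rate(delta) = Gamma for the film thickness (bracket chosen generously).
delta = brentq(lambda d: flow_rate(d) - Gamma, 1e-6, 5e-3)
print(f"film thickness = {delta * 1e3:.3f} mm")

# With tau_i = 0 this reduces to the classical Nusselt result
# delta = (3 * mu_l * Gamma / (rho_l**2 * g)) ** (1/3).
```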

Keywords: Reflux Condensation, Heat Transfer, Channel, Laminar Flow

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1824
173 LAYMOD: A Layered and Modular Platform for CAx Collaboration Management and Supporting Product Data Integration Based on the STEP Standard

Authors: Omid F. Valilai, Mahmoud Houshmand

Abstract:

Nowadays, companies strive to survive in a competitive global environment. To speed up product development and modification, adopting a collaborative product development approach is suggested. However, despite advances in information technology, many CAx systems still work separately and locally. Collaborative design and manufacture require a product information model that supports the related CAx product data models. Many solutions have been proposed to address this problem, the most successful of which is adopting the STEP standard as the product data model for a collaborative CAx platform. However, several issues slow down the implementation of the STEP standard for collaborative data exchange, management, and integration and should be considered: the evolution of the STEP Application Protocols (APs) over time, the large number of APs and conformance classes (CCs), the high costs of implementation, the costly conversion of legacy CAx files to the STEP neutral file format, and a general lack of STEP knowledge. In this paper, the requirements for a successful collaborative CAx system are discussed. The capability of the STEP standard for product data integration and its shortcomings, as well as the dominant platforms supporting CAx collaboration management and product data integration, are reviewed. Finally, a platform named LAYMOD is proposed to fulfil the requirements of a collaborative CAx environment and to integrate product data. The platform is layered to enable global collaboration among different CAx software packages and developers. It also adopts the STEP modular architecture and XML data structures to enable collaboration between CAx software packages and to overcome the limitations of the STEP standard. The architecture and procedures of the LAYMOD platform for managing collaboration and avoiding conflicts in product data integration are introduced.
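
Purely as an illustration of exchanging product data through XML structures, the sketch below serializes and parses a small product record with Python's standard library; the element names and the helper part_to_xml are invented here and do not correspond to the LAYMOD schema or to any STEP application module.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML wrapper for a product record, showing the kind of neutral, modular data
# structure a layered collaboration platform could pass between CAx packages. The tag names
# below are invented for illustration only.
def part_to_xml(part_id, name, material, features):
    root = ET.Element("product", attrib={"id": part_id})
    ET.SubElement(root, "name").text = name
    ET.SubElement(root, "material").text = material
    feats = ET.SubElement(root, "features")
    for f in features:
        ET.SubElement(feats, "feature", attrib={"type": f["type"]}).text = str(f["value"])
    return ET.tostring(root, encoding="unicode")

xml_doc = part_to_xml("P-001", "Bracket", "AISI 304",
                      [{"type": "hole_diameter_mm", "value": 8.5},
                       {"type": "thickness_mm", "value": 3.0}])
print(xml_doc)

# A receiving CAx adapter would parse the same document back into its native model:
parsed = ET.fromstring(xml_doc)
print(parsed.attrib["id"], parsed.findtext("material"))
```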

Keywords: CAx, Collaboration management, STEP application modules, STEP standard, XML data structures

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2188
172 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Authors: Suman Senapati, Goutam Saha

Abstract:

Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions: perturbations cause a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales. Simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent; in reality, wavelet coefficients exhibit significant inter-scale dependency. This paper exploits this inter-scale dependency through a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived using Maximum a Posteriori (MAP) estimation. Based on these, a framework is proposed to denoise speech signals for automatic speaker identification. The robustness of the proposed framework is tested for text-independent speaker identification on 100 speakers of the POLYCOST database and 100 speakers of the YOHO database in three different noise environments. Experimental results show that the proposed estimator yields a greater improvement in identification accuracy than other estimators on a popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.
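
For readers unfamiliar with inter-scale shrinkage, the sketch below applies the well-known bivariate MAP shrinkage rule of Sendur and Selesnick, which couples a coefficient with its coarser-scale parent; it belongs to the same family of joint estimators but is not the CS-PDF/SIRP estimator derived in the paper, and the synthetic signal, noise level, and variance estimates are purely illustrative.

```python
import numpy as np

# Standard bivariate MAP shrinkage (Sendur & Selesnick style) exploiting the dependency
# between a wavelet coefficient y1 and its coarser-scale parent y2. This is a related,
# classical estimator of the same inter-scale family -- NOT the paper's CS-PDF/SIRP
# joint estimator in the Log Gabor Wavelet domain.
def bivariate_shrink(y1, y2, sigma_n, sigma_s):
    """Shrink child coefficients y1 given parent coefficients y2.

    y1, y2  : arrays of noisy wavelet coefficients (child and parent, same shape)
    sigma_n : noise standard deviation
    sigma_s : marginal signal standard deviation (estimated locally in practice)
    """
    r = np.sqrt(y1**2 + y2**2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma_s, 0.0) / np.maximum(r, 1e-12)
    return gain * y1

# Tiny usage example with synthetic coefficients
rng = np.random.default_rng(1)
clean = rng.laplace(scale=1.0, size=1000)
noisy_child = clean + rng.normal(scale=0.5, size=1000)
noisy_parent = clean + rng.normal(scale=0.5, size=1000)   # stand-in for the coarser scale
denoised = bivariate_shrink(noisy_child, noisy_parent, sigma_n=0.5, sigma_s=np.std(clean))
print("MSE before:", np.mean((noisy_child - clean)**2),
      "after:", np.mean((denoised - clean)**2))
```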

Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1619
171 Selection of Strategic Suppliers for Partnership: A Model with Two Stages Approach

Authors: Safak Isik, Ozalp Vayvay

Abstract:

Strategic partnerships with suppliers play a vital role in a long-term, value-based supply chain. Such strategic collaboration remains one of the top priorities of many business organizations for creating additional value: benefiting mainly from suppliers’ specialization, capacity, and innovative power, securing supply, and better managing costs and quality. However, many organizations encounter difficulties in initiating, developing, and managing these partnerships, and many attempts end in failure. One of the reasons for such failures is the incompatibility of the partners, in other words wrong supplier selection, which underlines the significance of the selection process as the beginning stage. An effective process for selecting strategic suppliers is critical to the success of the partnership. Although there are several supplier selection studies in the literature, only a few of them relate to strategic supplier selection for long-term partnership. The purpose of this study is to propose a conceptual model for the selection of strategic partnership suppliers. A two-stage approach is used in the proposed model, incorporating segmentation first and selection second. In the first stage, considering that not all suppliers are strategically equal, Kraljic’s purchasing portfolio matrix can be used for segmentation instead of working from a long list of potential suppliers. This supplier segmentation is the process of categorizing suppliers based on a defined set of criteria in order to identify supplier types and determine potential suppliers for strategic partnership. In the second stage, a comprehensive evaluation and selection can be performed on the pool of potential suppliers defined in the first stage, considering various tangible and intangible criteria, to finally define the strategic suppliers. Since a long-term relationship with strategic suppliers is anticipated, the criteria should consider both the current and the future status of the supplier. Based on an extensive literature review, strategic, operational, and organizational criteria have been determined and elaborated. The result of the selection can also be used to identify suppliers who are not yet ready for a partnership but could be developed toward one. Since the model is based on multiple criteria for both stages, it provides a framework for the further use of Multi-Criteria Decision Making (MCDM) techniques. The model may also be applied to a wide range of industries and can incorporate managerial considerations in business organizations.
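
A minimal sketch of the two-stage idea follows: Kraljic-style segmentation by profit impact and supply risk, then a weighted score over partnership criteria as a simple stand-in for a fuller MCDM evaluation. The supplier data, criteria, weights, and thresholds are invented for illustration and are not taken from the paper.

```python
# Stage 1: Kraljic-style segmentation; Stage 2: weighted scoring of the "strategic" quadrant.
# All names, scores, weights, and thresholds below are hypothetical.
suppliers = {
    "Supplier A": {"profit_impact": 0.8, "supply_risk": 0.7,
                   "criteria": {"innovation": 0.9, "capacity": 0.7, "quality": 0.8}},
    "Supplier B": {"profit_impact": 0.9, "supply_risk": 0.2,
                   "criteria": {"innovation": 0.5, "capacity": 0.9, "quality": 0.7}},
    "Supplier C": {"profit_impact": 0.3, "supply_risk": 0.8,
                   "criteria": {"innovation": 0.4, "capacity": 0.5, "quality": 0.6}},
    "Supplier D": {"profit_impact": 0.7, "supply_risk": 0.9,
                   "criteria": {"innovation": 0.8, "capacity": 0.6, "quality": 0.9}},
}

def kraljic_quadrant(profit_impact, supply_risk, threshold=0.5):
    """Classify a supplier into one of the four purchasing portfolio quadrants."""
    if profit_impact >= threshold and supply_risk >= threshold:
        return "strategic"
    if profit_impact >= threshold:
        return "leverage"
    if supply_risk >= threshold:
        return "bottleneck"
    return "non-critical"

# Stage 1: keep only suppliers falling in the strategic quadrant
candidates = {name: s for name, s in suppliers.items()
              if kraljic_quadrant(s["profit_impact"], s["supply_risk"]) == "strategic"}

# Stage 2: rank the candidates by a weighted sum over (illustrative) partnership criteria
weights = {"innovation": 0.4, "capacity": 0.3, "quality": 0.3}
ranking = sorted(candidates,
                 key=lambda n: sum(w * candidates[n]["criteria"][c] for c, w in weights.items()),
                 reverse=True)
print("Strategic partnership shortlist:", ranking)
```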

Keywords: Kraljic’s matrix, purchasing portfolio, strategic supplier selection, supplier collaboration, supplier partnership, supplier segmentation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1125