Search results for: decreasing run time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6789


4479 Event Information Extraction System (EIEE): FSM vs HMM

Authors: Shaukat Wasi, Zubair A. Shaikh, Sajid Qasmi, Hussain Sachwani, Rehman Lalani, Aamir Chagani

Abstract:

Automatic extraction of event information from social text streams (emails, social network sites, blogs, etc.) is a vital requirement for many applications, such as event planning and management systems and security applications. The key information components needed from event-related text are the event title, location, participants, date and time. Emails have very distinct characteristics compared with other social text streams in terms of layout, format and conversation style, and are the most commonly used communication channel for broadcasting and planning events. Therefore, we have chosen emails as our dataset. In our work, we have employed two statistical NLP methods, Finite State Machines (FSM) and the Hidden Markov Model (HMM), for the extraction of event-related contextual information. An application has been developed that provides a comparison between the two methods on the event extraction task. It comprises two modules, one for each method, and works for both bulk and direct user input. The results are evaluated using precision, recall and F-score. Experiments show that both methods produce high performance and accuracy; however, HMM performed better on title extraction, while FSM proved better for venue, date and time.
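
As a rough illustration of the evaluation described above, the sketch below computes precision, recall and F-score for one extracted field. It is a minimal Python sketch; the field name and the toy extracted/gold values are invented for illustration and are not the paper's data.

    # Minimal sketch: precision, recall and F-score for one extracted field
    # (e.g. "location"). The toy data are illustrative only.
    def prf(extracted, gold):
        extracted, gold = set(extracted), set(gold)
        tp = len(extracted & gold)                      # correctly extracted items
        precision = tp / len(extracted) if extracted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f_score = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
        return precision, recall, f_score

    # Toy example: locations extracted from a batch of emails vs. the gold labels.
    print(prf(extracted=["Room 101", "Main Hall"], gold=["Main Hall", "Cafeteria"]))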

Keywords: Emails, Event Extraction, Event Detection, Finite state machines, Hidden Markov Model.

4478 Ranking of Inventory Policies Using Distance Based Approach Method

Authors: Gupta Amit, Kumar Ramesh, Tewari P. C.

Abstract:

Globalization is putting enormous pressure on business organizations, especially manufacturing ones, to rethink the supply chain in innovative ways. Inventory consumes a major portion of total sales revenue, and effective and efficient inventory management plays a vital role in the successful functioning of any organization. Selection of an inventory policy is one of the important purchasing activities. This paper focuses on the selection and ranking of alternative inventory policies. A deterministic quantitative model based on the Distance Based Approach (DBA) method has been developed for the evaluation and ranking of inventory policies; we have employed this concept for the first time for this type of selection problem. Four inventory policies are considered: economic order quantity (EOQ), just in time (JIT), vendor managed inventory (VMI) and a monthly policy. Improper selection could affect a company's competitiveness in terms of the productivity of its facilities and the quality of its products. The ranking of inventory policies is a multi-criteria problem: the selection criteria must first be identified, and the information then processed with reference to the relative importance of the attributes being compared. Criteria values for each inventory policy can be obtained analytically, by using a simulation technique, or as subjective linguistic judgments defined by fuzzy sets. A methodology is developed and applied to rank the inventory policies.
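
The paper's exact DBA formulation is not reproduced here, but the following minimal Python sketch illustrates the general idea of a distance-based ranking: normalise a decision matrix, weight it, and rank alternatives by their distance from an ideal point. The criteria values and weights are illustrative assumptions, not the authors' data.

    import numpy as np

    # Hypothetical decision matrix: rows = inventory policies, columns = criteria.
    # Values and weights are illustrative, not the authors' data.
    policies = ["EOQ", "JIT", "VMI", "Monthly"]
    X = np.array([[0.7, 0.6, 0.8],
                  [0.9, 0.5, 0.6],
                  [0.8, 0.8, 0.7],
                  [0.5, 0.4, 0.9]])          # higher = better for every criterion here
    w = np.array([0.5, 0.3, 0.2])            # relative importance of the criteria

    Xn = X / np.linalg.norm(X, axis=0)       # vector-normalise each criterion
    ideal = Xn.max(axis=0)                   # best attainable value per criterion
    dist = np.sqrt(((w * (Xn - ideal)) ** 2).sum(axis=1))   # weighted distance to ideal

    for name, d in sorted(zip(policies, dist), key=lambda p: p[1]):
        print(f"{name}: distance {d:.3f}")   # smallest distance = best-ranked policy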

Keywords: Inventory Policy, Ranking, DBA, Selection criteria.

4477 In vitro Effects of Amygdalin on the Functional Competence of Rabbit Spermatozoa

Authors: Marek Halenár, Eva Tvrdá, Tomáš Slanina, Ľubomír Ondruška, Eduard Kolesár, Peter Massányi, Adriana Kolesárová

Abstract:

The present in vitro study was designed to reveal whether amygdalin (AMG) is able to cause changes to the motility, viability and mitochondrial activity of rabbit spermatozoa. New Zealand White rabbits (n = 10) aged four months were used in the study. Semen samples were collected from each animal and used for the in vitro incubation. The samples were divided into five equal parts and diluted with saline supplemented with 0, 0.5, 1, 2.5 and 5 mg/mL AMG. At 0 h, 3 h and 5 h, spermatozoa motion parameters were assessed using the SpermVision™ computer-aided sperm analysis (CASA) system, cell viability was examined with the metabolic activity (MTT) assay, and the eosin-nigrosin staining technique was used to evaluate the viability of rabbit spermatozoa. All AMG concentrations exhibited stimulating effects on the spermatozoa activity, as shown by a significant preservation of the motility (P<0.05 with respect to 0.5 mg/mL and 1 mg/mL AMG; Time 5 h) and mitochondrial activity (P<0.05 in the case of 0.5 mg/mL AMG; P<0.01 in the case of 1 mg/mL AMG; P<0.001 with respect to 2.5 mg/mL and 5 mg/mL AMG; Time 5 h). None of the AMG doses supplemented had any significant impact on the spermatozoa viability. In conclusion, the data revealed that short-term co-incubation of spermatozoa with AMG may result in a higher preservation of the sperm structural integrity and functional activity.

Keywords: Amygdalin, CASA, mitochondrial activity, motility, rabbits, spermatozoa, viability.

4476 A Model Driven Based Method for Scheduling Analysis and HW/SW Partitioning

Authors: Yessine Hadj Kacem, Adel Mahfoudhi, Hedi Tmar, Mohamed Abid

Abstract:

Unified Modeling Language (UML) extensions for real-time embedded systems (RTES) co-design are attracting growing interest from a great number of industrial and research communities. The extension mechanism is provided by UML profiles for RTES and aims at providing an easily understood method of system design for non-experts. On the other hand, one of the key items of co-design methods is hardware/software partitioning and task scheduling: it is mandatory to define where and when tasks are implemented and run. Unfortunately, these main goals of co-design are not covered by the usual practice of UML profiles, so there is a need to map the models used onto an execution platform for both schedulability testing and HW/SW partitioning. In the present work, schedulability testing and design space exploration are performed at an early stage. The proposed approach adopts Model Driven Engineering (MDE). It starts from a UML specification annotated with the recent profile for the Modeling and Analysis of Real-Time Embedded systems (MARTE). Following a refinement strategy, transformation rules make it possible to find a feasible schedule that satisfies the timing constraints and to define where tasks will be implemented. The overall approach is demonstrated on the design of a football-player robot application.
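
The specific schedulability test targeted by the transformation rules is not reproduced in the abstract. As one example of the kind of feasibility check a model transformation could feed, the sketch below applies the classical rate-monotonic utilisation bound to a hypothetical task set; the task parameters are invented.

    # Sketch of a classic schedulability check (Liu & Layland rate-monotonic bound),
    # shown only as an example of a feasibility test; task parameters are hypothetical.
    def rm_schedulable(tasks):
        """tasks: list of (worst_case_execution_time, period) tuples."""
        n = len(tasks)
        utilisation = sum(c / t for c, t in tasks)
        bound = n * (2 ** (1 / n) - 1)       # sufficient (not necessary) condition
        return utilisation, bound, utilisation <= bound

    u, b, ok = rm_schedulable([(2, 10), (3, 15), (5, 40)])
    print(f"U = {u:.3f}, bound = {b:.3f}, schedulable (sufficient test): {ok}")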

Keywords: MDE, UML profile, scheduling analysis, HW/SW partitioning.

4475 An Improved Algorithm for Channel Estimations of OFDM System based Pilot Signal

Authors: Ahmed N. H. Alnuaimy, Mahamod Ismail, Mohd. A. M. Ali, Kasmiran Jumari, Ayman A. El-Saleh

Abstract:

This paper presents a new algorithm for the channel estimation of OFDM systems based on a pilot signal, for the new generation of high data rate communication systems. In orthogonal frequency division multiplexing (OFDM) systems over fast-varying fading channels, channel estimation and tracking are generally carried out by transmitting known pilot symbols at given positions of the frequency-time grid. In this paper, we derive an improved algorithm based on the calculation of the mean and the variance of the adjacent pilot signals for a specific distribution of the pilots in the OFDM frequency-time grid, and then calculate the entire set of unknown channel coefficients from the equations for the mean and the variance. Simulation results show that the performance of the OFDM system increases as the channel length increases, the accuracy of the estimated channel is improved using this low-complexity algorithm, and the number of pilot signals that need to be inserted in the OFDM signal is reduced, which leads to an increase in throughput compared with other pilot distributions such as comb-type and block-type channel estimation.
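
The mean/variance-based algorithm itself is not detailed in the abstract. For orientation, the following minimal Python sketch shows the comb-type baseline the authors compare against: least-squares channel estimates at the pilot subcarriers followed by interpolation across the grid. The grid size, pilot spacing and toy channel are illustrative assumptions.

    import numpy as np

    # Minimal comb-type pilot baseline: least-squares channel estimates at the pilot
    # subcarriers, linear interpolation across the rest of the grid.
    # Sizes and the toy channel are illustrative, not the paper's setup.
    n_sc, pilot_step = 64, 8
    pilot_idx = np.arange(0, n_sc, pilot_step)

    h_true = np.fft.fft([1.0, 0.5, 0.25], n_sc)     # frequency response of a 3-tap channel
    tx = np.ones(n_sc, dtype=complex)               # pilots (and data) set to 1 for clarity
    rx = h_true * tx                                # noiseless received symbols

    h_pilots = rx[pilot_idx] / tx[pilot_idx]        # LS estimate at pilot positions
    k = np.arange(n_sc)
    h_hat = (np.interp(k, pilot_idx, h_pilots.real)
             + 1j * np.interp(k, pilot_idx, h_pilots.imag))

    print("mean absolute estimation error:", np.mean(np.abs(h_hat - h_true)))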

Keywords: Channel estimation, orthogonal frequency division multiplexing (OFDM), comb type channel estimation, block type channel estimation.

4474 Fingerprint on Ballistic after Shooting

Authors: Narong Kulnides

Abstract:

This research involved fingerprints on ballistics after shooting. The two objectives of the research were as follows: (1) to study the duration of the existence of latent fingerprints on .38, .45, 9 mm and .223 cartridge cases after shooting, and (2) to compare the effectiveness of the detection of latent fingerprints by Black Powder, Super Glue, Perma Blue and Gun Bluing. The appearance of latent fingerprints was studied on .38, .45, 9 mm and .223 cartridge cases before and after shooting with Black Powder, Super Glue, Perma Blue and Gun Bluing. The detection times were 3 minutes and 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, 78 and 84 hours, respectively. As a result of the study, it can be concluded that

  1. Before shooting, latent fingerprints on .38, .45, 9 mm and .223 cartridge cases could be detected with Black Powder, Super Glue, Perma Blue and Gun Bluing at all detection times.
  2. After shooting, latent fingerprints on .38, .45, 9 mm and .223 cartridge cases could not be detected with Black Powder or Super Glue. Latent fingerprints on .38, .45 and 9 mm cartridge cases were detected with Perma Blue and Gun Bluing 100% of the time, while latent fingerprints on .223 cartridge cases were detected with Perma Blue and Gun Bluing 40% and 46.67% of the time, respectively.

Keywords: Ballistic, Fingerprint, Shooting.

4473 Technology for Enhancing the Learning and Teaching Experience in Higher Education

Authors: Sara M. Ismael, Ali H. Al-Badi

Abstract:

The rapid development and growth of technology has changed the way educators and learners obtain information. Technology has created a new world of collaboration and communication among people, and incorporating new technology into the teaching process can enhance learning outcomes. Billions of individuals across the world are now connected, cooperating and contributing their knowledge and intelligence. Time is no longer wasted waiting until the teacher is ready to share information, as learners can go online and get it immediately.

The objectives of this paper are to understand the reasons why changes in teaching and learning methods are necessary, to find ways of improving them, and to investigate the challenges that present themselves in the adoption of new ICT tools in higher education institutes.

To achieve these objectives, two primary research methods were used: questionnaires distributed among students at higher education institutes, and multiple interviews with faculty members (teachers) from different colleges and universities, conducted to find out why teaching and learning methodology should change.

The findings show that both learners and educators agree that educational technology plays a significant role in enhancing instructors’ teaching style and students’ overall learning experience; however, time constraints, privacy issues, and not being provided with enough up-to-date technology do create some challenges.

Keywords: E-books, educational technology, educators, e-learning, learners, social media, Web 2.0, LMS.

4472 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
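
As a small illustration of the integrity-check step mentioned above (not the MADIK framework or its agents), the sketch below hashes a disk image in chunks with Python's standard hashlib and compares the digest against a known-good value. The file name and the expected digest are hypothetical placeholders.

    import hashlib

    # Minimal integrity-check sketch: hash an image file in chunks and compare
    # against a known-good digest. Path and expected digest are placeholders.
    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "0000000000000000000000000000000000000000000000000000000000000000"
    digest = sha256_of("LoneWolf.E01")
    print("integrity OK" if digest == expected else "integrity FAILED", digest)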

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.

4471 Monte Carlo and Biophysics Analysis in a Criminal Trial

Authors: Luca Indovina, Carmela Coppola, Carlo Altucci, Riccardo Barberi, Rocco Romano

Abstract:

In this paper, a real court case held in Italy at the Court of Nola is considered, in which a correct physical description, conducted with both a Monte Carlo and a biophysical analysis, would have been sufficient to arrive at conclusions confirmed by documentary evidence. This is an example of how forensic physics can be useful in confirming documentary evidence in order to reach hardly questionable conclusions. This was a libel trial in which the defendant, Mr. DS (Defendant for Slander), had falsely accused one of his neighbors, Mr. OP (Offended Person), of having caused him some damage. The damage would have been caused by a piece of external plaster that would have detached from the neighbor's property and hit Mr. DS while he was in his garden, well over a meter away from the facade of the building from which the plaster piece would have detached. In the trial, Mr. DS claimed to have suffered a scratch on his forehead, but he never showed the plaster that had hit him, nor was he able to tell from where the plaster would have arrived. Furthermore, Mr. DS presented a medical certificate with a diagnosis of contusion of the cerebral cortex. On the contrary, the images from Mr. OP's security cameras do not show any movement in the garden of Mr. DS in a long interval of time (about 2 hours) around the time of the alleged accident, nor do they show any people entering or leaving the house of Mr. DS in the same interval of time. Biophysical analysis shows that both the diagnosis of the medical certificate and the wound declared by the defendant, already in conflict with each other, are not compatible with the fall of external plaster pieces too small to be found. The wind was at level 1 of the Beaufort scale, that is, unable to raise even dust (level 4 of the Beaufort scale). Therefore, the motion of the plaster pieces can be described as projectile motion, whereas collisions with the building cornice can be treated using Newton's law of restitution. Numerous Monte Carlo simulations show that the pieces of plaster would not have been able to reach even the garden of Mr. DS, let alone a distance of over 1.30 meters. The results agree with the documentary evidence (the images from Mr. OP's security cameras) that Mr. DS could not have been hit by plaster pieces coming from Mr. OP's property.
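
A minimal Python sketch of the projectile-motion argument is given below: fragments drop from the cornice with a small random horizontal speed, and the simulation records how far they land. The fall heights, launch speeds and their distributions are illustrative assumptions, not the parameters used in the expert report; only the 1.30 m threshold is taken from the narrative above.

    import numpy as np

    # Monte Carlo sketch of the projectile-motion argument. Heights and speeds
    # are illustrative assumptions, not the trial's parameters.
    rng = np.random.default_rng(0)
    g, n = 9.81, 100_000
    h0 = rng.uniform(5.0, 8.0, n)            # assumed fall height of the fragment [m]
    v0 = rng.uniform(0.0, 0.5, n)            # assumed horizontal speed at detachment [m/s]

    t_fall = np.sqrt(2 * h0 / g)             # time to reach the ground
    x_land = v0 * t_fall                     # horizontal range of the fragment

    print("fraction landing farther than 1.30 m:", np.mean(x_land > 1.30))
    print("maximum simulated range: %.2f m" % x_land.max())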

Keywords: Biophysical analysis, Monte Carlo simulations, Newton’s law of restitution, projectile motion.

4470 Semantic Modeling of Management Information: Enabling Automatic Reasoning on DMTF-CIM

Authors: Fernando Alonso, Rafael Fernandez, Sonia Frutos, Javier Soriano

Abstract:

CIM is the standard formalism for modeling management information developed by the Distributed Management Task Force (DMTF) in the context of its WBEM proposal, designed to provide a conceptual view of the managed environment. In this paper, we propose the inclusion of formal knowledge representation techniques, based on Description Logics (DLs) and the Web Ontology Language (OWL), in CIM-based conceptual modeling, and then we examine the benefits of such a decision. The proposal is specified as a CIM metamodel level mapping to a highly expressive subset of DLs capable of capturing all the semantics of the models. The paper shows how the proposed mapping can be used for automatic reasoning about the management information models, as a design aid, by means of new-generation CASE tools, thanks to the use of state-of-the-art automatic reasoning systems that support the proposed logic and use algorithms that are sound and complete with respect to the semantics. Such a CASE tool framework has been developed by the authors and its architecture is also introduced. The proposed formalization is not only useful at design time, but also at run time through the use of rational autonomous agents, in response to a need recently recognized by the DMTF.

Keywords: CIM, Knowledge-based Information Models, Ontology Languages, OWL, Description Logics, Integrated Network Management, Intelligent Agents, Automatic Reasoning Techniques.

4469 Development and Characterization of Wheat Bread with Lupin Flour

Authors: Paula M. R. Correia, Marta Gonzaga, Luis M. Batista, Luísa Beirão-Costa, Raquel F. P. Guiné

Abstract:

The purpose of the present work was to develop an innovative food product with good textural and sensorial characteristics. The product, a new type of bread, was prepared with wheat (90%) and lupin (10%) flours, without the addition of any preservatives. Several experiments were also done to find the most appropriate proportion of lupin flour. The optimized product was characterized considering its rheological, physical-chemical and sensorial properties. The water absorption of wheat flour with 10% lupin was higher than that of the normal wheat flours, and Wheat Ceres flour presented the lowest value, with a lower dough development time and a high stability time. The breads presented low moisture but considerable water activity. The density of the bread decreased with the introduction of lupin flour. The breads were quite white, and during storage the colour parameters decreased. The lupin flour clearly increased the number of alveoli, but the total area increased significantly only for the Wheat Cerealis bread. The addition of lupin flour increased the hardness and chewiness of the breads, but the elasticity did not vary significantly. Lupin bread was sensorially similar to wheat bread produced with WCerealis flour, and the main differences were the crust rugosity, colour and alveoli characteristics.

Keywords: Lupin flour, physical-chemical properties, sensorial analysis, wheat flour.

4468 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) establishing the interior camera orientation parameters; (3) determining the relative orientation parameters for each video frame with respect to each other; (4) finding the absolute orientation parameters, using a self-calibrating bundle adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a mosaic image of the test area, which is then merged with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to evaluate our method. Video and telemetry data were collected for about fifteen minutes and processed using the four-step ortho-rectification procedure. The results demonstrate that geometric measurements of the control field from ortho-images are more accurate than those from the original perspective images when used to pinpoint the exact location of targets on the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 metres.
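
Step (1), decompiling the video stream into individual frames, can be sketched with OpenCV as below. This is only a minimal illustration of that step; the file name is a placeholder, and the paper's pipeline also handles the accompanying telemetry stream.

    import cv2

    # Minimal sketch of step (1): decompile the UAV video stream into individual
    # frames for later ortho-rectification. The file name is a placeholder.
    cap = cv2.VideoCapture("uav_flight.mp4")
    frame_id = 0
    while True:
        ok, frame = cap.read()
        if not ok:                       # end of stream
            break
        cv2.imwrite(f"frame_{frame_id:05d}.png", frame)
        frame_id += 1
    cap.release()
    print(f"extracted {frame_id} frames")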

Keywords: Geo-referencing, ortho-rectification, video frame, self-calibration, UAV, target tracking.

4467 Mixed Model Assembly Line Sequencing In Make to Order System with Available to Promise Consideration

Authors: N. Manavizadeh, A. Dehghani, M. Rabbani

Abstract:

Mixed model assembly lines (MMAL) are a type of production line where a variety of product models, similar in product characteristics, are assembled. The effective design of these lines requires that a schedule for assembling the different products be determined. In this paper, we fit the sequencing problem to the main characteristics of a make-to-order (MTO) environment. The problem solved in this paper is a multiple-objective sequencing problem in mixed model assembly lines, solved with the Weighted Sum Method (WSM) implemented in GAMS for small problems and with an effective GA for large-scale problems, because the problem is NP-hard and finding the optimum solution for large instances is extremely time-consuming. Three practically important objectives are minimized: total utility work, variation in the production rate, and earliness and tardiness costs, the latter taking into account the priority of each customer and their different due dates, which is a realistic situation in mixed model assembly lines; it is the first time different attributes are used to prioritize customers, which helps the company reduce the cost of earliness and tardiness. This mechanism is a way to apply advanced available-to-promise (ATP) in mixed model assembly line sequencing, which is the main contribution of this paper.
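
A minimal Python sketch of the weighted sum method applied to candidate sequences is shown below. The three objective functions are placeholders standing in for total utility work, production rate variation and earliness/tardiness cost; the weights and toy sequences are illustrative assumptions, not the paper's formulation.

    # Minimal weighted-sum sketch: combine the three objectives named above into a
    # single score for a candidate sequence. All functions below are placeholders.
    def utility_work(seq):        return sum(1 for m in seq if m == "B")       # dummy
    def rate_variation(seq):      return abs(seq.count("A") - seq.count("B"))  # dummy
    def earliness_tardiness(seq): return 0.5 * len(seq)                        # dummy

    def weighted_sum(seq, w=(0.4, 0.3, 0.3)):
        objs = (utility_work(seq), rate_variation(seq), earliness_tardiness(seq))
        return sum(wi * oi for wi, oi in zip(w, objs))

    candidates = [["A", "B", "A", "B"], ["A", "A", "B", "B"]]
    best = min(candidates, key=weighted_sum)   # lower combined score = better sequence
    print("best sequence by weighted sum:", best)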

Keywords: Available to promise, Earliness & Tardiness, GA, Mixed-Model assembly line Sequencing.

4466 Socio-Technical Systems: Transforming Theory into Practice

Authors: L. Ngowi, N. H. Mvungi

Abstract:

This paper critically examines the evolution of socio-technical systems theory, its practices, and the challenges it faces in system design and development. It examines concepts put forward by researchers focusing on the application of the theory in software engineering. Various methods have been developed that use socio-technical concepts based on systems engineering, without remarkable success. The main constraint is the large amount of data and the inefficient techniques used in applying the concepts in systems engineering for developing time-bound systems within a limited/controlled budget. This paper critically examines each of the methods, highlights bottlenecks and suggests the way forward. Since socio-technical systems theory only explains what to do, but not how to do it, engineers are not using the concept to save time and costs and to reduce the risks associated with new frameworks. Hence, a new framework, which can be considered a practical approach, is proposed that borrows concepts from the soft systems method, agile systems development and object-oriented analysis and design to bridge the gap between theory and practice. The approach will enable the development of systems using socio-technical systems theory and encourage system engineers and software developers to use it in building worthwhile information systems that avoid fragilities and hostilities in the work environment.

Keywords: Socio-technical systems, human centered design, software engineering, cognitive engineering, soft systems, systems engineering.

4465 Differences in Goal Scoring and Passing Sequences between Winning and Losing Team in UEFA-EURO Championship 2012

Authors: Muhamad S., Norasrudin S., Rahmat A.

Abstract:

The objective of the current study is to investigate the differences between winning and losing teams in terms of goal scoring and passing sequences. A total of 31 matches from UEFA-EURO 2012 were analyzed; 5 matches were excluded from the analysis because they ended in a draw. Two groups of variables were used in the study: (i) goal scoring variables and (ii) passing sequence variables. Data were analyzed using the Wilcoxon matched-pairs rank test with the significance level set at p < 0.05. The study found that goals scored were significantly higher for the winning teams in the 1st half (Z=-3.416, p=.001) and 2nd half (Z=-3.252, p=.001). The scoring frequency was also found to increase as time progressed, and the last 15 minutes of the game was the interval in which the most goals were scored. The indicators that differed significantly between winning and losing teams were goals scored (Z=-4.578, p=.000), the head (Z=-2.500, p=.012), the right foot (Z=-3.788, p=.000), corners (Z=-2.126, p=.033), open play (Z=-3.744, p=.000), inside the penalty box (Z=-4.174, p=.000), attackers (Z=-2.976, p=.003) and also the midfielders (Z=-3.400, p=.001). Regarding the passing sequences, there was a significant difference between the teams in short passing sequences (Z=-4.141, p=.000), while for long passing sequences there was no significant difference (Z=-1.795, p=.073). The data gathered in the present study can be used by coaches to construct detailed training programs based on their objectives.
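
The Wilcoxon matched-pairs test used above can be run with SciPy as in the minimal sketch below; the per-match goal counts are made-up illustration data, not the EURO 2012 observations.

    from scipy.stats import wilcoxon

    # Minimal sketch of the Wilcoxon matched-pairs test, on made-up per-match
    # goal counts for the winning and losing team of each match.
    winners = [2, 3, 1, 2, 4, 2, 3, 1, 2, 2]
    losers  = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]

    stat, p = wilcoxon(winners, losers)
    print(f"statistic = {stat:.3f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")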

Keywords: Football, goals scored, passing, timing.

4464 Economical and Technical Analysis of Urban Transit System Selection Using TOPSIS Method According to Constructional and Operational Aspects

Authors: Ali Abdi Kordani, Meysam Rooyintan, Sid Mohammad Boroomandrad

Abstract:

Nowadays, one of the most important problems in megacities is public transportation and satisfying citizens with this system in order to decrease traffic congestion and air pollution. Accordingly, to improve transit for passengers and increase travel safety, new transportation systems such as Bus Rapid Transit (BRT), tram and monorail have expanded; each one has different merits and demerits. That is why comparing different systems for a systematic selection of public transportation systems in a big city like Tehran, which has numerous problems in terms of traffic and pollution, is essential. In this paper, we investigate the advantages and feasibility of using monorail, tram and BRT systems, which are widely used in most megacities all over the world. For Tehran, using the SPSS statistical analysis software and the TOPSIS method, these three modes are compared to each other and the results are assessed. Experts experienced in the transportation field answered the prepared matrix questionnaire for the selection of each public transportation mode (tram, monorail, and BRT). The results, according to the experts' judgments, show that monorail has the first priority, tram the second and BRT the third, according to the considered indices, such as execution costs, wasted time, depreciation, pollution, operation costs, travel time, passenger satisfaction, benefit-to-cost ratio and traffic congestion.
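
A compact Python sketch of the TOPSIS ranking step is given below. The decision matrix, criteria, weights and the assumption that all criteria are costs are illustrative only and do not reproduce the experts' questionnaire data.

    import numpy as np

    # Compact TOPSIS sketch. Rows = transit modes, columns = criteria; all criteria
    # here are treated as costs (lower is better). Values and weights are illustrative.
    modes = ["Monorail", "Tram", "BRT"]
    X = np.array([[7.0, 4.0, 5.0],     # e.g. execution cost, travel time, pollution
                  [6.0, 5.0, 4.0],
                  [3.0, 6.0, 6.0]])
    w = np.array([0.5, 0.3, 0.2])
    cost = np.array([True, True, True])

    V = w * X / np.linalg.norm(X, axis=0)              # weighted normalised matrix
    ideal = np.where(cost, V.min(axis=0), V.max(axis=0))
    anti  = np.where(cost, V.max(axis=0), V.min(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)                # higher = better ranking

    for m, c in sorted(zip(modes, closeness), key=lambda p: -p[1]):
        print(f"{m}: closeness {c:.3f}")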

Keywords: Bus Rapid Transit, Costs, Monorail, Pollution, Tram.

4463 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing, using a Bayesian approach, for Weibull life products under the assumption of a cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results show that the use of direct or indirect priors affects the precision of the test.

Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.

4462 Effects of Thermal Radiation and Magnetic Field on Unsteady Stretching Permeable Sheet in Presence of Free Stream Velocity

Authors: Phool Singh, Ashok Jangid, N. S. Tomer, Deepa Sinha

Abstract:

The aim of this paper is to investigate the two-dimensional unsteady flow of a viscous incompressible fluid about a stagnation point on a permeable stretching sheet in the presence of a time-dependent free stream velocity. The fluid is considered under the influence of a transverse magnetic field in the presence of radiation effects. The Rosseland approximation is used to model the radiative heat transfer. Using a time-dependent stream function, the partial differential equations corresponding to the momentum and energy equations are converted into non-linear ordinary differential equations. Numerical solutions of these equations are obtained by using the Runge-Kutta Fehlberg method with the help of a Newton-Raphson shooting technique. In the present work, the effects of the unsteadiness parameter, magnetic field parameter, radiation parameter, stretching parameter and the Prandtl number on the flow and heat transfer characteristics are discussed. The skin-friction coefficient and Nusselt number at the sheet are computed and discussed. The results reported in the paper are in good agreement with work published in the literature by other researchers.
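
The transformed ordinary differential equations of this problem are not reproduced in the abstract, so the sketch below illustrates the same numerical machinery (a Runge-Kutta integrator combined with shooting) on the classical Blasius boundary-layer equation instead, using a bracketing root-finder in place of Newton-Raphson for robustness.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Shooting technique on the Blasius equation f''' + 0.5 f f'' = 0 with
    # f(0)=f'(0)=0 and f'(inf)=1, shown only to illustrate the method,
    # not the paper's transformed equations.
    def rhs(eta, y):                       # y = [f, f', f'']
        return [y[1], y[2], -0.5 * y[0] * y[2]]

    def residual(s, eta_max=10.0):         # mismatch in the far-field condition
        sol = solve_ivp(rhs, [0, eta_max], [0.0, 0.0, s], rtol=1e-8)
        return sol.y[1, -1] - 1.0

    s_star = brentq(residual, 0.1, 1.0)    # find f''(0) so that f'(inf) = 1
    print(f"shooting parameter f''(0) = {s_star:.4f}")   # known value ~ 0.332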

Keywords: Magneto hydrodynamics, stretching sheet, thermal radiation, unsteady flow.

4461 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location an hour into the future. This project was supported by the Southern Company to determine at which specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Owing to their ability to act as universal function approximators, artificial neural networks were used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were explored, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model's accuracy.
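
A minimal regression sketch in scikit-learn is shown below. Note the assumptions: scikit-learn does not provide resilient propagation, so a generic Adam-trained MLP is used as a stand-in, and the inputs are synthetic weather-like features rather than the Griffin station measurements.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # MLP sketch for hour-ahead regression. Rprop is not available in scikit-learn,
    # so the default Adam solver stands in; the data below are synthetic.
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(500, 4))                     # e.g. temp, humidity, cloud, hour
    y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + 0.1 * np.sin(2 * np.pi * X[:, 3])

    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    model.fit(X[:400], y[:400])
    pred = model.predict(X[400:])
    print("test MSE:", np.mean((pred - y[400:]) ** 2))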

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.

4460 Optimization of Ethanol Fermentation from Pineapple Peel Extract Using Response Surface Methodology (RSM)

Authors: Nadya Hajar, Zainal, S., Atikah, O., Tengku Elida, T. Z. M.

Abstract:

Ethanol has been known for a long time, being perhaps the oldest product obtained through traditional biotechnological fermentation. Agricultural waste as a fermentation substrate is widely discussed as an alternative to edible feedstocks and as a way to utilize organic material. Pineapple peel, a highly promising substrate, is a by-product of the pineapple processing industry. Bio-ethanol production from pineapple (Ananas comosus) peel extract was carried out by controlled fermentation without any pretreatment. Saccharomyces ellipsoideus was used as the inoculum in this fermentation process, as it is naturally found on pineapple skin. In this study, the capability of Response Surface Methodology (RSM) for optimization of ethanol production from pineapple peel extract using Saccharomyces ellipsoideus in a batch fermentation process was investigated. The effects of five test variables within defined ranges (inoculum concentration 6-14% (v/v), pH 4.0-6.0, sugar concentration 14-22°Brix, temperature 24-32°C and incubation time 30-54 hrs) on ethanol production were evaluated. Data obtained from the experiments were analyzed with the RSM module of the MINITAB software (Version 15), whereby an optimum ethanol concentration of 8.637% (v/v) was determined at the optimum conditions of 14% (v/v) inoculum concentration, pH 6, 22°Brix, 26°C and 30 hours of incubation. A regression model significant at the 5% level, with a correlation value of 99.96%, was also obtained.
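
The optimization itself was performed in MINITAB, but the core of RSM, fitting a second-order polynomial response surface, can be sketched in Python as below for two coded factors. The data are synthetic and the factor names are only suggestive of the study's variables.

    import numpy as np

    # Fit a second-order response surface y = b0 + b1*x1 + b2*x2 + b11*x1^2
    # + b22*x2^2 + b12*x1*x2 by least squares for two coded factors.
    # The data are synthetic; the study used MINITAB with five factors.
    rng = np.random.default_rng(2)
    x1 = rng.uniform(-1, 1, 30)            # e.g. coded inoculum concentration
    x2 = rng.uniform(-1, 1, 30)            # e.g. coded sugar concentration
    y = (8 + 0.5 * x1 + 0.8 * x2 - 0.6 * x1**2 - 0.4 * x2**2 + 0.2 * x1 * x2
         + rng.normal(scale=0.05, size=30))            # ethanol % (v/v), synthetic

    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("fitted coefficients b0..b12:", np.round(coef, 3))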

Keywords: Bio-ethanol, pineapple peel extract, Response Surface Methodology (RSM), Saccharomyces ellipsoideus.

4459 Computational Modeling in Strategic Marketing

Authors: Petr Cernohorsky, Jan Voracek

Abstract:

Well-developed strategic marketing planning is an essential prerequisite for establishing the right and unique competitive advantage. A typical market, however, is a heterogeneous and decentralized structure with natural involvement of individual or group subjectivity and irrationality. These features cannot be fully expressed with one-shot rigorous formal models based on, e.g., mathematics, statistics or empirical formulas. We present an innovative solution, extending the domain of agent-based computational economics towards the concept of hybrid modeling in service provider and consumer markets such as telecommunications. The behavior of the market is described by two classes of agents, consumer and service provider agents, whose internal dynamics are fundamentally different. Customers are rather free multi-state structures, adjusting behavior and preferences quickly in accordance with time and a changing environment. Producers, on the contrary, are traditionally structured companies with comparable internal processes and specific managerial policies. Their business momentum is higher and their immediate reaction possibilities limited. This limitation underlines the importance of proper strategic planning as the main process advising managers in time whether to continue with more or less the same business or to consider the need for future structural changes that would ensure retention of existing customers or acquisition of new ones.

Keywords: Agent-based computational economics, hybrid modeling, strategic marketing, system dynamics.

4458 Research Regarding Resistance Characteristics of Biscuits Assortment Using Cone Penetrometer

Authors: G.–A. Constantin, G. Voicu, E.–M. Stefan, P. Tudor, G. Paraschiv, M.–G. Munteanu

Abstract:

In the handling and transport of food products, the products may be subjected to mechanical stresses that can lead to their deterioration by deformation, breaking, or crushing. This is the case for biscuits, regardless of their type (gluten-free or sugary), the added ingredients or the flour from which they are made. However, gluten-free biscuits have a higher mechanical resistance to breakage or crushing compared to easily shattered sugar biscuits (especially those for children). The paper presents the results of the experimental evaluation of the texture of four varieties of commercial biscuits, using a penetrometer equipped with a needle cone at five different additional weights on the cone rod. The biscuit varieties tested in the laboratory were Petit Beurre, Picnic, and Maia (all three manufactured by RoStar, Romania) and Sultani diet biscuits, manufactured by Eti Burcak Sultani (Turkey, in packs of 138 g). For the four varieties of biscuits and the five additional weights (50, 77, 100, 150 and 177 g), the experimental data obtained were subjected to regression analysis in MS Office Excel, using Velon's relationship (h = a∙ln(t) + b). The regression curves were analysed comparatively in order to identify possible differences and to highlight the variation of the penetration depth h in relation to the time t. Based on the penetration depth between two time intervals (every 5 seconds), the curves of variation of the penetration speed in relation to time were then drawn. It was found that Velon's law fits the experimental data for all biscuit varieties and all five additional weights. The correlation coefficient R2 was above 0.850 in most of the analysed cases. The penetration depth values generally ranged between 45-55 p.u. (penetrometric units) at an additional mass of 50 g and between 155-168 p.u. at an additional mass of 177 g for Petit Beurre biscuits. For Sultani diet biscuits, the penetration depth values were within 32-35 p.u. at an additional weight of 50 g and between 80-114 p.u. at an additional weight of 177 g. The data presented in the paper can be used both by operators on the manufacturing technology flow and by traders of these food products, in order to establish the most efficient parameters of the working regimes (when packaging and handling).
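
Fitting Velon's relationship h = a·ln(t) + b reduces to a linear regression in ln(t), as in the minimal Python sketch below. The depth readings are illustrative values, not the measured data.

    import numpy as np

    # Fit Velon's relationship h = a*ln(t) + b to penetration readings taken every
    # 5 s. The depth values below are illustrative, not the measured data.
    t = np.arange(5, 65, 5)                                            # time [s]
    h = np.array([30, 36, 40, 43, 45, 47, 48, 50, 51, 52, 53, 53.5])   # depth [p.u.]

    a, b = np.polyfit(np.log(t), h, 1)               # linear fit in ln(t)
    h_fit = a * np.log(t) + b
    ss_res = np.sum((h - h_fit) ** 2)
    ss_tot = np.sum((h - h.mean()) ** 2)
    print(f"a = {a:.2f}, b = {b:.2f}, R^2 = {1 - ss_res / ss_tot:.3f}")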

Keywords: Biscuits resistance/texture, penetration depth, penetration velocity, sharp pin penetrometer.

4457 Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines

Authors: Mona Soliman Habib

Abstract:

This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is the biomedical publications domain, selected for its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun phrase tagging, thereby allowing for its applicability across languages. No domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which motivates investigating ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable and practical machine learning solution for real-world problems with large datasets. An initial prototype achieves a great improvement in training time at the expense of memory requirements.
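
A minimal scikit-learn sketch of a multi-class linear SVM over sparse, high-dimensional features is given below as a stand-in for the token-level NER setup; the toy texts and entity labels are invented.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC

    # Multi-class linear SVM over sparse features, standing in for the
    # token-level biomedical NER setup. The toy "contexts" and labels are invented.
    texts = ["p53 protein binds dna", "aspirin tablet dose", "patient fever onset",
             "brca1 gene mutation", "ibuprofen oral dose", "severe cough symptom"]
    labels = ["PROTEIN", "DRUG", "SYMPTOM", "PROTEIN", "DRUG", "SYMPTOM"]

    X = CountVectorizer(ngram_range=(1, 2)).fit_transform(texts)   # high-dim sparse
    clf = LinearSVC().fit(X, labels)       # one-vs-rest linear SVM, scales well
    print(clf.predict(X[:2]))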

Keywords: Named entity recognition, support vector machines, language independence, bioinformatics.

4456 Purity Monitor Studies in Medium Liquid Argon TPC

Authors: I. Badhrees

Abstract:

This paper describes some of the results found in the course of a study in the field of particle physics. The study consists of two parts: one concerns the measurement of the cross section of the decay of the Z particle into two electrons, and the other concerns the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber.

The first part of the paper concerns the results of the analysis of a data sample containing 8120 ee candidates, used to reconstruct the mass of the Z particle for each event, where each event has an ee pair with pT(e) > 20 GeV and η(e) < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale. The distribution of the reconstructed Z mass in the data was compared to the Monte Carlo templates, and the total cross section was calculated to be 1432 pb.
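
For reference, with the electron masses neglected, the invariant mass of each ee candidate can be reconstructed from the electron kinematics with the standard expression below (a textbook formula shown in LaTeX notation to illustrate the reconstruction step, not a detail taken from the paper):

    M_{ee} \simeq \sqrt{\, 2\, p_{\mathrm{T}}^{(1)} p_{\mathrm{T}}^{(2)} \left[ \cosh(\Delta\eta) - \cos(\Delta\phi) \right] \,}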

The second part concerns the Liquid Argon Time Projection Chamber (LAr TPC) and the results of the interaction of a UV laser (Nd:YAG with λ = 266 nm) with LAr, through the study of the multi-photon ionization process, as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process of LAr, σe = (1.24 ± 0.10stat ± 0.30sys) × 10^-56 cm^4.

Keywords: ATLAS, CERN, KACST, LArTPC, Particle Physics.

4455 Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism

Authors: Jehad Al Dallal

Abstract:

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced that compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably reduces the slicing time and effort required to measure module parallelism and functional cohesion.
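
The single-traversal algorithm itself is not reproduced in the abstract. As background, a backward slice at a program point is simply reverse reachability over the dependence edges of the PDG, as in the minimal Python sketch below; the toy graph is invented, and the sketch computes one slice at a time rather than all slices in one traversal.

    from collections import deque

    # Backward-slice sketch: the backward slice at a node is the set of nodes that
    # can reach it along dependence edges, i.e. reverse reachability over the PDG.
    pdg = {            # node -> nodes it depends on (reversed dependence edges)
        "s4": ["s2", "s3"],
        "s3": ["s1"],
        "s2": ["s1"],
        "s1": [],
    }

    def backward_slice(node):
        seen, queue = {node}, deque([node])
        while queue:
            for dep in pdg[queue.popleft()]:
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return seen

    print(sorted(backward_slice("s4")))    # -> ['s1', 's2', 's3', 's4']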

Keywords: Backward slicing, cohesion measure, forward slicing, parallelism measure, program dependence graph, program slicing, static slicing.

4454 An FPGA Implementation of Intelligent Visual Based Fall Detection

Authors: Peng Shen Ong, Yoong Choon Chang, Chee Pun Ooi, Ettikan K. Karuppiah, Shahirina Mohd Tahir

Abstract:

Falling has been one of the major concerns and threats to the independence of the elderly in their daily lives. With the worldwide significant growth of the aging population, it is essential to have a promising fall detection solution that operates at high accuracy in real time and supports large-scale implementation using multiple cameras. The Field Programmable Gate Array (FPGA) is a highly promising tool to be used as a hardware accelerator in many emerging embedded vision-based systems. Thus, the main objective of this paper is to present an FPGA-based solution for visual-based fall detection that meets stringent real-time requirements with high accuracy. A hardware architecture for visual-based fall detection that utilizes pixel locality to reduce memory accesses is proposed. By exploiting the parallel and pipelined architecture of the FPGA, our hardware implementation of visual-based fall detection is able to achieve a performance of 60 fps for a series of video analytic functions at VGA resolution (640x480). The results of this work show that FPGAs have great potential and impact in enabling large-scale vision systems in the future healthcare industry, owing to their flexibility and scalability.

Keywords: Fall detection, FPGA, hardware implementation.

4453 Nonlinear Transformation of Laser Generated Ultrasonic Pulses in Geomaterials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus "GEOSCAN-02M". Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ. This energy can be reduced to 20 mJ by light filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression and a rarefaction phase. The amplitude of the rarefaction phase is five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cubic specimens of Karelian gabbro with an edge length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction phase increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structural defects. Using computer processing of these signals, images of the cross-sections of the specimens with cracks are obtained. When the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a decrease by 2.5 times of the amplitude of the rarefaction phase and an increase of its duration by 2.1 times. When the reference pulse amplitude is increased from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.

Keywords: Cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock.

4452 An Experimental Study on the Optimum Installation of Fire Detector for Early Stage Fire Detecting in Rack-Type Warehouses

Authors: Ki Ok Choi, Sung Ho Hong, Dong Suck Kim, Don Mook Choi

Abstract:

Rack type warehouses differ from general buildings in the kinds, amount, and arrangement of stored goods, so their fire risk is different from that of other buildings. The fire pattern of rack type warehouses differs in the combustion characteristics and storage conditions of the stored goods. The initial burning rate depends on the surface condition of the materials, but the progression of the fire over time is closely related to the kinds of stored materials and the storage conditions. The stored goods of a warehouse consist of diverse combustibles, combustible liquids, and so on. Fire detection may be delayed because there are fewer occupants than in office and commercial buildings. If the fire detectors installed in rack type warehouses are unsuitable, a warehouse fire may grow into a major fire because detection is delayed. In this paper, we studied which kinds of fire detectors are best suited to early detection of rack type warehouse fires through real-scale fire tests. The fire detectors used in the tests are rate-of-rise type, fixed type, photoelectric type, and aspirating type detectors. We consider the optimum fire detection method for rack type warehouses suggested by the response characteristics and a comparative analysis of the fire detectors.

Keywords: Fire detector, rack, response characteristic, warehouse.

4451 A Neural Network Control for Voltage Balancing in Three-Phase Electric Power System

Authors: Dana M. Ragab, Jasim A. Ghaeb

Abstract:

The three-phase power system suffers from various challenging problems, e.g. voltage unbalance conditions at the load side. Voltage unbalance usually degrades the power quality of the electric power system. Several techniques can be considered for load balancing, including load reconfiguration, the static synchronous compensator and the static reactive power compensator. In this work, an efficient neural network is designed to control the unbalanced condition in the Aqaba-Qatrana-South Amman (AQSA) electric power system. It is designed to greatly improve the response time of the reactive compensator for voltage balancing. The neural network is developed to determine the appropriate set of firing angles required for the thyristor-controlled reactor to balance the three load voltages accurately and quickly. The parameters of the AQSA power system are considered in the laboratory model, and several test cases have been conducted to test and validate the capabilities of the proposed technique. The results show a high performance of the proposed Neural Network Control (NNC) technique in correcting voltage unbalance conditions at a three-phase load, in terms of accuracy and response time.
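
The controller itself is not reproduced here, but the quantity it aims to drive toward zero, the voltage unbalance factor (negative- over positive-sequence voltage magnitude), can be computed as in the minimal NumPy sketch below; the phase voltages are illustrative, not AQSA measurements.

    import numpy as np

    # Voltage unbalance factor from symmetrical components: VUF = |V2| / |V1|.
    # The three phase phasors below are illustrative values only.
    a = np.exp(2j * np.pi / 3)
    Va, Vb, Vc = 230.0, 225.0 * a**2, 238.0 * a          # slightly unbalanced set

    V1 = (Va + a * Vb + a**2 * Vc) / 3                   # positive-sequence voltage
    V2 = (Va + a**2 * Vb + a * Vc) / 3                   # negative-sequence voltage
    print(f"VUF = {abs(V2) / abs(V1) * 100:.2f} %")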

Keywords: Three-phase power system, reactive power control, voltage unbalance factor, neural network, power quality.

4450 Data Recording for Remote Monitoring of Autonomous Vehicles

Authors: Rong-Terng Juang

Abstract:

Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars might not arrive in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks for remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space. Upon arrival, for controller area network (CAN) traffic, the new data are mapped onto a two-dimensional time-data space associated with the specific CAN identifier. Secondly, the data are sampled based on differential sampling. Finally, the whole set of data is encoded using existing algorithms such as Huffman, arithmetic and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
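
The paper's exact pipeline is not reproduced here, but the following minimal Python sketch shows where the reduction comes from: delta-encoding a slowly varying CAN-like signal before entropy coding makes it far more compressible. zlib is used as a stand-in for the Huffman/arithmetic/codebook coders named above, and the signal is synthetic.

    import struct, zlib

    # Delta-encode a slowly varying CAN-like signal, then apply a general-purpose
    # entropy coder (zlib, standing in for Huffman/arithmetic/codebook coding).
    signal = [1000 + (i // 10) for i in range(2000)]          # slowly changing value

    raw = struct.pack(f"<{len(signal)}i", *signal)
    deltas = [signal[0]] + [b - a for a, b in zip(signal, signal[1:])]
    delta_bytes = struct.pack(f"<{len(deltas)}i", *deltas)

    print("raw compressed:  ", len(zlib.compress(raw)))
    print("delta compressed:", len(zlib.compress(delta_bytes)))  # noticeably smaller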

Keywords: Autonomous vehicle, data recording, remote monitoring, controller area network.
