Search results for: Time dependent
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7045

4435 Fingerprint on Ballistic after Shooting

Authors: Narong Kulnides

Abstract:

This research examined latent fingerprints on ballistic evidence after shooting. The two objectives were: (1) to study how long latent fingerprints persist on .38, .45, 9 mm and .223 cartridge cases after shooting, and (2) to compare the effectiveness of latent fingerprint detection using Black Powder, Super Glue, Perma Blue and Gun Bluing. The appearance of latent fingerprints was studied on .38, .45, 9 mm and .223 cartridge cases before and after shooting with Black Powder, Super Glue, Perma Blue and Gun Bluing. The detection times were 3 minutes and 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, 78 and 84 hours, respectively. From the study, it can be concluded that:

  1. Before shooting, latent fingerprints on .38, .45, 9 mm and .223 cartridge cases could be detected with Black Powder, Super Glue, Perma Blue and Gun Bluing at all detection times.
  2. After shooting, latent fingerprints on .38, .45, 9 mm and .223 cartridge cases could not be detected with Black Powder or Super Glue. Latent fingerprints on .38, .45 and 9 mm cartridge cases were detected with Perma Blue and Gun Bluing 100% of the time, while on .223 cartridge cases they were detected with Perma Blue and Gun Bluing 40% and 46.67% of the time, respectively.

Keywords: Ballistic, Fingerprint, Shooting.

4434 Technology for Enhancing the Learning and Teaching Experience in Higher Education

Authors: Sara M. Ismael, Ali H. Al-Badi

Abstract:

The rapid development and growth of technology has changed how educators and learners obtain information. Technology has created a new world of collaboration and communication among people. Incorporating new technology into the teaching process can enhance learning outcomes. Billions of individuals across the world are now connected, cooperating and contributing their knowledge and intelligence. Time is no longer wasted waiting until the teacher is ready to share information, as learners can go online and get it immediately.

The objectives of this paper are to understand the reasons why changes in teaching and learning methods are necessary, to find ways of improving them, and to investigate the challenges that present themselves in the adoption of new ICT tools in higher education institutes.

To achieve these objectives, two primary research methods were used: questionnaires distributed among students at higher educational institutes, and multiple interviews with faculty members (teachers) from different colleges and universities, conducted to find out why teaching and learning methodology should change.

The findings show that both learners and educators agree that educational technology plays a significant role in enhancing instructors’ teaching style and students’ overall learning experience; however, time constraints, privacy issues, and not being provided with enough up-to-date technology do create some challenges.

Keywords: E-books, educational technology, educators, e-learning, learners, social media, Web 2.0, LMS.

4433 Monte Carlo and Biophysics Analysis in a Criminal Trial

Authors: Luca Indovina, Carmela Coppola, Carlo Altucci, Riccardo Barberi, Rocco Romano

Abstract:

In this paper, a real court case, held in Italy at the Court of Nola, is considered, in which a correct physical description, conducted with both Monte Carlo and biophysical analyses, would have been sufficient to arrive at conclusions confirmed by documentary evidence. This is an example of how forensic physics can be useful in confirming documentary evidence in order to reach hardly questionable conclusions. This was a libel trial in which the defendant, Mr. DS (Defendant for Slander), had falsely accused one of his neighbors, Mr. OP (Offended Person), of having caused him some damages. The damages would have been caused by a piece of external plaster that would have detached from the neighbor's property and hit Mr. DS while he was in his garden, more than a meter away from the facade of the building from which the plaster would have detached. In the trial, Mr. DS claimed to have suffered a scratch on his forehead, but he never showed the plaster that had hit him, nor was he able to tell from where the plaster would have arrived. Furthermore, Mr. DS presented a medical certificate with a diagnosis of contusion of the cerebral cortex. On the contrary, the images of Mr. OP's security cameras do not show any movement in the garden of Mr. DS in a long interval of time (about 2 hours) around the time of the alleged accident, nor do they show any people entering or leaving the house of Mr. DS in the same interval. Biophysical analysis shows that the diagnosis of the medical certificate and the wound declared by the defendant, already in conflict with each other, are both incompatible with the fall of external plaster pieces too small to be found. The wind was at level 1 of the Beaufort scale, that is, unable even to raise dust (which requires level 4). Therefore, the motion of the plaster pieces can be described as projectile motion, whereas collisions with the building cornice can be treated using Newton's law of coefficients of restitution. Numerous Monte Carlo simulations show that the pieces of plaster could not even have reached the garden of Mr. DS, let alone a distance of more than 1.30 meters. The results agree with the documentary evidence (the images of Mr. OP's security cameras) that Mr. DS could not have been hit by plaster pieces coming from Mr. OP's property.
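
The kinematic core of that argument can be sketched in a few lines. The Monte Carlo below simulates a plaster fragment falling from a facade, bouncing on a cornice with a restitution coefficient, and landing; every numerical parameter (heights, restitution coefficient, initial-speed spread) is an illustrative assumption, not a value from the case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): facade height, cornice
# height, restitution coefficient and initial-speed spread are assumptions.
g = 9.81          # gravitational acceleration, m/s^2
h_facade = 6.0    # detachment height above ground, m
h_cornice = 2.5   # cornice height, m
e_rest = 0.4      # coefficient of restitution for plaster on the cornice
n_trials = 100_000

# Plaster detaches with a small, random horizontal speed (it falls off,
# it is not thrown), drawn uniformly from 0 to 0.5 m/s.
vx0 = rng.uniform(0.0, 0.5, n_trials)

# Fall from facade to cornice level: t = sqrt(2*dh/g).
t1 = np.sqrt(2 * (h_facade - h_cornice) / g)
x1 = vx0 * t1                      # horizontal travel before the bounce
vy1 = g * t1                       # downward speed at the cornice

# Inelastic bounce on the cornice: vertical speed reduced by e_rest.
vy2 = e_rest * vy1                 # upward speed just after the bounce

# Projectile motion from cornice height down to the ground.
t2 = (vy2 + np.sqrt(vy2**2 + 2 * g * h_cornice)) / g
x_land = x1 + vx0 * t2             # total horizontal distance from the facade

print(f"mean landing distance:  {x_land.mean():.2f} m")
print(f"99th percentile:        {np.percentile(x_land, 99):.2f} m")
print(f"fraction beyond 1.30 m: {(x_land > 1.30).mean():.4f}")
```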

Keywords: Biophysical analysis, Monte Carlo simulations, Newton’s law of restitution, projectile motion.

4432 Semantic Modeling of Management Information: Enabling Automatic Reasoning on DMTF-CIM

Authors: Fernando Alonso, Rafael Fernandez, Sonia Frutos, Javier Soriano

Abstract:

CIM is the standard formalism for modeling management information developed by the Distributed Management Task Force (DMTF) in the context of its WBEM proposal, designed to provide a conceptual view of the managed environment. In this paper, we propose the inclusion of formal knowledge representation techniques, based on Description Logics (DLs) and the Web Ontology Language (OWL), in CIM-based conceptual modeling, and then we examine the benefits of such a decision. The proposal is specified as a CIM metamodel level mapping to a highly expressive subset of DLs capable of capturing all the semantics of the models. The paper shows how the proposed mapping can be used for automatic reasoning about the management information models, as a design aid, by means of new-generation CASE tools, thanks to the use of state-of-the-art automatic reasoning systems that support the proposed logic and use algorithms that are sound and complete with respect to the semantics. Such a CASE tool framework has been developed by the authors and its architecture is also introduced. The proposed formalization is not only useful at design time, but also at run time through the use of rational autonomous agents, in response to a need recently recognized by the DMTF.

Keywords: CIM, Knowledge-based Information Models, Ontology Languages, OWL, Description Logics, Integrated Network Management, Intelligent Agents, Automatic Reasoning Techniques.

4431 Development and Characterization of Wheat Bread with Lupin Flour

Authors: Paula M. R. Correia, Marta Gonzaga, Luis M. Batista, Luísa Beirão-Costa, Raquel F. P. Guiné

Abstract:

The purpose of the present work was to develop an innovative food product with good textural and sensorial characteristics. The product, a new type of bread, was prepared with wheat (90%) and lupin (10%) flours, without the addition of any preservatives. Several experiments were also carried out to find the most appropriate proportion of lupin flour. The optimized product was characterized in terms of its rheological, physical-chemical and sensorial properties. The water absorption of wheat flour with 10% lupin was higher than that of the plain wheat flours, and Wheat Ceres flour presented the lowest value, with a lower dough development time and a high stability time. The breads presented low moisture but considerable water activity. The density of the bread decreased with the introduction of lupin flour. The breads were quite white, and during storage the colour parameters decreased. The lupin flour clearly increased the number of alveoli, but the total area increased significantly only for the Wheat Cerealis bread. The addition of lupin flour increased the hardness and chewiness of the breads, but the elasticity did not vary significantly. Lupin bread was sensorially similar to wheat bread produced with WCerealis flour, the main differences being crust rugosity, colour and alveoli characteristics.

Keywords: Lupin flour, physical-chemical properties, sensorial analysis, wheat flour.

4430 Application of a Theoretical Framework as a Context for a Travel Behavior Change Policy Intervention

Authors: F. Moghtaderi, M. Burke, J. Troelsen

Abstract:

There has been a significant decline in active travel and a massive increase in car-dependent travel in many countries during the past two decades. Risks to people's physical and mental health, ranging from overweight and obesity to increased air pollution, are correlated with this increased use of motorized travel. In response to these rising concerns, health professionals, traffic planners, local authorities and others have introduced a variety of initiatives to counterbalance the dominance of cars for daily journeys. However, travel behavior change interventions, which aim to reduce car use, are very complex and challenging in their interactions with human behavior. To change travel behavior, at least two aspects have to be taken into consideration: first, how to alter attitudes and perceptions toward sustainable and healthy modes of travel, in competition with the experience of private car use; and second, how to make these behavior change processes irreversible and sustainable. There are no comprehensive models available to guide policy interventions across both these dimensions. A comprehensive theoretical framework is required to facilitate and guide the processes of data collection and analysis and to achieve the best possible guidelines for policy makers. Addressing these gaps in the travel behavior change research literature, this paper identifies and suggests a multidimensional framework to facilitate the planning of travel behavior change interventions. A structured mixed-method model is suggested to improve the analytic power of the results, given the complexity of human behavior. The Theory of Planned Behavior (TPB) was operationalized to recognize people's attitudes towards a specific travel mode, while the Transtheoretical Model of Behavior Change (TTM) was used to capture decision-making processes. The combination of these two theories (TTM and TPB) results in a synthesis with appropriate concepts for identifying and designing travel behavior change interventions.

Keywords: Behavior change theories, Theoretical framework, Travel behavior change interventions.

4429 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) establishing the interior camera orientation parameters; (3) determining the relative orientation parameters for each video frame with respect to each other; (4) finding the absolute orientation parameters using self-calibration bundle adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a mosaic image of the test area, which is then merged with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to evaluate our method. Video and telemetry data were collected for about fifteen minutes and processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from ortho-images are more accurate than those from the original perspective images when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, compared with the 6 control points measured by a GPS receiver, is between 3 and 5 metres.
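
Step (1) is the only step compact enough to sketch here; with OpenCV it reduces to reading frames in a loop. The file name and output layout below are placeholders, not the system's actual pipeline.

```python
# A minimal sketch of step (1): decompiling a recorded video stream into
# individual frames with OpenCV. "uav_stream.mp4" is a hypothetical file.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("uav_stream.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()          # one BGR frame per iteration
    if not ok:
        break                       # end of stream
    cv2.imwrite(f"frames/frame_{frame_idx:05d}.png", frame)
    frame_idx += 1
cap.release()
print(f"extracted {frame_idx} frames")
```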

Keywords: Geo-referencing, ortho-rectification, video frame, self-calibration, UAV, target tracking.

4428 Mixed Model Assembly Line Sequencing In Make to Order System with Available to Promise Consideration

Authors: N. Manavizadeh, A. Dehghani, M. Rabbani

Abstract:

Mixed model assembly lines (MMAL) are production lines on which a variety of product models with similar product characteristics are assembled. The effective design of these lines requires that a schedule for assembling the different products be determined. In this paper, we fit the sequencing problem to the main characteristics of a make-to-order (MTO) environment. The problem solved is a multiple-objective sequencing problem on mixed model assembly lines, addressed with the Weighted Sum Method (WSM) in the GAMS software for small problems and with an effective genetic algorithm (GA) for large-scale problems, given the NP-hardness of the problem and the prohibitive time needed to find the optimum solution for large instances. Three practically important objectives are minimized: total utility work, production rate variation, and earliness and tardiness cost. The latter considers the priority of each customer and different due dates, which is a realistic situation on mixed model assembly lines, and this is the first time different attributes are considered to prioritize customers, which helps the company reduce the cost of earliness and tardiness. This mechanism is a way to apply advance available-to-promise (ATP) in mixed model assembly line sequencing, which is the main contribution of this paper.
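
As a sketch of the scalarization used for the small instances, the snippet below collapses the three objectives into one weighted score; the weights, scales and objective values are illustrative assumptions, not the paper's data.

```python
# A minimal sketch of the Weighted Sum Method: normalize each minimization
# objective by a scale and combine with weights into a single score.
def weighted_sum(utility_work, rate_variation, earliness_tardiness,
                 weights=(0.4, 0.3, 0.3), scales=(10.0, 1.0, 100.0)):
    """Scalarize three minimization objectives into one score (lower is better)."""
    objs = (utility_work, rate_variation, earliness_tardiness)
    return sum(w * o / s for w, o, s in zip(weights, objs, scales))

# Comparing two candidate sequences by their scalarized score:
seq_a = weighted_sum(utility_work=6.0, rate_variation=0.8, earliness_tardiness=120.0)
seq_b = weighted_sum(utility_work=4.5, rate_variation=1.1, earliness_tardiness=150.0)
print(f"sequence A score: {seq_a:.3f}, sequence B score: {seq_b:.3f}")
```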

Keywords: Available to promise, Earliness & Tardiness, GA, Mixed-Model assembly line Sequencing.

4427 Socio-Technical Systems: Transforming Theory into Practice

Authors: L. Ngowi, N. H. Mvungi

Abstract:

This paper critically examines the evolution of socio-technical systems theory, its practices, and its challenges in system design and development. It examines concepts put forward by researchers focusing on the application of the theory in software engineering. Various methods have been developed that use socio-technical concepts based on systems engineering, without remarkable success. The main constraint is the large amount of data and the inefficient techniques used when applying the concepts in systems engineering to develop time-bound systems within a limited/controlled budget. This paper critically examines each of the methods, highlights bottlenecks and suggests the way forward. Since socio-technical systems theory only explains what to do, not how to do it, engineers are not using the concept to save time and costs and to reduce the risks associated with new frameworks. Hence, a new framework, which can be considered a practical approach, is proposed that borrows concepts from the soft systems method, agile systems development and object-oriented analysis and design to bridge the gap between theory and practice. The approach will enable the development of systems using socio-technical systems theory, encouraging system engineers and software developers to use it in building worthwhile information systems that avoid fragilities and hostilities in the work environment.

Keywords: Socio-technical systems, human centered design, software engineering, cognitive engineering, soft systems, systems engineering.

4426 Differences in Goal Scoring and Passing Sequences between Winning and Losing Team in UEFA-EURO Championship 2012

Authors: Muhamad S., Norasrudin S, Rahmat A.

Abstract:

The objective of the current study is to investigate the differences between winning and losing teams in terms of goal scoring and passing sequences. A total of 31 matches from UEFA EURO 2012 were analyzed; 5 matches that ended in a draw were excluded from the analysis. Two groups of variables were used in the study: (i) goal scoring variables and (ii) passing sequence variables. Data were analyzed using the Wilcoxon matched-pairs signed-rank test with the significance level set at p < 0.05. The study found that goals scored were significantly higher for the winning team in both the 1st half (Z = -3.416, p = .001) and the 2nd half (Z = -3.252, p = .001). Scoring frequency was also found to increase as time progressed, and the last 15 minutes of the game was the interval in which the most goals were scored. The indicators that differed significantly between winning and losing teams were goals scored (Z = -4.578, p = .000), headers (Z = -2.500, p = .012), the right foot (Z = -3.788, p = .000), corners (Z = -2.126, p = .033), open play (Z = -3.744, p = .000), inside the penalty box (Z = -4.174, p = .000), attackers (Z = -2.976, p = .003) and midfielders (Z = -3.400, p = .001). Regarding passing sequences, there was a significant difference between the two teams in short passing sequences (Z = -4.141, p = .000), while for long passing sequences there was no significant difference (Z = -1.795, p = .073). The data gathered in the present study can be used by coaches to construct detailed training programs based on their objectives.
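
The statistical test used throughout is available directly in SciPy. The sketch below runs a Wilcoxon matched-pairs test on made-up per-match goal counts, standing in for the study's match data.

```python
# A minimal sketch of the matched-pairs comparison described above, with
# invented numbers: goals scored per match by the winning and losing sides.
from scipy.stats import wilcoxon

winners = [2, 3, 1, 2, 4, 2, 1, 3, 2, 2]   # hypothetical per-match counts
losers  = [0, 1, 0, 1, 1, 0, 0, 1, 1, 0]

stat, p = wilcoxon(winners, losers)   # paired, non-parametric test
print(f"W = {stat}, p = {p:.4f}")     # significant if p < 0.05
```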

Keywords: Football, goals scored, passing, timing.

4425 Economical and Technical Analysis of Urban Transit System Selection Using TOPSIS Method According to Constructional and Operational Aspects

Authors: Ali Abdi Kordani, Meysam Rooyintan, Sid Mohammad Boroomandrad

Abstract:

Nowadays, one of the most important problems in megacities is public transportation and citizens' satisfaction with it, as a means of decreasing traffic congestion and air pollution. Accordingly, to improve transit ridership and increase travel safety, new transportation systems such as Bus Rapid Transit (BRT), tram, and monorail have expanded, each with different merits and demerits. That is why comparing different systems for a systematic selection of public transportation systems is essential in a big city like Tehran, which has numerous traffic and pollution problems. In this paper, the advantages and feasibility of using monorail, tram and BRT systems, which are widely used in most megacities all over the world, are investigated. For Tehran, these three modes are compared to each other using the SPSS statistical analysis software and the TOPSIS method, and the results are assessed. Experts experienced in the transportation field answered the prepared matrix questionnaire for each public transportation mode (tram, monorail, and BRT). The results of the experts' judgments show that monorail has the first priority, tram the second, and BRT the third according to the considered indices, such as execution costs, wasted time, depreciation, pollution, operation costs, travel time, passenger satisfaction, benefit-to-cost ratio and traffic congestion.
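
A minimal TOPSIS computation is sketched below in NumPy for the three modes; the decision matrix, weights and cost/benefit labels are invented for demonstration, not the experts' questionnaire data.

```python
# A minimal TOPSIS sketch: 3 alternatives (tram, monorail, BRT) scored
# against 4 illustrative criteria. All numbers are assumptions.
import numpy as np

X = np.array([[7.0, 6.0, 8.0, 5.0],    # tram
              [8.0, 7.0, 6.0, 6.0],    # monorail
              [6.0, 8.0, 7.0, 8.0]])   # BRT
w = np.array([0.3, 0.2, 0.25, 0.25])   # criteria weights (sum to 1)
benefit = np.array([False, True, True, False])  # False = cost criterion

V = w * X / np.linalg.norm(X, axis=0)   # weighted, vector-normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))  # best per criterion
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))  # worst per criterion

d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)         # higher = better alternative

for name, c in zip(["tram", "monorail", "BRT"], closeness):
    print(f"{name:9s} closeness = {c:.3f}")
```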

Keywords: Bus Rapid Transit, Costs, Monorail, Pollution, Tram.

4424 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing, using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile of the time to failure. The model variables are the stress-changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. The results also show that the choice of direct or indirect priors affects the precision of the test.
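
For reference, under a Weibull life distribution with scale η and shape β, the Pth percentile of the time to failure targeted by the criterion has the standard closed form below (textbook identities, not formulas quoted from the paper):

```latex
% Weibull CDF and its Pth percentile (standard identities)
F(t) = 1 - \exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}\right],
\qquad
t_P = \eta \left[-\ln(1 - P)\right]^{1/\beta}.
```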

Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.

4423 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location one hour into the future. The project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels can be relied upon to produce energy in sufficient quantities. Because of their ability as universal function approximators, artificial neural networks were used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were evaluated, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model's accuracy.
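
A minimal sketch of this setup in PyTorch is shown below: an MLP trained with the resilient propagation (Rprop) optimizer on stand-in data. The feature count, layer sizes and training length are assumptions, not the study's configuration.

```python
# A multilayer perceptron trained with resilient propagation (Rprop) to map
# weather features to the next hour's solar radiation. Random tensors stand
# in for the weather-station measurements.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 8)        # 8 hypothetical weather features per hour
y = torch.rand(1000, 1)         # normalized solar radiation one hour ahead

model = nn.Sequential(
    nn.Linear(8, 16), nn.Tanh(),   # hidden layer size is a free choice
    nn.Linear(16, 1),
)
optimizer = torch.optim.Rprop(model.parameters())  # resilient propagation
loss_fn = nn.MSELoss()

for epoch in range(200):           # Rprop is a full-batch method by design
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")
```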

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.

4422 Optimization of Ethanol Fermentation from Pineapple Peel Extract Using Response Surface Methodology (RSM)

Authors: Nadya Hajar, Zainal, S., Atikah, O., Tengku Elida, T. Z. M.

Abstract:

Ethanol has been known for a long time, being perhaps the oldest product obtained through traditional fermentation biotechnology. Agricultural waste as a fermentation substrate is widely discussed as an alternative to edible food and as a use of organic material. Pineapple peel, a highly promising substrate, is a by-product of the pineapple processing industry. Bio-ethanol fermentation from pineapple (Ananas comosus) peel extract was carried out under controlled conditions without any pretreatment. Saccharomyces ellipsoideus was used as the inoculum in this fermentation process, as it is naturally found on pineapple skin. In this study, the capability of Response Surface Methodology (RSM) to optimize ethanol production from pineapple peel extract using Saccharomyces ellipsoideus in a batch fermentation process was investigated. The effects of five test variables in defined ranges (inoculum concentration 6-14% (v/v), pH 4.0-6.0, sugar concentration 14-22 °Brix, temperature 24-32 °C and incubation time 30-54 hours) on ethanol production were evaluated. Data obtained from the experiments were analyzed with the RSM module of the MINITAB software (Version 15), whereby an optimum ethanol concentration of 8.637% (v/v) was determined at the optimum conditions of 14% (v/v) inoculum concentration, pH 6, 22 °Brix, 26 °C and 30 hours of incubation. A significant regression model at the 5% level, with a correlation value of 99.96%, was also obtained.
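
The sketch below illustrates the RSM idea on synthetic data, fitting a quadratic response surface and locating its optimum; it uses only two of the five factors and invented measurements, so it shows the shape of the analysis rather than reproducing it.

```python
# A minimal RSM sketch: fit a second-order (quadratic) response surface to
# fermentation runs and locate the predicted optimum on a grid. The data are
# synthetic stand-ins, not the paper's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = rng.uniform([4.0, 14.0], [6.0, 22.0], size=(30, 2))  # pH, °Brix
# Synthetic response with an interior optimum plus noise.
y = 8 - (X[:, 0] - 5.5)**2 - 0.05 * (X[:, 1] - 20)**2 + rng.normal(0, 0.1, 30)

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Evaluate the fitted surface on a grid and report the predicted optimum.
ph, brix = np.meshgrid(np.linspace(4, 6, 50), np.linspace(14, 22, 50))
grid = np.column_stack([ph.ravel(), brix.ravel()])
pred = model.predict(poly.transform(grid))
best = grid[pred.argmax()]
print(f"predicted optimum: pH {best[0]:.2f}, {best[1]:.1f} °Brix, "
      f"ethanol ~ {pred.max():.2f}% (v/v)")
```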

Keywords: Bio-ethanol, pineapple peel extract, Response Surface Methodology (RSM), Saccharomyces ellipsoideus.

4421 Computational Modeling in Strategic Marketing

Authors: Petr Cernohorsky, Jan Voracek

Abstract:

Well-developed strategic marketing planning is an essential prerequisite for establishing the right and unique competitive advantage. A typical market, however, is a heterogeneous and decentralized structure with natural involvement of individual or group subjectivity and irrationality. These features cannot be fully expressed with one-shot rigorous formal models based on, e.g., mathematics, statistics or empirical formulas. We present an innovative solution, extending the domain of agent-based computational economics towards the concept of hybrid modeling in service provider and consumer markets such as telecommunications. The behavior of the market is described by two classes of agents, consumer and service provider agents, whose internal dynamics are fundamentally different. Customers are rather free multi-state structures, adjusting behavior and preferences quickly in accordance with time and a changing environment. Producers, on the contrary, are traditionally structured companies with comparable internal processes and specific managerial policies. Their business momentum is higher and their immediate reaction possibilities limited. This limitation underlines the importance of proper strategic planning as the main process advising managers in time whether to continue with more or less the same business or to consider the need for future structural changes that would ensure the retention of existing customers or the acquisition of new ones.

Keywords: Agent-based computational economics, hybrid modeling, strategic marketing, system dynamics.

4420 Research Regarding Resistance Characteristics of Biscuits Assortment Using Cone Penetrometer

Authors: G.–A. Constantin, G. Voicu, E.–M. Stefan, P. Tudor, G. Paraschiv, M.–G. Munteanu

Abstract:

In the handling and transport of food products, the products may be subjected to mechanical stresses that can lead to deterioration by deformation, breaking, or crushing. This is the case for biscuits, regardless of their type (gluten-free or sugary) or the added ingredients or flour from which they are made. However, gluten-free biscuits have a higher mechanical resistance to breakage or crushing than easily shattered sugary biscuits (especially those for children). The paper presents the results of an experimental evaluation of the texture of four varieties of commercial biscuits, using a penetrometer equipped with a needle cone at five different additional weights on the cone rod. The biscuit assortments tested in the laboratory were Petit Beurre, Picnic, and Maia (all three manufactured by RoStar, Romania) and Sultani diet biscuits, manufactured by Eti Burcak Sultani (Turkey, in packs of 138 g). For the four varieties of biscuits and the five additional weights (50, 77, 100, 150 and 177 g), the experimental data obtained were subjected to regression analysis in MS Office Excel, using Velon's relationship (h = a∙ln(t) + b). The regression curves were analyzed comparatively to identify possible differences and to highlight the variation of the penetration depth h in relation to the time t. Based on the penetration depth between successive time intervals (every 5 seconds), curves of the variation of penetration speed in relation to time were then drawn. It was found that Velon's law fits the experimental data for all biscuit assortments and all five additional weights. The correlation coefficient R² had values over 0.850 in most of the analyzed cases. The values recorded for the penetration depth were, in general, within 45-55 p.u. (penetrometric units) at an additional mass of 50 g and between 155-168 p.u. at an additional mass of 177 g for Petit Beurre biscuits. For Sultani diet biscuits, the penetration depth was within 32-35 p.u. at an additional weight of 50 g and between 80-114 p.u. at an additional weight of 177 g. The data presented in the paper can be used both by operators on the manufacturing technology flow and by traders of these food products, in order to establish the most efficient parameters of the working regimes (for packaging and handling).
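
The regression itself is a one-line least-squares fit once time is log-transformed. Below is a minimal sketch with invented (t, h) readings for a single biscuit/weight combination.

```python
# Fitting Velon's relationship h = a*ln(t) + b to penetration-depth readings.
# The (t, h) values are invented stand-ins, not the paper's measurements.
import numpy as np

t = np.array([5, 10, 15, 20, 25, 30], dtype=float)      # time, s
h = np.array([30, 38, 43, 46, 49, 51], dtype=float)     # depth, p.u.

# h is linear in ln(t), so ordinary least squares on ln(t) suffices.
a, b = np.polyfit(np.log(t), h, deg=1)
h_fit = a * np.log(t) + b
r2 = 1 - np.sum((h - h_fit)**2) / np.sum((h - h.mean())**2)

print(f"h = {a:.2f}*ln(t) + {b:.2f},  R^2 = {r2:.3f}")

# Penetration speed between successive readings, as in the paper's curves.
speed = np.diff(h) / np.diff(t)    # p.u. per second over each 5 s interval
print("speeds:", np.round(speed, 2))
```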

Keywords: Biscuits resistance/texture, penetration depth, penetration velocity, sharp pin penetrometer.

4419 Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines

Authors: Mona Soliman Habib

Abstract:

This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is biomedical publications, selected for its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun phrase tagging, thereby allowing for applicability across languages. No domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which motivates investigating ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable and practical machine learning solution for real-world problems with large datasets. An initial prototype yields a great improvement in training time at the expense of memory requirements.
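
A minimal sketch of this kind of token classifier is shown below, using a multi-class linear SVM over sparse surface features; the feature template and toy sentence are assumptions for illustration, not the paper's feature set.

```python
# A multi-class linear SVM over high-dimensional sparse token features, with
# no part-of-speech or other language-specific input. Toy tokens and labels.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Surface-level features only (this particular template is an assumption).
def features(tokens, i):
    w = tokens[i]
    return {"word": w.lower(), "is_upper": w[0].isupper(),
            "suffix3": w[-3:], "prev": tokens[i - 1].lower() if i else "<s>"}

tokens = ["The", "p53", "protein", "binds", "DNA", "."]
labels = ["O", "B-PROTEIN", "I-PROTEIN", "O", "B-DNA", "O"]

vec = DictVectorizer()
X = vec.fit_transform([features(tokens, i) for i in range(len(tokens))])
clf = LinearSVC().fit(X, labels)   # one-vs-rest multi-class linear SVM
print(clf.predict(X))
```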

Keywords: Named entity recognition, support vector machines, language independence, bioinformatics.

4418 Purity Monitor Studies in Medium Liquid Argon TPC

Authors: I. Badhrees

Abstract:

This paper is an attempt to describe some of the results found in the course of a study in the field of particle physics. The study consists of two parts: one concerns the measurement of the cross section of the decay of the Z particle into two electrons, and the other deals with the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber.

The first part of the paper concerns results based on the analysis of a data sample containing 8120 ee candidates, reconstructing the mass of the Z particle for each event, where each event has an ee pair with PT(e) > 20 GeV and η(e) < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale. The distribution of the reconstructed Z mass in the data was compared to the Monte Carlo templates, and the total cross section was calculated to be 1432 pb.
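
The reconstruction step rests on the invariant mass of the electron pair. The sketch below computes it from two made-up electron kinematics (units: GeV); the values are illustrative, not taken from the data sample.

```python
# Invariant mass of an e+e- pair from the two electrons' four-momenta.
import math

def four_vector(pt, eta, phi, m=0.000511):
    """Build (E, px, py, pz) from transverse momentum, pseudorapidity, phi."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px**2 + py**2 + pz**2 + m**2)
    return e, px, py, pz

e1 = four_vector(45.0, 0.4, 0.1)    # hypothetical leading electron
e2 = four_vector(43.0, -0.3, 3.0)   # hypothetical subleading electron

E, px, py, pz = (a + b for a, b in zip(e1, e2))
m_ee = math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))
print(f"reconstructed m(ee) = {m_ee:.1f} GeV")  # ~93 GeV here, near m_Z
```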

The second part concerns the Liquid Argon Time Projection Chamber (LAr TPC) and the results of the interaction of a UV laser (Nd:YAG with λ = 266 nm) with LAr, through the study of the multi-photon ionization process as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process of LAr, σe = (1.24 ± 0.10 (stat) ± 0.30 (sys)) × 10⁻⁵⁶ cm⁴.

Keywords: ATLAS, CERN, KACST, LArTPC, Particle Physics.

4417 Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism

Authors: Jehad Al Dallal

Abstract:

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is called the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches when computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably reduces the slicing time and effort required to measure module parallelism and functional cohesion.
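
For orientation, a backward slice is just reverse reachability over dependence edges. The sketch below shows that baseline single-point computation on a tiny invented PDG; the paper's single-traversal algorithm for all slices is not reproduced here.

```python
# Backward slicing on a program dependence graph (PDG): a backward slice
# from a node is everything that can reach it along dependence edges.
from collections import deque

# pdg_reversed[u] = nodes that u depends on (reversed dependence edges).
pdg_reversed = {
    "s4": ["s2", "s3"],   # s4 uses values computed at s2 and s3
    "s3": ["s1"],
    "s2": ["s1"],
    "s1": [],
}

def backward_slice(node):
    """All statements that directly or indirectly influence `node`."""
    seen, queue = {node}, deque([node])
    while queue:
        for dep in pdg_reversed[queue.popleft()]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(backward_slice("s4")))   # ['s1', 's2', 's3', 's4']
```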

Keywords: Backward slicing, cohesion measure, forward slicing, parallelism measure, program dependence graph, program slicing, static slicing.

4416 An FPGA Implementation of Intelligent Visual Based Fall Detection

Authors: Peng Shen Ong, Yoong Choon Chang, Chee Pun Ooi, Ettikan K. Karuppiah, Shahirina Mohd Tahir

Abstract:

Falling has been one of the major concerns and threats to the independence of the elderly in their daily lives. With the significant worldwide growth of the aging population, it is essential to have a promising fall detection solution that operates at high accuracy in real time and supports large-scale implementation using multiple cameras. The Field Programmable Gate Array (FPGA) is a highly promising tool for use as a hardware accelerator in many emerging embedded vision-based systems. Thus, the main objective of this paper is to present an FPGA-based solution for visual fall detection that meets stringent real-time requirements with high accuracy. A hardware architecture for visual fall detection that utilizes pixel locality to reduce memory accesses is proposed. By exploiting the parallel and pipelined architecture of the FPGA, our hardware implementation of visual fall detection achieves a performance of 60 fps for a series of video analytics functions at VGA resolution (640x480). The results of this work show that FPGAs have great potential and impact in enabling large-scale vision systems in the future healthcare industry due to their flexibility and scalability.

Keywords: Fall detection, FPGA, hardware implementation.

4415 Nonlinear Transformation of Laser Generated Ultrasonic Pulses in Geomaterials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus "GEOSCAN-02M". Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ; the energy can be reduced to 20 mJ by light filters. The laser beam radius does not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. An immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression phase and a rarefaction phase, the amplitude of the rarefaction phase being five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cube-shaped specimens of Karelian gabbro are studied, with a rib length of 3 cm and an ultimate strength under uniaxial compression of (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction phase increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structural defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks. As the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a 2.5-fold decrease of the amplitude of the rarefaction phase and a 2.1-fold increase of its duration. As the reference pulse amplitude is increased from 5 MPa to 10 MPa, a time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.

Keywords: Cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock.

4414 An Experimental Study on the Optimum Installation of Fire Detector for Early Stage Fire Detecting in Rack-Type Warehouses

Authors: Ki Ok Choi, Sung Ho Hong, Dong Suck Kim, Don Mook Choi

Abstract:

Rack-type warehouses differ from general buildings in the kinds, amounts, and arrangement of stored goods, so their fire risk is different from that of such buildings. The fire pattern of rack-type warehouses differs in the combustion characteristics and storage conditions of the stored goods. The initial burning rate depends on the surface condition of the materials, but the spread of fire is closely related to the kinds of stored materials and the storage conditions. The stored goods of a warehouse consist of diverse combustibles, combustible liquids, and so on. Fire detection may be delayed because there are fewer occupants than in office and commercial buildings. If the fire detectors installed in rack-type warehouses are unsuitable, a warehouse fire may grow into a major fire because of delayed detection. In this paper, we studied which kinds of fire detectors are best suited to early detection of rack-type warehouse fires through real-scale fire tests. The fire detectors used in the tests were rate-of-rise, fixed-temperature, photoelectric, and aspirating detectors. We propose an optimum fire detection method for rack-type warehouses based on the response characteristics and a comparative analysis of the fire detectors.

Keywords: Fire detector, rack, response characteristic, warehouse.

4413 A Neural Network Control for Voltage Balancing in Three-Phase Electric Power System

Authors: Dana M. Ragab, Jasim A. Ghaeb

Abstract:

The three-phase power system suffers from different challenging problems, e.g. voltage unbalance conditions at the load side. Voltage unbalance usually degrades the power quality of the electric power system. Several techniques can be considered for load balancing, including load reconfiguration, the static synchronous compensator and the static reactive power compensator. In this work, an efficient neural network is designed to control the unbalanced condition in the Aqaba-Qatrana-South Amman (AQSA) electric power system, aiming at a highly enhanced response time of the reactive compensator for voltage balancing. The neural network is developed to determine the appropriate set of firing angles required for the thyristor-controlled reactor to balance the three load voltages accurately and quickly. The parameters of the AQSA power system are considered in a laboratory model, and several test cases were conducted to test and validate the capabilities of the proposed technique. The results show a high performance of the proposed Neural Network Control (NNC) technique for correcting voltage unbalance conditions at a three-phase load, in terms of accuracy and response time.
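
The controlled quantity can be summarized by the voltage unbalance factor (VUF) obtained from the Fortescue transform. The sketch below computes it for illustrative phase voltages; the phasor values are assumptions, not AQSA measurements.

```python
# Voltage unbalance factor (VUF): the ratio of negative- to positive-
# sequence voltage from the Fortescue (symmetrical components) transform.
import numpy as np

a = np.exp(2j * np.pi / 3)                 # 120-degree rotation operator
Vabc = np.array([230 * np.exp(0j),         # slightly unbalanced phasors (V)
                 225 * np.exp(-1j * 2 * np.pi / 3 + 0.02j),
                 238 * np.exp( 1j * 2 * np.pi / 3 - 0.03j)])

V_pos = (Vabc[0] + a * Vabc[1] + a**2 * Vabc[2]) / 3   # positive sequence
V_neg = (Vabc[0] + a**2 * Vabc[1] + a * Vabc[2]) / 3   # negative sequence
vuf = 100 * abs(V_neg) / abs(V_pos)
print(f"VUF = {vuf:.2f} %")   # balancing aims to drive this toward zero
```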

Keywords: Three-phase power system, reactive power control, voltage unbalance factor, neural network, power quality.

4412 Data Recording for Remote Monitoring of Autonomous Vehicles

Authors: Rong-Terng Juang

Abstract:

Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars might not arrive in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks for the remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space; upon arrival, for controller area network (CAN) traffic, the new data are mapped onto a time-data two-dimensional space associated with the specific CAN identity. Secondly, the data are sampled based on differential sampling. Finally, the whole set of data is encoded using existing algorithms such as Huffman, arithmetic and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
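
A minimal sketch of the second and third stages, differential sampling followed by entropy coding, is shown below for a single toy CAN signal; the values and the simple Huffman helper are illustrative, not the recorder's implementation.

```python
# Differential sampling plus Huffman coding on one CAN signal. A real
# recorder would first group frames by CAN identity, as described above.
import heapq
from collections import Counter

signal = [100, 100, 101, 101, 101, 102, 101, 101, 100, 100]  # toy samples
deltas = [signal[0]] + [b - a for a, b in zip(signal, signal[1:])]
# Deltas concentrate the distribution around 0, which entropy coding rewards.

def huffman_code_lengths(symbols):
    """Return {symbol: code length} for a Huffman code over `symbols`."""
    heap = [[freq, [[s, 0]]] for s, freq in Counter(symbols).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1] + hi[1]:
            pair[1] += 1                      # one level deeper in the tree
        heapq.heappush(heap, [lo[0] + hi[0], lo[1] + hi[1]])
    return {s: max(length, 1) for s, length in heap[0][1]}

lengths = huffman_code_lengths(deltas)
bits = sum(lengths[d] for d in deltas)
print(f"deltas: {deltas}")
print(f"Huffman-coded size: {bits} bits vs {len(signal) * 8} bits raw")
```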

Keywords: Autonomous vehicle, data recording, remote monitoring, controller area network.

4411 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions using the wavelet transform is an approach to reducing computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since earthquake strong ground motion is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtering process may be repeated several times, though each repetition makes the approximation induce more error. In this paper, the strong ground motion has been filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sampled shear and moment frames is implemented. The error for each wavelet is computed by comparing the dynamic response of the sampled structures with the exact responses, which are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
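
One filtering pass might look like the sketch below, which uses PyWavelets to zero the finest-scale detail band of a synthetic record; the wavelet, decomposition level and record are assumptions, not the paper's choices.

```python
# One pass of discrete-wavelet filtering: decompose a ground-motion record,
# drop the finest-scale (highest-frequency) detail coefficients, reconstruct.
import numpy as np
import pywt

dt = 0.02                                    # sampling interval, s
t = np.arange(0, 20, dt)
accel = (np.sin(2 * np.pi * 1.5 * t) +       # dominant low-frequency content
         0.3 * np.sin(2 * np.pi * 18 * t))   # high-frequency content to drop

coeffs = pywt.wavedec(accel, "db4", level=3)
coeffs[-1] = np.zeros_like(coeffs[-1])       # zero the finest detail band
filtered = pywt.waverec(coeffs, "db4")[: len(accel)]

# The filtered record keeps the response-relevant low frequencies while
# reducing the bandwidth the time-history analysis must resolve.
print("RMS original:", round(float(np.sqrt(np.mean(accel**2))), 3))
print("RMS filtered:", round(float(np.sqrt(np.mean(filtered**2))), 3))
```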

Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.

4410 Multi-Factor Optimization Method through Machine Learning in Building Envelope Design: Focusing on Perforated Metal Façade

Authors: Jinwooung Kim, Jae-Hwan Jung, Seong-Jun Kim, Sung-Ah Kim

Abstract:

Because the building envelope has a significant impact on the operation and maintenance stage of a building, designing the facade with performance in mind can improve the building's performance and lower its maintenance cost. In general, however, optimizing two or more performance factors confronts the limits of time and computational tools. The optimization phase typically repeats until the series of processes that generate alternatives and analyze them achieves the desired performance. In particular, as geometric complexity or precision increases, the computational resources and time needed to reach the required performance become prohibitive, so an optimization methodology is needed to deal with this. Instead of directly analyzing all the alternatives in the optimization process, applying heuristics learned through experimentation and experience can reduce resource waste. This study proposes and verifies a method to optimize a double building envelope composed of perforated panels by applying machine learning to the design geometry and its quantitative performance. The proposed method achieves the required performance with fewer resources by supplementing the existing approach, which cannot handle the complex shape of the perforated panel.

Keywords: Building envelope, machine learning, perforated metal, multi-factor optimization, façade.

4409 Suppression of Narrowband Interference in Impulse Radio Based High Data Rate UWB WPAN Communication System Using NLOS Channel Model

Authors: Bikramaditya Das, Susmita Das

Abstract:

The suppression of interference in time domain equalizers is studied for a high data rate impulse radio (IR) ultra wideband (UWB) communication system. Narrowband systems may interfere with UWB devices, since UWB transmits at very low power over a large bandwidth. The SRake receiver improves system performance by equalizing signals from different paths, which enables the use of SRake receiver techniques in IR-UWB systems. But a Rake receiver alone fails to suppress narrowband interference (NBI). A hybrid SRake-MMSE time domain equalizer is proposed to overcome this by taking into account both the number of Rake fingers and the number of equalizer taps; it also combats intersymbol interference. A semi-analytical approach and Monte Carlo simulation are used to investigate the BER performance of the SRake-MMSE receiver on IEEE 802.15.3a UWB channel models. The study of non-line-of-sight indoor channel models (both CM3 and CM4) illustrates that the bit error rate performance of the SRake-MMSE receiver with NBI is better than that of the Rake receiver without NBI. We show that for an MMSE equalizer operating at high SNRs, the number of equalizer taps plays the more significant role in suppressing interference.
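
For reference, the MMSE tap computation at the core of such a hybrid receiver has the standard closed form w = R⁻¹p. The sketch below evaluates it for a short, invented effective channel; it is the textbook MMSE step, not the paper's SRake-MMSE receiver.

```python
# Linear MMSE equalizer taps for a short effective channel (as seen after
# Rake combining). Channel taps and noise variance are illustrative, not
# drawn from the 802.15.3a models.
import numpy as np

h = np.array([1.0, 0.45, 0.2])     # effective channel impulse response
L = len(h)
n_w = 7                            # number of equalizer taps
n_s = n_w + L - 1                  # symbols spanning the received window
sigma2 = 0.05                      # noise variance (unit-power symbols)

# H maps the symbol block s to the n_w received samples the equalizer sees:
# y = H @ s + noise, with H[i, j] = h[i - j + L - 1] where defined.
H = np.zeros((n_w, n_s))
for i in range(n_w):
    for j in range(n_s):
        k = i - j + L - 1
        if 0 <= k < L:
            H[i, j] = h[k]

d = n_s // 2                                 # desired symbol delay
R = H @ H.T + sigma2 * np.eye(n_w)           # received autocorrelation
p = H[:, d]                                  # cross-correlation with s_d
w = np.linalg.solve(R, p)                    # MMSE taps: w = R^{-1} p
print("MMSE equalizer taps:", np.round(w, 3))
```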

Keywords: IR-UWB, UWB, IEEE 802.15.3a, NBI, data rate, bit error rate.

4408 Two Scenarios for Ultra-Light Overhead Conveyor System in Logistics Applications

Authors: Batin Latif Aylak, Bernd Noche

Abstract:

Overhead conveyor systems are in use in many installations around the world, meeting the widest possible range of applications. They are particularly popular in the automotive industry but are also used at post offices. Overhead conveyor systems must always be integrated with a logistical process to find the best way to achieve a cheaper material flow and to guarantee precise and fast workflows. With their help, any transport can take place without wasting ground space, without excess company capacity, without lost or damaged products, erroneous deliveries or endless travel, and without wasting time. Ultra-light overhead conveyor systems are rope-based conveying systems with individually driven vehicles. The vehicles can move automatically on the rope, with power and signals carried along it, and crossings are realized by switches. Ultra-light overhead conveyor systems provide an optimal material flow, which produces profit and saves time. This article introduces two new ultra-light overhead conveyor designs for logistics and explains their components. Based on the technical characteristics of the components, scenarios are created and visualized with CAD software. Assumptions are then made for the application area, and the scenarios are visualized according to these assumptions. These scenarios help logistics companies achieve lower development costs as well as quicker market maturity.

Keywords: Logistics, material flow, overhead conveyor.

4407 Jeffrey's Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound

Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki

Abstract:

This paper employs the Jeffrey's prior technique to estimate the periodogram and frequency of a sinusoidal model for unknown noisy time-varying or oscillating events (data) in a Bayesian setting. The non-informative Jeffrey's prior was adopted for the posterior trigonometric function of the sinusoidal model, and Cramer-Rao Lower Bound (CRLB) inference was used to carve out the minimum variance needed to curb the invariance-structure effect for unknown noisy time observations and repeated circular patterns. An average monthly oscillating temperature series measured in degrees Celsius (°C) from 1901 to 2014 was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a two-minute period is required to complete a cycle of changing temperature from one degree Celsius to another, but also that the sinusoidal model via the CRLB-Jeffrey's prior for unknown noisy events produced a smaller posterior Maximum A Posteriori (MAP) estimate than for known noisy events.
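
A minimal sketch of the MCMC step is given below: it fits the frequency of a sinusoidal model by random-walk Metropolis with a flat prior and a known noise level, standing in for the paper's CRLB-Jeffrey's prior treatment; the data are synthetic, not the 1901-2014 temperature series.

```python
# Random-walk Metropolis over the frequency w of y = A*sin(wt) + B*cos(wt)
# + noise. A flat prior and known noise level are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100.0)
w_true, sigma = 0.35, 0.5
y = (2.0 * np.sin(w_true * t) + 1.0 * np.cos(w_true * t)
     + rng.normal(0, sigma, t.size))

def log_post(w):
    """Log-posterior of w (flat prior), with the amplitudes profiled out
    by least squares, leaving the residual-based log-likelihood."""
    if not 0.0 < w < np.pi:
        return -np.inf
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return -0.5 * resid @ resid / sigma**2

w, samples = 0.33, []               # start near, but not at, the truth
lp = log_post(w)
for _ in range(5000):
    w_new = w + rng.normal(0, 0.02)  # random-walk proposal
    lp_new = log_post(w_new)
    if np.log(rng.uniform()) < lp_new - lp:
        w, lp = w_new, lp_new
    samples.append(w)

post = np.array(samples[1000:])      # discard burn-in
print(f"posterior mean w = {post.mean():.4f} (true {w_true})")
```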

Keywords: Cramer-Rao Lower Bound (CRLB), Jeffrey's prior, Sinusoidal, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), Periodograms.

4406 Comparative Parametric Analysis on the Dynamic Response of Fibre Composite Beams with Debonding

Authors: Indunil Jayatilake, Warna Karunasena

Abstract:

Fiber Reinforced Polymer (FRP) composites enjoy an array of applications ranging from aerospace, marine and military to the automobile, recreational and civil industries due to their outstanding properties. A structural glass fiber reinforced polymer (GFRP) composite sandwich panel made from E-glass fiber skins and a modified phenolic core has been manufactured in Australia for civil engineering applications. One of the major damage mechanisms in FRP composites is skin-core debonding. The presence of debonding is of great concern not only because it severely affects the strength but also because it modifies the dynamic characteristics of the structure, including the natural frequencies and vibration modes. This paper investigates the dynamic characteristics of a GFRP beam with single and multiple debonding by finite element based numerical simulations and analyses using the STRAND7 finite element (FE) software package. Three-dimensional computer models were developed and numerical simulations were performed to assess the dynamic behavior. The FE model developed has been validated against published experimental, analytical and numerical results for fully bonded as well as debonded beams. A comparative analysis is carried out based on a comprehensive parametric investigation. It is observed that the reduction in natural frequency is affected more by a single debonding than by equally sized multiple debonding regions located symmetrically about the single debonding position. Thus, a large single debonded area leads to more damage, in terms of natural frequency reduction, than isolated small debonded zones of equivalent area appearing in the GFRP beam. Furthermore, the extent of the natural frequency shifts appears mode-dependent and does not increase monotonically with mode number.
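
The underlying computation is the generalized eigenproblem K·φ = ω²·M·φ. The sketch below assembles a small Euler-Bernoulli cantilever, crudely modeling a debonded zone as a local stiffness reduction, and reports the first natural frequencies; all property values and the 50% stiffness knockdown are invented, and this is not the STRAND7 model.

```python
# Natural frequencies of a beam from K*phi = omega^2 * M*phi, with a
# debonded zone modeled as reduced bending stiffness over a few elements.
import numpy as np
from scipy.linalg import eigh

E, I, rho, A, L, n_el = 30e9, 2e-6, 1800.0, 0.003, 1.0, 20   # assumed values
le = L / n_el
ndof = 2 * (n_el + 1)                      # deflection + rotation per node

def beam_matrices(EI, le):
    """Standard Euler-Bernoulli element stiffness and consistent mass."""
    k = EI / le**3 * np.array([[12, 6*le, -12, 6*le],
                               [6*le, 4*le**2, -6*le, 2*le**2],
                               [-12, -6*le, 12, -6*le],
                               [6*le, 2*le**2, -6*le, 4*le**2]])
    m = rho*A*le/420 * np.array([[156, 22*le, 54, -13*le],
                                 [22*le, 4*le**2, 13*le, -3*le**2],
                                 [54, 13*le, 156, -22*le],
                                 [-13*le, -3*le**2, 22*le, 4*le**2]])
    return k, m

K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
debonded = range(8, 12)                    # elements with reduced stiffness
for e in range(n_el):
    EI = E * I * (0.5 if e in debonded else 1.0)   # crude debonding model
    k, m = beam_matrices(EI, le)
    dofs = slice(2 * e, 2 * e + 4)
    K[dofs, dofs] += k
    M[dofs, dofs] += m

free = np.arange(2, ndof)                  # clamp node 0 (cantilever)
w2 = eigh(K[np.ix_(free, free)], M[np.ix_(free, free)], eigvals_only=True)
freqs = np.sqrt(np.abs(w2[:4])) / (2 * np.pi)
print("first natural frequencies (Hz):", np.round(freqs, 2))
```

Rerunning with `debonded = range(0, 0)` gives the fully bonded baseline, so the frequency shift per mode can be compared directly, mirroring the parametric study described above.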

Keywords: Debonding, dynamic response, finite element modelling, FRP beams.
