Search results for: and processing time.
4467 Technology for Enhancing the Learning and Teaching Experience in Higher Education
Authors: Sara M. Ismael, Ali H. Al-Badi
Abstract:
The rapid development and growth of technology have changed the way educators and learners obtain information. Technology has created a new world of collaboration and communication among people. Incorporating new technology into the teaching process can enhance learning outcomes. Billions of individuals across the world are now connected, cooperating and contributing their knowledge and intelligence. Time is no longer wasted waiting until the teacher is ready to share information, as learners can go online and get it immediately.
The objectives of this paper are to understand the reasons why changes in teaching and learning methods are necessary, to find ways of improving them, and to investigate the challenges that present themselves in the adoption of new ICT tools in higher education institutes.
To achieve these objectives, two primary research methods were used: questionnaires distributed among students at higher education institutes, and multiple interviews with faculty members (teachers) from different colleges and universities, conducted to find out why teaching and learning methodology should change.
The findings show that both learners and educators agree that educational technology plays a significant role in enhancing instructors’ teaching style and students’ overall learning experience; however, time constraints, privacy issues, and not being provided with enough up-to-date technology do create some challenges.
Keywords: E-books, educational technology, educators, e-learning, learners, social media, Web 2.0, LMS.
4466 PhilSHORE: Development of a WebGIS-Based Marine Spatial Planning Tool for Tidal Current Energy Resource Assessment and Site Suitability Analysis
Authors: Ma. Rosario Concepcion O. Ang, Luis Caezar Ian K. Panganiban, Charmyne B. Mamador, Oliver Dan G. De Luna, Michael D. Bausas, Joselito P. Cruz
Abstract:
PhilSHORE is a multi-site, multi-device and multi-criteria decision support tool designed to support the development of tidal current energy in the Philippines. Its platform is based on Geographic Information Systems (GIS), which allows for the collection, storage, processing, analysis and display of geospatial data. Combining GIS tools with open source web development applications, PhilSHORE becomes a webGIS-based marine spatial planning tool. To date, PhilSHORE displays output maps and graphs of power and energy density, site suitability and site-device analysis. It gives stakeholders and the public easy access to the results of tidal current energy resource assessments and site suitability analyses. Results of the initial development show that PhilSHORE is a promising decision support tool for ORE project developments.
Keywords: GIS, Site Suitability Analysis, Tidal Current Energy Resource Assessment, WebGIS.
4465 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technology, and specific procedures used in the digital investigation are not keeping up with criminal developments; criminals are therefore taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.
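As an illustration of the kind of check a hash-based agent performs, the minimal Python sketch below compares file digests from an evidence directory against a set of known hashes. The directory path, the example hash set and the function names are hypothetical and are not taken from the MADIK framework itself.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_set_scan(evidence_dir: str, known_hashes: set[str]) -> list[Path]:
    """Flag every file under the evidence directory whose hash appears in the known set."""
    hits = []
    for path in Path(evidence_dir).rglob("*"):
        if path.is_file() and sha256_of(path) in known_hashes:
            hits.append(path)
    return hits

# Hypothetical usage: scan a mounted copy of a forensic image against one known digest.
# known = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
# print(hash_set_scan("/mnt/evidence_image", known))
```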
4464 Assessment of Energy Use and Energy Efficiency in Two Portuguese Slaughterhouses
Authors: M. Feliciano, F. Rodrigues, A. Gonçalves, J. M. R. C. A. Santos, V. Leite
Abstract:
With the objective of characterizing the profile and performance of energy use by slaughterhouses, surveys and audits were performed in two different facilities located in the northeastern region of Portugal. Energy consumption from multiple energy sources was assessed monthly, along with production and costs, for the same reference year. The gathered data were analyzed to identify and quantify the main consuming processes and to estimate energy efficiency indicators for benchmarking purposes. The main results show differences between the two slaughterhouses concerning energy sources, consumption by source and sector, and global energy efficiency. Electricity is the most used source in both slaughterhouses, with a contribution of around 50%, being essentially used for meat processing and refrigeration. Natural gas (in slaughterhouse A) and pellets (in slaughterhouse B), used for heating water, take second place, with a mean contribution of about 45%. On average, a specific energy consumption (SEC) of 62 kgoe/t was found, although with differences between slaughterhouses. A prominent negative correlation between SEC and carcass production was found, especially in slaughterhouse A. The estimated specific energy cost and greenhouse gas intensity (GHGI) show mean values of about 50 €/t and 1.8 tCO2e/toe, respectively. Overall, there is a significant margin for improving energy efficiency, and therefore lowering costs, in this type of non-energy-intensive industry.
Keywords: Meat industry, energy intensity, energy efficiency, GHG emissions.
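To make the benchmarking indicators concrete, the short sketch below computes specific energy consumption, specific energy cost and GHG intensity from annual totals. All input figures and emission factors are invented placeholders for illustration, not the audited values from the two slaughterhouses.

```python
# Illustrative energy-efficiency indicators for a slaughterhouse audit.
# All input numbers are made-up placeholders, not the audited data.
energy_toe = {"electricity": 95.0, "natural_gas": 80.0}     # toe/year by source
emission_factor = {"electricity": 1.9, "natural_gas": 2.3}  # tCO2e per toe (assumed)
energy_cost_eur = 175_000.0                                  # EUR/year
carcass_production_t = 2_800.0                               # t/year

total_toe = sum(energy_toe.values())
sec_kgoe_per_t = 1000.0 * total_toe / carcass_production_t          # kgoe/t
specific_cost_eur_per_t = energy_cost_eur / carcass_production_t    # EUR/t
ghg_tco2e = sum(energy_toe[s] * emission_factor[s] for s in energy_toe)
ghgi_tco2e_per_toe = ghg_tco2e / total_toe                           # tCO2e/toe

print(f"SEC  = {sec_kgoe_per_t:.1f} kgoe/t")
print(f"Cost = {specific_cost_eur_per_t:.1f} EUR/t")
print(f"GHGI = {ghgi_tco2e_per_toe:.2f} tCO2e/toe")
```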
4463 Performance of Compound Enhancement Algorithms on Dental Radiograph Images
Authors: S.A.Ahmad, M.N.Taib, N.E.A.Khalid, R.Ahmad, H.Taib
Abstract:
The purpose of this research is to compare original intra-oral digital dental radiograph images with images that are enhanced using a combination of image processing algorithms. Intra-oral digital dental radiograph images are often noisy, have blurred edges and are low in contrast. A combination of sharpening and enhancement methods is used to overcome these problems. The three proposed compound algorithms are Sharp Adaptive Histogram Equalization (SAHE), Sharp Median Adaptive Histogram Equalization (SMAHE) and Sharp Contrast-Limited Adaptive Histogram Equalization (SCLAHE). This paper presents an initial study of the perception of six dentists on the details of abnormal pathologies and the improvement of image quality in ten intra-oral radiographs. The research focuses on the detection of only three types of pathology: periapical radiolucency, widened periodontal ligament space and loss of lamina dura. The overall result shows that SCLAHE slightly improves the appearance of dental abnormalities over the original image and also outperforms the other two proposed compound algorithms.
Keywords: Intra-oral dental radiograph, histogram equalization, sharpening, CLAHE.
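A minimal OpenCV sketch of the general sharpen-then-CLAHE idea is given below; it is not the authors' SCLAHE implementation, and the blur kernel, clip limit and tile size are assumptions chosen only for illustration.

```python
import cv2
import numpy as np

def sharpen_then_clahe(gray: np.ndarray) -> np.ndarray:
    """Unsharp-mask a grayscale radiograph, then apply CLAHE (illustrative only)."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Unsharp masking: boost the original by subtracting a blurred copy.
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(sharpened)

# img = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
# enhanced = sharpen_then_clahe(img)
```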
4462 Monte Carlo and Biophysics Analysis in a Criminal Trial
Authors: Luca Indovina, Carmela Coppola, Carlo Altucci, Riccardo Barberi, Rocco Romano
Abstract:
This paper considers a real court case, held in Italy at the Court of Nola, in which a correct physical description, based on both Monte Carlo and biophysical analyses, would have been sufficient to arrive at conclusions confirmed by documentary evidence. It is an example of how forensic physics can be useful in confirming documentary evidence in order to reach hardly questionable conclusions. This was a libel trial in which the defendant, Mr. DS (Defendant for Slander), had falsely accused one of his neighbors, Mr. OP (Offended Person), of having caused him some damage. The damage would have been caused by an external plaster piece that would have detached from the neighbor's property and would have hit Mr. DS while he was in his garden, well over a meter away from the facade of the building from which the plaster piece would have detached. In the trial, Mr. DS claimed to have suffered a scratch on his forehead, but he never showed the plaster that had hit him, nor was he able to tell from where the plaster would have arrived. Furthermore, Mr. DS presented a medical certificate with a diagnosis of contusion of the cerebral cortex. On the contrary, the images of Mr. OP's security cameras do not show any movement in the garden of Mr. DS in a long interval of time (about 2 hours) around the time of the alleged accident, nor do they show any people entering or leaving the house of Mr. DS in the same interval of time. Biophysical analysis shows that both the diagnosis on the medical certificate and the wound declared by the defendant, already in conflict with each other, are not compatible with the fall of external plaster pieces too small to be found. The wind was at level 1 on the Beaufort scale, that is, unable to raise even dust (which requires level 4 on the Beaufort scale). Therefore, the motion of the plaster pieces can be described as projectile motion, whereas collisions with the building cornice can be treated using Newton's law of restitution. Numerous Monte Carlo simulations show that the pieces of plaster would not have been able to reach even the garden of Mr. DS, let alone a distance over 1.30 meters. The results agree with the documentary evidence (images of Mr. OP's security cameras) that Mr. DS could not have been hit by plaster pieces coming from Mr. OP's property.
Keywords: Biophysical analysis, Monte Carlo simulations, Newton's law of restitution, projectile motion.
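The projectile-motion reasoning can be reproduced with a few lines of Monte Carlo sampling. The sketch below draws random detachment heights and small horizontal speeds for a falling plaster fragment and counts how often it travels farther than 1.30 m; all parameter ranges are invented for illustration and are not the values used in the expert report, and the sketch omits the cornice bounces treated with Newton's law of restitution.

```python
import numpy as np

rng = np.random.default_rng(0)
g = 9.81          # gravitational acceleration, m/s^2
n = 100_000       # number of simulated fragments

# Hypothetical ranges: detachment height and initial horizontal speed of a fragment
# (wind at Beaufort level 1 gives only a very small horizontal push).
h0 = rng.uniform(3.0, 6.0, n)      # m, height of the detachment point on the facade
vx = rng.uniform(0.0, 0.5, n)      # m/s, horizontal speed at detachment

t_fall = np.sqrt(2.0 * h0 / g)     # fall time under pure projectile motion
x_range = vx * t_fall              # horizontal distance travelled before landing

frac = np.mean(x_range > 1.30)
print(f"Fraction of simulated fragments landing beyond 1.30 m: {frac:.4%}")
```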
4461 Centralized Monitoring and Self-protected against Fiber Fault in FTTH Access Network
Authors: Mohammad Syuhaimi Ab-Rahman, Boonchuan Ng, Kasmiran Jumari
Abstract:
This paper presents a new approach for centralized monitoring and self-protection against fiber faults in fiber-to-the-home (FTTH) access networks using Smart Access Network Testing, Analyzing and Database (SANTAD). SANTAD is installed with the optical line terminal (OLT) at the central office (CO) for in-service transmission surveillance and fiber fault localization within FTTH with a point-to-multipoint (P2MP) configuration, downward from the CO towards customer residential locations, based on the graphical user interface (GUI) processing capabilities of MATLAB software. SANTAD is able to detect any fiber fault as well as identify the failure location in the network system. SANTAD enables the status of each optical network unit (ONU) connected line to be displayed on one screen, with the capability to configure the attenuation and detect failures simultaneously. The analysis results and information are delivered to the field engineer for prompt action, while the failed line is diverted to a protection line to ensure that traffic continues to flow. This approach has a bright prospect of improving survivability and reliability as well as increasing the efficiency and monitoring capabilities of FTTH.
Keywords: Fiber fault, FTTH, SANTAD, transmission surveillance, MATLAB.
4460 Semantic Modeling of Management Information: Enabling Automatic Reasoning on DMTF-CIM
Authors: Fernando Alonso, Rafael Fernandez, Sonia Frutos, Javier Soriano
Abstract:
CIM is the standard formalism for modeling management information developed by the Distributed Management Task Force (DMTF) in the context of its WBEM proposal, designed to provide a conceptual view of the managed environment. In this paper, we propose the inclusion of formal knowledge representation techniques, based on Description Logics (DLs) and the Web Ontology Language (OWL), in CIM-based conceptual modeling, and then we examine the benefits of such a decision. The proposal is specified as a CIM metamodel level mapping to a highly expressive subset of DLs capable of capturing all the semantics of the models. The paper shows how the proposed mapping can be used for automatic reasoning about the management information models, as a design aid, by means of new-generation CASE tools, thanks to the use of state-of-the-art automatic reasoning systems that support the proposed logic and use algorithms that are sound and complete with respect to the semantics. Such a CASE tool framework has been developed by the authors and its architecture is also introduced. The proposed formalization is not only useful at design time, but also at run time through the use of rational autonomous agents, in response to a need recently recognized by the DMTF.
Keywords: CIM, Knowledge-based Information Models, Ontology Languages, OWL, Description Logics, Integrated Network Management, Intelligent Agents, Automatic Reasoning Techniques.
4459 Efficient High Fidelity Signal Reconstruction Based on Level Crossing Sampling
Authors: Negar Riazifar, Nigel G. Stocks
Abstract:
This paper proposes strategies in level crossing (LC) sampling and reconstruction that provide high fidelity signal reconstruction for speech signals; these strategies circumvent the problem of an exponentially increasing number of samples as the bit-depth is increased and hence are highly efficient. Specifically, the results indicate that the distribution of the intervals between samples is one of the key factors in the quality of signal reconstruction; including samples with short intervals does not improve the accuracy of the signal reconstruction, whilst samples with large intervals lead to numerical instability. The proposed sampling method, termed reduced conventional level crossing (RCLC) sampling, exploits redundancy between samples to improve the efficiency of the sampling without compromising performance. A reconstruction technique is also proposed that enhances the numerical stability through linear interpolation of samples separated by large intervals. Interpolation is demonstrated to improve the accuracy of the signal reconstruction in addition to the numerical stability. We further demonstrate that the RCLC and interpolation methods can give useful levels of signal recovery even if the average sampling rate is less than the Nyquist rate.
Keywords: Level crossing sampling, numerical stability, speech processing, trigonometric polynomial.
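A minimal NumPy sketch of plain level-crossing sampling with linear-interpolation reconstruction is shown below. It illustrates the general principle only; it does not implement the RCLC redundancy reduction proposed in the paper, and the test signal and 4-bit level grid are assumptions.

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    """Record (time, level) pairs where the signal crosses any quantization level."""
    samples = []
    for lv in levels:
        above = (x >= lv).astype(np.int8)
        idx = np.flatnonzero(np.diff(above))           # indices where a crossing occurs
        for i in idx:
            # Linear interpolation of the crossing instant between t[i] and t[i+1].
            frac = (lv - x[i]) / (x[i + 1] - x[i])
            samples.append((t[i] + frac * (t[i + 1] - t[i]), lv))
    samples.sort()
    return np.array(samples)

t = np.linspace(0, 1, 8000)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)   # toy test signal
levels = np.linspace(-1.3, 1.3, 16)                                 # assumed 4-bit grid

s = level_crossing_sample(t, x, levels)
x_rec = np.interp(t, s[:, 0], s[:, 1])      # reconstruction by linear interpolation
print("RMS reconstruction error:", np.sqrt(np.mean((x - x_rec) ** 2)))
```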
4458 Development and Characterization of Wheat Bread with Lupin Flour
Authors: Paula M. R. Correia, Marta Gonzaga, Luis M. Batista, Luísa Beirão-Costa, Raquel F. P. Guiné
Abstract:
The purpose of the present work was to develop an innovative food product with good textural and sensorial characteristics. The product, a new type of bread, was prepared with wheat (90%) and lupin (10%) flours, without the addition of any preservatives. Several experiments were also carried out to find the most appropriate proportion of lupin flour. The optimized product was characterized considering its rheological, physical-chemical and sensorial properties. The water absorption of wheat flour with 10% lupin was higher than that of the normal wheat flours, and Wheat Ceres flour presented the lower value, with a lower dough development time and a high stability time. The breads presented low moisture but considerable water activity. The density of the bread decreased with the introduction of lupin flour. The breads were quite white, and during storage the colour parameters decreased. The lupin flour clearly increased the number of alveoli, but the total area increased significantly only for the Wheat Cerealis bread. The addition of lupin flour increased the hardness and chewiness of the breads, but the elasticity did not vary significantly. Lupin bread was sensorially similar to wheat bread produced with WCerealis flour, the main differences being crust rugosity, colour and alveoli characteristics.
Keywords: Lupin flour, physical-chemical properties, sensorial analysis, wheat flour.
4457 Anonymous Editing Prevention Technique Using Gradient Method for High-Quality Video
Authors: Jiwon Lee, Chanho Jung, Si-Hwan Jang, Kyung-Ill Kim, Sanghyun Joo, Wook-Ho Son
Abstract:
Since advances in digital imaging technologies have led to the development of high-quality digital devices, there are a lot of illegal copies of copyrighted video content on the Internet, and unauthorized editing occurs frequently. Thus, we propose an editing prevention technique for high-quality (HQ) video that can prevent these illegally edited copies from spreading. The proposed technique applies spatial and temporal gradient methods to improve fidelity and detection performance. Also, the scheme duplicates the embedding signal temporally to alleviate the signal reduction caused by geometric and signal-processing distortions. Experimental results show that the proposed scheme achieves better performance than previously proposed schemes and has high fidelity. The proposed scheme can be used in unauthorized-access prevention for visual communication or in traitor-tracking applications, which need a fast detection process to prevent illegally edited video content from spreading.
Keywords: Editing prevention technique, gradient method, high-quality video, luminance change, visual communication.
4456 Improving Packet Latency of Video Sensor Networks
Authors: Arijit Ghosh, Tony Givargis
Abstract:
Video sensor networks operate under stringent latency requirements. Packets have a deadline within which they have to be delivered. Violation of the deadline causes a packet to be treated as lost, and the loss of packets ultimately affects the quality of the application. Network latency is typically a function of many interacting components. In this paper, we propose ways of reducing the forwarding latency of a packet at intermediate nodes. The forwarding latency is caused by a combination of processing delay and queueing delay. The former is incurred in order to determine the next hop in dynamic routing. We show that unless link failures occur in a very specific and unlikely pattern, the vast majority of these lookups are redundant. To counter this, we propose source routing as the routing strategy. However, source routing suffers from issues related to scalability and responsiveness to network dynamics. We propose solutions to counter these and show that source routing is a viable option in practically sized video networks. We also propose a fast and fair packet scheduling algorithm that reduces queueing delay at the nodes. We support our claims through extensive simulation on realistic topologies with practical traffic loads and failure patterns.
Keywords: Sensor networks, packet latency, network design, network performance.
4455 Automatic Extraction of Roads from High Resolution Aerial and Satellite Images with Heavy Noise
Authors: Yan Li, Ronald Briggs
Abstract:
Aerial and satellite images are information rich. They are also complex to analyze. For GIS systems, many features require fast and reliable extraction of roads and intersections. In this paper, we study efficient and reliable automatic extraction algorithms to address some difficult issues that are commonly seen in high resolution aerial and satellite images but not well addressed in existing solutions, such as blurring, broken or missing road boundaries, lack of road profiles, heavy shadows, and interfering surrounding objects. The new scheme is based on a new method, namely the reference circle, to properly identify the pixels that belong to the same road and use this information to recover the whole road network. This feature is invariant to the shape and direction of roads and tolerates heavy noise and disturbances. Road extraction based on reference circles is much more noise tolerant and flexible than previous edge-detection based algorithms. The scheme is able to extract roads reliably from images with complex contents and heavy obstructions, such as the high resolution aerial/satellite images available from Google Maps.
Keywords: Automatic road extraction, Image processing, Feature extraction, GIS update, Remote sensing, Geo-referencing
4454 Tibyan Automated Arabic Correction Using Machine-Learning in Detecting Syntactical Mistakes
Authors: Ashwag O. Maghraby, Nida N. Khan, Hosnia A. Ahmed, Ghufran N. Brohi, Hind F. Assouli, Jawaher S. Melibari
Abstract:
Arabic is one of the most important languages. Many people around the world learn it because of its religious and economic importance, and the real challenge lies in practicing it without grammatical or syntactical mistakes. This research focused on detecting and correcting syntactic mistakes in Arabic according to their position in the sentence, concentrating on two of the main syntactical rules in Arabic: dual and plural. It analyzes each sentence in the text, using the Stanford CoreNLP morphological analyzer and a machine-learning approach, in order to detect the syntactical mistakes and then correct them. A prototype of the proposed system was implemented and evaluated. It uses the support vector machine (SVM) algorithm to detect Arabic grammatical errors and corrects them using a rule-based approach. The prototype system achieves an accuracy of 81%. In general, it provides a set of useful grammatical suggestions for mistakes the user may make while writing, due to lack of familiarity with grammar or the speed of writing, such as alerting the user when a plural term is used to refer to one person.
Keywords: Arabic Language acquisition and learning, natural language processing, morphological analyzer, part-of-speech.
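As a rough sketch of the detection stage, the snippet below trains a support vector machine on a few labelled Arabic sentences using character n-gram features. The toy sentences, labels and feature choice are assumptions for illustration only and do not reproduce the Stanford CoreNLP morphological features or the training data used by Tibyan.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# Toy labelled data (1 = contains a dual/plural agreement error, 0 = correct).
# Real training data would come from the morphologically analysed corpus.
sentences = ["جاء الطالبان", "جاء الطالبان مسرعين", "ذهب المعلمون", "ذهب المعلمين الى المدرسة"]
labels = [0, 0, 0, 1]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams as features
    SVC(kernel="linear"),
)
model.fit(sentences, labels)
print(model.predict(["ذهب المعلمين"]))   # hypothetical check of a new sentence
```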
4453 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decomposition of the video stream into individual frames; (2) establishing the interior camera orientation parameters; (3) determining the relative orientation parameters for each video frame with respect to each other; (4) finding the absolute orientation parameters, using a self-calibration bundle adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a mosaic image of the test area, which is then merged with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to evaluate our method. Video and telemetry data were collected for about fifteen minutes and processed using the four-step ortho-rectification procedure. The results demonstrate that the geometric measurement of the control field from ortho-images is more accurate than that from the original perspective images when used to pinpoint the exact location of targets on the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 metres.
Keywords: Geo-referencing, ortho-rectification, video frame, self-calibration, UAV, target tracking.
4452 Mixed Model Assembly Line Sequencing in Make to Order System with Available to Promise Consideration
Authors: N. Manavizadeh, A. Dehghani, M. Rabbani
Abstract:
Mixed model assembly lines (MMAL) are a type of production line where a variety of product models similar in product characteristics are assembled. The effective design of these lines requires that a schedule for assembling the different products be determined. In this paper, we fit the sequencing problem to the main characteristics of a make-to-order (MTO) environment. The problem solved in this paper is a multiple-objective sequencing problem in mixed model assembly lines, addressed with the weighted sum method (WSM) implemented in GAMS software for small problems and with an effective GA for large-scale problems, owing to the NP-hardness of the problem and the vast time required to find the optimum solution for large instances. Three practically important objectives are minimized: total utility work, variation in the production rate, and earliness and tardiness cost, which considers the priority of each customer and different due dates, a realistic situation in mixed model assembly lines. It is the first time different attributes are considered to prioritize the customers, which helps the company reduce the cost of earliness and tardiness. This mechanism is a way to apply advanced available-to-promise (ATP) in mixed model assembly line sequencing, which is the main contribution of this paper.
Keywords: Available to promise, earliness and tardiness, GA, mixed-model assembly line sequencing.
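The weighted sum scalarization itself is simple; the sketch below scores a few candidate sequences with assumed weights and invented, pre-normalized objective values, only to illustrate how WSM combines the three objectives. The real objective functions and the GA used for large instances are far more elaborate.

```python
# Toy candidate sequences with invented, normalized objective values:
# (total utility work, production rate variation, earliness/tardiness cost).
candidates = {
    "A-A-B-C": (0.42, 0.31, 0.55),
    "A-B-A-C": (0.38, 0.22, 0.60),
    "B-A-C-A": (0.35, 0.45, 0.40),
}
weights = (0.4, 0.3, 0.3)   # assumed decision-maker weights, summing to 1

def weighted_sum(objectives, w):
    """Scalarize a vector of minimization objectives into one score."""
    return sum(wi * oi for wi, oi in zip(w, objectives))

best = min(candidates, key=lambda seq: weighted_sum(candidates[seq], weights))
print("Preferred sequence under WSM:", best)
```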
4451 Socio-Technical Systems: Transforming Theory into Practice
Authors: L. Ngowi, N. H. Mvungi
Abstract:
This paper critically examines the evolution of socio-technical systems theory, its practices, and the challenges in system design and development. It examines concepts put forward by researchers focusing on the application of the theory in software engineering. Various methods have been developed that use socio-technical concepts based on systems engineering, without remarkable success. The main constraint is the large amount of data and the inefficient techniques used in applying the concepts in systems engineering for developing time-bound systems within a limited/controlled budget. This paper critically examines each of the methods, highlights bottlenecks and suggests the way forward. Since socio-technical systems theory only explains what to do, but not how to do it, engineers are not using the concept to save time and costs and to reduce the risks associated with new frameworks. Hence, a new framework, which can be considered a practical approach, is proposed; it borrows concepts from the soft systems method, agile systems development and object-oriented analysis and design to bridge the gap between theory and practice. The approach will enable the development of systems using socio-technical systems theory and encourage system engineers/software developers to use socio-technical systems theory in building worthwhile information systems, avoiding fragilities and hostilities in the work environment.
Keywords: Socio-technical systems, human centered design, software engineering, cognitive engineering, soft systems, systems engineering.
4450 Differences in Goal Scoring and Passing Sequences between Winning and Losing Team in UEFA-EURO Championship 2012
Authors: Muhamad S., Norasrudin S, Rahmat A.
Abstract:
The objective of the current study is to investigate the differences between winning and losing teams in terms of goal scoring and passing sequences. A total of 31 matches from UEFA-EURO 2012 were analyzed; 5 matches were excluded from the analysis because they ended in a draw. Two groups of variables were used in the study: (i) goal scoring variables and (ii) passing sequence variables. Data were analyzed using the Wilcoxon matched-pairs rank test with the significance level set at p < 0.05. The current study found that the number of goals scored was significantly higher for the winning team in both the 1st half (Z=-3.416, p=.001) and the 2nd half (Z=-3.252, p=.001). The scoring frequency was also found to increase as time progressed, and the last 15 minutes of the game was the interval in which the most goals were scored. The indicators that were significantly different between winning and losing teams were goals scored (Z=-4.578, p=.000), the head (Z=-2.500, p=.012), the right foot (Z=-3.788, p=.000), corners (Z=-2.126, p=.033), open play (Z=-3.744, p=.000), inside the penalty box (Z=-4.174, p=.000), attackers (Z=-2.976, p=.003) and also the midfielders (Z=-3.400, p=.001). Regarding the passing sequences, there is a significant difference between the teams in short passing sequences (Z=-4.141, p=.000), while for long passing there is no significant difference (Z=-1.795, p=.073). The data gathered in the present study can be used by coaches to construct detailed training programs based on their objectives.
Keywords: Football, goals scored, passing, timing.
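The paired comparison reported above maps directly onto SciPy's Wilcoxon test; the sketch below uses invented per-match goal counts for the winning and losing sides purely to show the call, not the tournament data.

```python
from scipy.stats import wilcoxon

# Invented per-match goal counts for the paired winning/losing teams (26 matches analysed).
winning = [2, 1, 3, 2, 1, 4, 2, 3, 1, 2, 2, 3, 1, 2, 2, 1, 3, 2, 2, 1, 4, 2, 1, 3, 2, 2]
losing  = [0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 2, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1]

stat, p = wilcoxon(winning, losing)   # paired, non-parametric comparison
print(f"Wilcoxon statistic = {stat:.3f}, p = {p:.4f}")
```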
4449 Extraction of Semantic Digital Signatures from MRI Photos for Image-Identification Purposes
Authors: Marios Poulos, George Bokos
Abstract:
This paper attempts to solve the problem of searching for and retrieving similar MRI photos via Internet services, using morphological features sourced from the original image. The study aims to serve as an additional tool to existing search and retrieval methods. Until now, the main searching mechanism has been syntactic, based on keywords. The proposed technique aims to serve the new requirements of libraries. One of these is the development of computational tools for the control and preservation of the intellectual property of digital objects, and especially of digital images. For this purpose, this paper proposes the use of a serial number extracted by a previously tested semantic properties method. This method, centered on the multiple layers of a set of arithmetic points, assures two properties: the uniqueness of the final extracted number and the semantic dependence of this number on the image used as the method's input. The major advantage of this method is that it can control the authentication of a published image, or its partial modification, to a reliable degree. It also improves upon the known hash functions used by digital signature schemes, producing alphanumeric strings for authentication checking and for assessing the degree of similarity between an unknown image and an original image.
Keywords: Computational geometry, MRI photos, image processing, pattern recognition.
4448 Economical and Technical Analysis of Urban Transit System Selection Using TOPSIS Method According to Constructional and Operational Aspects
Authors: Ali Abdi Kordani, Meysam Rooyintan, Sid Mohammad Boroomandrad
Abstract:
Nowadays, one of the most important problems in megacities is public transportation and satisfying citizens with this system, in order to decrease traffic congestion and air pollution. Accordingly, to improve transit ridership and increase travel safety, new transportation systems such as Bus Rapid Transit (BRT), tram, and monorail have expanded, each with different merits and demerits. That is why comparing different systems for a systematic selection of public transportation systems in a big city like Tehran, which has numerous traffic and pollution problems, is essential. This paper investigates the advantages and feasibility of using monorail, tram and BRT systems, which are widely used in megacities all over the world. For Tehran, these three modes are compared to each other using SPSS statistical analysis software and the TOPSIS method, and the results are assessed. Experts experienced in the transportation field answered the prepared matrix questionnaire for each public transportation mode (tram, monorail, and BRT). According to the experts' judgments, monorail has the first priority, tram the second, and BRT the third, based on the considered indices: execution costs, wasted time, depreciation, pollution, operation costs, travel time, passenger satisfaction, benefit-to-cost ratio and traffic congestion.
Keywords: Bus Rapid Transit, costs, monorail, pollution, tram.
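A compact NumPy implementation of the TOPSIS ranking step is sketched below; the decision matrix, weights and criterion directions are invented placeholders, not the experts' actual judgments from the questionnaire.

```python
import numpy as np

# Rows: alternatives (monorail, tram, BRT); columns: four criteria with invented scores.
X = np.array([
    [7.0, 8.0, 6.5, 7.5],   # monorail
    [6.5, 7.0, 7.0, 7.0],   # tram
    [8.0, 5.5, 6.0, 6.0],   # BRT
])
weights = np.array([0.3, 0.3, 0.2, 0.2])        # assumed criterion weights
benefit = np.array([False, True, True, True])   # False = cost criterion (e.g. execution cost)

# 1) vector-normalize, 2) weight, 3) ideal / anti-ideal points, 4) closeness coefficient.
V = weights * X / np.linalg.norm(X, axis=0)
ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

for name, c in zip(["monorail", "tram", "BRT"], closeness):
    print(f"{name:9s} closeness = {c:.3f}")
```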
4447 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step-Stress Acceleration Life Testing Plan under Weibull Life Distribution
Authors: Saleem Z. Ramadan
Abstract:
This paper discusses the effects of using progressive Type-I right censoring on the design of the simple step-stress accelerated life test, using a Bayesian approach, for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results show that using direct or indirect priors affects the precision of the test.
Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.
4446 Effects of Thermal Radiation and Magnetic Field on Unsteady Stretching Permeable Sheet in Presence of Free Stream Velocity
Authors: Phool Singh, Ashok Jangid, N. S. Tomer, Deepa Sinha
Abstract:
The aim of this paper is to investigate the two-dimensional unsteady flow of a viscous incompressible fluid about a stagnation point on a permeable stretching sheet in the presence of a time-dependent free stream velocity. The fluid is considered under the influence of a transverse magnetic field in the presence of radiation effects. The Rosseland approximation is used to model the radiative heat transfer. Using a time-dependent stream function, the partial differential equations corresponding to the momentum and energy equations are converted into non-linear ordinary differential equations. Numerical solutions of these equations are obtained by using the Runge-Kutta Fehlberg method with the help of the Newton-Raphson shooting technique. In the present work, the effects of the unsteadiness parameter, magnetic field parameter, radiation parameter, stretching parameter and the Prandtl number on the flow and heat transfer characteristics are discussed. The skin-friction coefficient and Nusselt number at the sheet are computed and discussed. The results reported in the paper are in good agreement with published work in the literature by other researchers.
Keywords: Magneto hydrodynamics, stretching sheet, thermal radiation, unsteady flow.
4445 Solar Radiation Time Series Prediction
Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs
Abstract:
A model was constructed to predict the amount of solar radiation that will make contact with the surface of the earth in a given location an hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Due to their ability as universal function approximators, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, which utilized measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were utilized, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model’s accuracy.
Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.
4444 Computational Modeling in Strategic Marketing
Authors: Petr Cernohorsky, Jan Voracek
Abstract:
Well-developed strategic marketing planning is the essential prerequisite for the establishment of the right and unique competitive advantage. A typical market, however, is a heterogeneous and decentralized structure with natural involvement of individual or group subjectivity and irrationality. These features cannot be fully expressed with one-shot rigorous formal models based on, e.g., mathematics, statistics or empirical formulas. We present an innovative solution, extending the domain of agent-based computational economics towards the concept of hybrid modeling in service provider and consumer markets such as telecommunications. The behavior of the market is described by two classes of agents, consumer and service provider agents, whose internal dynamics are fundamentally different. Customers are rather free multi-state structures, adjusting behavior and preferences quickly in accordance with time and the changing environment. Producers, on the contrary, are traditionally structured companies with comparable internal processes and specific managerial policies. Their business momentum is higher and their immediate reaction possibilities are limited. This limitation underlines the importance of proper strategic planning as the main process advising managers in time whether to continue with more or less the same business or whether to consider the need for future structural changes that would ensure retention of existing customers or acquisition of new ones.
Keywords: Agent-based computational economics, hybrid modeling, strategic marketing, system dynamics.
4443 Research Regarding Resistance Characteristics of Biscuits Assortment Using Cone Penetrometer
Authors: G.–A. Constantin, G. Voicu, E.–M. Stefan, P. Tudor, G. Paraschiv, M.–G. Munteanu
Abstract:
During the handling and transport of food products, the products may be subjected to mechanical stresses that can lead to their deterioration by deformation, breaking, or crushing. This is the case for biscuits, regardless of their type (gluten-free or sugary), the added ingredients or the flour from which they are made. However, gluten-free biscuits have a higher mechanical resistance to breakage or crushing compared to easily shattered sugar biscuits (especially those for children). The paper presents the results of the experimental evaluation of the texture of four varieties of commercial biscuits, using a penetrometer equipped with a needle cone at five different additional weights on the cone rod. The assortments of biscuits tested in the laboratory were Petit Beurre, Picnic, and Maia (all three manufactured by RoStar, Romania) and Sultani diet biscuits, manufactured by Eti Burcak Sultani (Turkey, in packs of 138 g). For the four varieties of biscuits and the five additional weights (50, 77, 100, 150 and 177 g), the experimental data obtained were subjected to regression analysis in the MS Office Excel program, using Velon's relationship (h = a∙ln(t) + b). The regression curves were analysed comparatively in order to identify possible differences and to highlight the variation of the penetration depth h in relation to the time t. Based on the penetration depth between two time intervals (every 5 seconds), the curves of variation of the penetration speed in relation to time were then drawn. It was found that Velon's law fits the experimental data for all assortments of biscuits and for all five additional weights. The correlation coefficient R² had values over 0.850 in most of the analysed cases. The values recorded for the penetration depth were generally within 45-55 p.u. (penetrometric units) at an additional mass of 50 g, and between 155-168 p.u. at an additional mass of 177 g, for Petit Beurre biscuits. For Sultani diet biscuits, the values of the penetration depth were within the limits of 32-35 p.u. at an additional weight of 50 g, and between 80-114 p.u. at an additional weight of 177 g. The data presented in the paper can be used both by operators on the manufacturing technology flow and by traders of these food products, in order to establish the most efficient parameters of the working regimes (when packaging and handling).
Keywords: Biscuits resistance/texture, penetration depth, penetration velocity, sharp pin penetrometer.
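Fitting Velon's relationship h = a·ln(t) + b to a penetration-depth record reduces to a linear least-squares fit in ln(t); the sketch below shows this with invented readings taken every 5 seconds, not the measured laboratory data.

```python
import numpy as np

# Invented penetration-depth readings (penetrometric units) every 5 s, for illustration only.
t = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)       # s
h = np.array([40.0, 45.5, 48.2, 50.1, 51.6, 52.8, 53.7, 54.5])   # p.u.

a, b = np.polyfit(np.log(t), h, 1)          # least-squares fit of h = a*ln(t) + b
h_fit = a * np.log(t) + b
ss_res = np.sum((h - h_fit) ** 2)
ss_tot = np.sum((h - h.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

v = np.diff(h_fit) / np.diff(t)             # penetration velocity between readings
print(f"a = {a:.2f}, b = {b:.2f}, R^2 = {r2:.3f}")
print("Velocity between readings (p.u./s):", np.round(v, 3))
```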
4442 Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines
Authors: Mona Soliman Habib
Abstract:
This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is the biomedical publications domain, especially selected due to its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun phrase tagging thereby allowing for its applicability across languages. No domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which constitutes a motivation to investigate ways to improve the scalability of multiclass SVM in order to make the solution more practical and useable. Improving training time of multi-class SVM would make support vector machines a more viable and practical machine learning solution for real-world problems with large datasets. An initial prototype results in great improvement of the training time at the expense of memory requirements.Keywords: Named entity recognition, support vector machines, language independence, bioinformatics.
4441 Purity Monitor Studies in Medium Liquid Argon TPC
Authors: I. Badhrees
Abstract:
This paper describes some of the results found through a course of study in the field of particle physics. The study consists of two parts: one about the measurement of the cross section of the decay of the Z particle into two electrons, and the other about the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber.
The first part of the paper concerns the results of the analysis of a data sample containing 8120 ee candidates, used to reconstruct the mass of the Z particle for each event, where each event has an ee pair with pT(e) > 20 GeV and |η(e)| < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale. The distribution of the reconstructed Z mass in the data was compared to the Monte Carlo templates, and the total cross section was calculated to be 1432 pb.
The second part concerns the Liquid Argon Time Projection Chamber, LAr TPC, and the results of the interaction of a UV laser (Nd:YAG, λ = 266 nm) with LAr, through the study of the multi-photon ionization process, as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process of LAr, σe = (1.24 ± 0.10stat ± 0.30sys) × 10^-56 cm^4.
Keywords: ATLAS, CERN, KACST, LArTPC, Particle Physics.
4440 CFD Parametric Study of Mixers Performance
Authors: Mikhail Strongin
Abstract:
The mixing of two or more liquids is very common in many industrial applications, from automotive to food processing. CFD simulations of these processes require comparison with test results, which in many cases is practically impossible; therefore, comparison is made against scalable tests, so that parameterization of the problem is sufficient to capture the performance of the mixer.
However, the influence of geometrical and thermo-physical parameters on the mixing is not well understood.
In this work, the influence of geometrical and thermal parameters was studied. It was shown that for fully developed turbulent flows (Re > 10^4), Pe_t ≈ const and the concentration of the secondary fluid ~ F(r/l).
In other words, the mixing is practically independent of the total flow rate and scale for a given geometry and ratio of flow rates of the mixing flows. This statement was proved in the present work for different geometries and mixtures, such as EGR and water-urea mixtures.
The present study has shown that the best way to improve the mixing is to establish a geometry with the lowest possible Pe_t number by intensifying the turbulence in the domain. This is achievable by using a step geometry, impinging the EGR flow on a wall, using EGR jets with a strong change in flow direction, using swirler-like flow in the domain, or a combination of all these factors. All of these results are applicable to any mixture of incompressible fluids.
Keywords: CFD, mixing, fluids, parameterization, scalability.
4439 Effect of Support Distance on Damage of Drilled Thin CFRP Laminates
Authors: Jean François Chatelain, Imed Zaghbani, Gilbert Lebrun, Kaml Hasni
Abstract:
Severe damage may occur during the drilling of carbon fiber reinforced plastics (CFRP). In practice, this damage is limited by adding a backup support to the drilled parts. For some aeronautical parts with curvatures, backing up parts is a demanding process. In order to simplify the operation, this research studies the effect of using a configurable setup to support parts on the resulting quality of drilled holes. The test coupons referenced in this study are twenty-four-ply unidirectional laminates made of carbon fibers and epoxy resin. Different signals were measured during the drilling process for these laminates, including the thrust force, the displacement and the acceleration. The processing of these signals demonstrated that the damage is due to the combination of two main factors: the spring-back of the thin part and the thrust force. The results were confirmed for different feeds and speeds. When the distance between supports is increased, the spring-back increases but the thrust force decreases. The study proves the feasibility of unsupported drilling of thin CFRP laminates without creating any observable damage.
Keywords: CFRP, Damage, Drilling, Flexible setup.
4438 Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism
Authors: Jehad Al Dallal
Abstract:
Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is called the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably saves the slicing time and effort required to measure module parallelism and functional cohesion.
Keywords: Backward slicing, cohesion measure, forward slicing, parallelism measure, program dependence graph, program slicing, static slicing.
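The single-traversal idea rests on reverse reachability over the program dependence graph; the toy sketch below computes one backward slice by breadth-first search over reversed dependence edges, using an invented PDG, and does not reproduce the paper's all-slices-in-one-pass algorithm.

```python
from collections import deque

# Toy PDG: each statement maps to the statements it depends on (data/control dependences).
# Statements are labelled s1..s6; the graph is invented for illustration.
depends_on = {
    "s1": [],            # x = read()
    "s2": [],            # y = read()
    "s3": ["s1"],        # z = x + 1
    "s4": ["s2", "s3"],  # w = y * z
    "s5": ["s3"],        # print(z)
    "s6": ["s4"],        # print(w)
}

def backward_slice(criterion: str) -> set[str]:
    """Return all statements that may influence the slicing criterion."""
    seen, queue = {criterion}, deque([criterion])
    while queue:
        node = queue.popleft()
        for dep in depends_on[node]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(backward_slice("s6")))   # -> ['s1', 's2', 's3', 's4', 's6']
```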