Search results for: Software Product Line
2551 The Sizes of Large Hierarchical Long-Range Percolation Clusters
Authors: Yilun Shang
Abstract:
We study a long-range percolation model on the hierarchical lattice Ω_N of order N, in which the probability of connection between two nodes separated by hierarchical distance k is of the form min{αβ^(−k), 1}, with α ≥ 0 and β > 0. The parameter α is the percolation parameter, while β describes the long-range nature of the model. Ω_N is an example of a so-called ultrametric space, which exhibits remarkable qualitative differences from Euclidean-type lattices. In this paper, we characterize the sizes of large clusters for this model along the lines of some prior work. The proof involves a stationary embedding of Ω_N into Z. The phase diagram of this long-range percolation is well understood.
Keywords: percolation, component, hierarchical lattice, phase transition.
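As a rough illustration of the connection rule, the sketch below samples edges with probability min{αβ^(−k), 1} between the leaves of a small hierarchical lattice and reports the largest cluster sizes via union-find. The parameters (N = 3, K = 4, α, β) and the digit-string distance convention are illustrative assumptions, not the paper's setup.

```python
import random
from collections import Counter
from itertools import combinations, product

def distance(x, y):
    # One common convention: the highest (1-based) coordinate at which
    # the base-N digit strings of two leaves differ; 0 if they coincide.
    diffs = [i + 1 for i, (a, b) in enumerate(zip(x, y)) if a != b]
    return max(diffs) if diffs else 0

class DSU:  # union-find for cluster bookkeeping
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, a):
        while self.p[a] != a:
            self.p[a] = self.p[self.p[a]]
            a = self.p[a]
        return a
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

N, K = 3, 4                      # order N, depth K: N**K leaves
alpha, beta = 1.0, 2.0           # percolation and long-range parameters
nodes = list(product(range(N), repeat=K))
dsu = DSU(len(nodes))
for i, j in combinations(range(len(nodes)), 2):
    k = distance(nodes[i], nodes[j])
    if random.random() < min(alpha * beta ** (-k), 1.0):
        dsu.union(i, j)

sizes = Counter(dsu.find(i) for i in range(len(nodes)))
print(sorted(sizes.values(), reverse=True)[:5])   # largest cluster sizes
```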
2550 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish a valid in-flight one. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and the spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. Then the high point (LH) and the low point (LL) of the dynamic range can be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2^n − 1 (n is the quantization number of the payload). Meanwhile, the sensor's response linearity (δ) is described as the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr−1·m−2·µm−1 and −3.5 W·sr−1·m−2·µm−1; the low point of the dynamic range is −3.5 W·sr−1·m−2·µm−1 and the high point is 30.5 W·sr−1·m−2·µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR, and the normalized SNR is about 59.6 when the mean value of radiance is equal to 11.0 W·sr−1·m−2·µm−1; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr−1·m−2·µm−1. Moreover, in order to validate the results, the measured radiances over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, are compared with radiative-transfer-code predictions. The relative error for the calibration is within 6.6%.
Keywords: Calibration, dynamic range, radiometric resolution, SNR.
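The calibration fit and the derived dynamic-range and linearity figures follow directly from the L = G × DN + B relation; the short sketch below reproduces the computation on synthetic (DN, radiance) pairs. The sample values and the 12-bit quantization depth are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Synthetic (DN, radiance) pairs for three gray-scale targets; real values
# would come from MODTRAN-simulated at-sensor radiances and the image
# digital numbers (these numbers are illustrative only).
dn = np.array([850.0, 1900.0, 3100.0])
radiance = np.array([3.6, 12.3, 22.2])       # W·sr^-1·m^-2·um^-1

# Least-squares fit of L = G*DN + B
G, B = np.polyfit(dn, radiance, 1)

n_bits = 12                                   # assumed quantization depth
dn_high = 2 ** n_bits - 1
L_low, L_high = B, G * dn_high + B            # dynamic range endpoints

# Response linearity as the correlation coefficient of the regression
linearity = np.corrcoef(dn, radiance)[0, 1]

print(f"G={G:.4f}, B={B:.2f}, range=[{L_low:.2f}, {L_high:.2f}], "
      f"linearity={linearity:.4f}")
```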
2549 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools
Abstract:
Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, which deals with various crimes committed in the digital environment, has become an important research topic. It is in the research scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. There are many software and hardware tools developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the digital evidence that match specified criteria and presenting them to the investigator (e.g., text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. In addition, because the outcome depends on the examiner's experience, relevant items may be overlooked and the overall result may differ between cases. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the time of the digital evidence examination process and preventing human errors.
Keywords: Block matching, digital evidence, hash list.
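A minimal sketch of the block-level idea: hash fixed-size blocks of an evidence image and match them against a reference hash list. The 4 KB block size, the SHA-256 choice, and the file names are assumptions for illustration; the paper does not prescribe them.

```python
import hashlib

BLOCK_SIZE = 4096  # bytes; block-device-aligned sizes are typical

def block_hashes(path, block_size=BLOCK_SIZE):
    """Yield (offset, SHA-256 hex digest) for each block of an image file."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield offset, hashlib.sha256(block).hexdigest()
            offset += len(block)

def match_against_list(path, known_hashes):
    """Return offsets of blocks whose hash appears in a known-bad hash list."""
    return [off for off, h in block_hashes(path) if h in known_hashes]

# Usage (hypothetical file names): known_bad is a set of hex digests from a
# reference hash list, e.g. hits = match_against_list("evidence.dd", known_bad)
```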
2548 Estimation of Crustal Thickness within the Sokoto Basin North-Western Nigeria Using Bouguer Gravity Anomaly Data
Authors: T. T. Olugbenga, A. I. Augie
Abstract:
This research proposes an interpretation of the Bouguer gravity anomaly data of some parts of the Sokoto basin for the estimation of crustal thickness. The study area is bounded between latitudes 11°00′0″N and 13°00′0″N, and longitudes 4°00′0″E and 6°00′0″E, covering Koko, Jega, B/Kebbi, Argungu, Lema, Bodinga, Tamgaza, Gunmi, Daki Takwas, Dange, Sokoto, Ilella, T/Mafara, Anka, Maru, Gusau, K/Namoda, and Sabon Birni within Sokoto, Kebbi, and Zamfara states, respectively. The established map of the study area was digitized in X, Y, and Z format using the Excel software package, and the digitized data were processed using Surfer version 13 software. The Moho and Conrad depths, based on a relationship between the Bouguer gravity anomaly and crustal thickness, were estimated as 35 to 37 km and 19 to 21 km, respectively. The crustal region has been categorized into a crustal thinning zone, the region with high gravity anomaly values due to its greater geothermal energy, and a crustal thickening zone, the region with low anomaly values due to its lower geothermal energy. Birnin Kebbi, Jega, and Sokoto were identified as regions of hydrocarbon potential, with an estimated 35 km thickness within the crustal region, referred to as crustal thickening, as a result of its low but sufficient geothermal energy to decompose organic matter within the region to form hydrocarbons.
Keywords: Bouguer gravity anomaly, crustal thickness, geothermal energy, hydrocarbons, Moho and Conrad Depths.
2547 Cultural Anxiety and Its Impact on Students' Life: A Case Study of International Students in Wuhan University
Authors: Nadeem Akhtar, Shan Bo
Abstract:
This article illustrates how a dissimilar culture becomes a cause of constant anxiety among international students in China. To that end, a survey was carried out among international students of Wuhan University, China. The association among cultural dissimilarity, unfamiliarity with Chinese culture, self-financed status, and food problems is examined through a regression line, and in light of the empirical results, a model is proposed which elucidates these results. Some suggestions are offered at the end that will help mitigate the anxiety of prospective students in Chinese universities.
Keywords: Anxiety, international students, dissimilar culture, Wuhan University.
2546 Evaluation of Seismic Behavior of Steel Shear Wall with Opening with Hardener and Beam with Reduced Cross Section under Cyclic Loading with Finite Element Analysis Method
Authors: Masoud Mahdavi
Abstract:
During an earthquake, the structure is subjected to seismic loads that cause tension in the members of the building. The use of energy dissipation elements in the structure reduces the percentage of seismic forces on the main members of the building (especially the columns). The steel plate shear wall, as one of the most widely used types of energy dissipation element, has evolved considerably, and regular drilling of its inner plate is now a common practice. In the present study, using the finite element method, a single-story steel plate shear wall (with dimensions of 447 × 246.6 cm) is modeled with Abaqus software in three different configurations, to which a cyclic load is applied. The steel shear wall has a horizontal element (beam) with a reduced beam section (RBS). The openings in the interior plate of the models are created with successively increasing area, which makes the effect of increasing the opening area on the seismic performance of the steel shear wall completely clear. It was found that as the opening level in the steel shear wall (with reduced cross-section beam) increases, the total displacement and plastic strain indicators increase, the structural capacity and total energy indicators decrease, and the von Mises stress index does not change much.
Keywords: Steel plate shear wall with opening, cyclic loading, reduced cross-section beam, finite element method, Abaqus Software.
2545 Valorization of Waste Dates in South Algeria: Biofuel Production
Authors: Insaf Mehani, Bachir Bouchekima
Abstract:
In Algeria, the date-conditioning units generate significant quantities of waste arising from sorting rejects. This biomass, until now considered a waste with a high impact on the environment, can be transformed into high value-added products. It is possible to valorize common dates of low commercial value and put on the local and international markets a new generation of high value-added products such as bioethanol. Besides its use in chemical synthesis, bioethanol can be blended with gasoline to produce a clean fuel while improving the octane rating.
Keywords: Bioenergy, dates, bioethanol, valorisation.
2544 eTax Filing and Service Quality: The Case of the Revenue Online Service
Authors: Regina Connolly, Frank Bannister
Abstract:
This paper describes an ongoing study into the quality of service provided by the Irish Revenue Commissioners' online tax filing and collection system. The Irish Revenue On-Line Service (ROS) site has won several awards. In this study, a version of the widely used SERVQUAL measuring instrument, adapted for use with online services, has been modified for the specific case of ROS. In this paper, the theory behind this instrument is set out, the particular problems of evaluating online revenue collection are examined, and the rationale for this approach is explained.
Keywords: E-service quality, revenue online system, online tax filing system.
2543 Evaluating Emission Reduction Due to a Proposed Light Rail Service: A Micro-Level Analysis
Authors: Saeid Eshghi, Neeraj Saxena, Abdulmajeed Alsultan
Abstract:
Carbon dioxide (CO2), alongside other gas emissions in the atmosphere, causes a greenhouse effect, resulting in an increase in the average temperature of the planet. Transportation vehicles are among the main contributors to CO2 emissions. Stationary vehicles with running engines produce more emissions than mobile ones. Intersections with traffic lights that force vehicles to remain stationary for a period of time produce more CO2 pollution than other parts of the road. This paper focuses on analyzing the CO2 produced by the traffic flow at the Anzac Parade Road - Barker Street intersection in Sydney, Australia, before and after the implementation of light rail transit (LRT). The data were gathered during the construction phase of the LRT by counting the number of vehicles on each path of the intersection for 15 minutes during the evening rush hour of 1 week (6-7 pm, July 04-31, 2018) and then multiplying by 4 to estimate the hourly flow of vehicles. For analyzing the data, the microscopic simulation software "VISSIM" has been used. Through the analysis, the traffic flow was processed in three stages: before the implementation of the light rail, during the construction phase, and after the implementation. Finally, the traffic results were input into another software package, "EnViVer", to calculate the amount of CO2 produced during 1 h. The results showed that after the implementation of the light rail, CO2 will drop by a minimum of 13%. This finding provides evidence that light rail is a sustainable mode of transport.
Keywords: Carbon dioxide, emission modeling, light rail, microscopic model, traffic flow.
2542 Towards Innovation Performance among University Staff
Authors: C. S. Quah, S. P. L. Sim
Abstract:
This study examined how individuals in their respective teams contributed to innovation performance, besides defining the term innovation from their own respective views. It also identified factors that motivated university staff to contribute to innovation products. In addition, it examined whether there is a significant relationship between professional training level and length of service among university staff towards innovation, and to what extent the two variables contributed towards innovative products. The significance of this study is that it revealed the strengths and weaknesses of the university staff when contributing to innovation performance. Stratified random sampling was employed to determine the samples representing the population of lecturers in the study, involving 123 lecturers in one of the local universities in Malaysia. The data were analyzed by categorizing the open-ended responses into themes and by applying descriptive and inferential statistics to the quantitative data. This study revealed that two types of definition of the term "innovation" exist among the university staff, namely, the creation of a new product or a new approach to doing things, as well as a value-added, creative way to upgrade or improve an existing process and service to be more efficient. The study found that the most prominent factor that propels staff towards innovation is improving the product in order to benefit users, followed by self-satisfaction and recognition. This implies that the staff in the organization viewed the creation of innovative products as a process of growth to fulfill the needs of others and also to realize their personal potential. The study also found a significant relationship between professional training level and length of service only for the 4-6 year group; for the other length-of-service groups there was no significant relationship with professional training level towards innovation. Moreover, directional measures showed that the relationship between a length of service of 4-6 years and professional training level among the university staff is quite weak. This implies that good organizational management lies on the shoulders of the key leaders, who illuminate the path to be followed by the staff.
Keywords: Innovation, length of service, performance, professional training level, motivation.
2541 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test
Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman
Abstract:
At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study of six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most of the cases; however, a difference of up to 38% has been noted in the flood quantiles for an annual exceedance probability (AEP) of 1 in 100 for one catchment. This finding needs to be confirmed with a greater number of stations across other Australian states.
Keywords: Floods, FLIKE, probability distributions, flood frequency, outlier.
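For readers unfamiliar with the screening step, the sketch below implements the original Grubbs-Beck low-outlier test in log space, using the well-known Bulletin 17B approximation of the 10%-significance one-sided critical value K_N; the flow values are illustrative, and this is not FLIKE's implementation.

```python
import math

def grubbs_beck_low_outliers(flows):
    """Original Grubbs-Beck screen for potentially influential low flows.

    Works on log10 of the annual maxima; uses the Bulletin 17B
    approximation of the 10%-significance one-sided critical value K_N.
    """
    logs = [math.log10(q) for q in flows]
    n = len(logs)
    mean = sum(logs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    k_n = -0.9043 + 3.345 * math.sqrt(math.log10(n)) - 0.4046 * math.log10(n)
    threshold = 10 ** (mean - k_n * std)   # flows below this are flagged
    return [q for q in flows if q < threshold], threshold

# Example with illustrative annual-maximum flows (m^3/s)
flows = [5.2, 34, 58, 61, 75, 88, 102, 130, 155, 210, 340, 410]
low, thr = grubbs_beck_low_outliers(flows)
print(f"threshold={thr:.1f}, low outliers={low}")
```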
2540 Voltage Stability Assessment and Enhancement Using STATCOM - A Case Study
Authors: Puneet Chawla, Balwinder Singh
Abstract:
Recently, increased attention has been devoted to the voltage instability phenomenon in power systems. Many techniques have been proposed in the literature for evaluating and predicting voltage stability using steady-state analysis methods. In this paper, P-V and Q-V curves have been generated for the 57-bus Patiala-Rajpura circle of India. The power-flow program is developed in MATLAB using the Newton-Raphson method. Using the Q-V curves, the weakest bus of the power system and the maximum permissible reactive power change on that bus are calculated. STATCOMs are placed on the weakest bus to improve the voltage, and hence the voltage stability, as well as the power transmission capability of the line.
Keywords: Voltage stability, Reactive power, power flow, weakest bus, STATCOM.
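As a toy illustration of how a Q-V curve is traced, the sketch below computes, for a single load bus fed through a reactance, the reactive power a fictitious compensator would have to inject to hold each candidate voltage; the nose of the curve marks the stability limit. All parameter values are assumed, and this is a two-bus caricature rather than the 57-bus MATLAB study.

```python
import numpy as np

# Two-bus illustration: source E behind reactance X feeding a PQ load.
E, X = 1.0, 0.3              # p.u. source voltage and line reactance (assumed)
P_load, Q_load = 0.8, 0.2    # p.u. load at the test bus (assumed)

V = np.linspace(0.5, 1.1, 61)             # candidate bus voltages (p.u.)
S = (E * V / X) ** 2 - P_load ** 2        # feasibility of delivering P
mask = S >= 0                             # below this, P cannot be delivered
# Reactive power a fictitious compensator must inject to hold each voltage
Q_inj = Q_load + V[mask] ** 2 / X - np.sqrt(S[mask])

v_nose = V[mask][np.argmin(Q_inj)]
print(f"Q-V nose near V={v_nose:.3f} p.u., margin={-Q_inj.min():.3f} p.u.")
```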
2539 Modeling the Hybrid Battery/Super-Storage System for a Solar Standalone Microgrid
Authors: Astiaj Khoramshahi, Hossein Ahmadi Danesh Ashtiani, Ahmad Khoshgard, Hamidreza Damghani, Leila Damghani
Abstract:
Solar energy systems using various storages need to be evaluated based on energy requirements and applications. Modeling and analysis of storage systems are also necessary to increase the effectiveness of combinations of these systems. In this paper, a MATLAB-based analysis has been carried out to evaluate the response of a hybrid energy system considering various technologies of renewable energy and energy storage. In the present study, three different simulation scenarios are presented. Simulation results for the first scenario show that the battery is effective in smoothing the overall power demand of the studied consumer over a day, but high-frequency transient loads on the grid cannot be effectively canceled due to the limited response speed of the battery control. Simulation outputs for the second scenario, using the super-storage system, show that sudden changes in demand power are smoothed by the super-storage. The majority of these sudden changes in power demand are caused by switching consumers and by variable solar power (due to clouds passing over the solar array). Simulation outputs for the third scenario show the effects of the hybrid system for the same consumer and solar array output, leading to the smallest amount of power demand fed into the grid, as well as at peak times. Compared with the battery-only scenario, the peak load has been significantly reduced.
Keywords: Storage system, super storage, standalone, microgrid.
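One common way to realize such a battery/super-storage split is a first-order low-pass filter: the slow component of net demand goes to the battery and the fast residual to the supercapacitor. The sketch below is a generic illustration of that idea with an assumed demand profile and time constant; the paper's actual control scheme may differ.

```python
import numpy as np

dt, tau = 1.0, 60.0                    # sample time and filter constant (s)
t = np.arange(0, 600, dt)
demand = 2.0 + 0.5 * np.sin(2 * np.pi * t / 300)        # slow variation (kW)
demand += 0.8 * (np.random.rand(t.size) > 0.95)          # switching spikes

battery = np.zeros_like(demand)
alpha = dt / (tau + dt)                # discrete low-pass coefficient
for k in range(1, t.size):
    # battery follows the filtered (slow) demand component
    battery[k] = battery[k - 1] + alpha * (demand[k] - battery[k - 1])
supercap = demand - battery            # high-frequency residual

print(f"battery handles {battery.std():.2f} kW std, "
      f"supercap handles {supercap.std():.2f} kW std")
```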
2538 Cryptanalysis of Chang-Chang's EC-PAKA Protocol for Wireless Mobile Networks
Authors: Hae-Soon Ahn, Eun-Jun Yoon
Abstract:
With the rapid development of wireless mobile communication, applications for mobile devices must focus on network security. In 2008, Chang and Chang proposed security improvements on Lu et al.'s elliptic curve authentication key agreement protocol for wireless mobile networks. However, this paper shows that, contrary to their claims, Chang-Chang's improved protocol is still vulnerable to off-line password guessing attacks.
Keywords: Authentication, key agreement, wireless mobile networks, elliptic curve, password guessing attacks.
2537 Theoretical Modal Analysis of Freely and Simply Supported RC Slabs
Authors: M. S. Ahmed, F. A. Mohammad
Abstract:
This paper focuses on the dynamic behavior of reinforced concrete (RC) slabs. The theoretical modal analysis was performed using two different types of boundary conditions. Modal analysis is among the most important dynamic analyses; the analysis is a modal case when there is no external force on the structure. Using this method, the effects of freely and simply supported boundary conditions on the frequencies and mode shapes of RC square slabs are studied. ANSYS software was employed to derive the finite element model used to determine the natural frequencies and mode shapes of the slabs. The results obtained through numerical analysis (finite element analysis) were then compared with the exact solution. The main goal of the research study is to predict how the boundary conditions change the behavior of the slab structures prior to performing experimental modal analysis. Based on the results, it is concluded that the simply supported boundary condition noticeably increases the natural frequencies and changes the mode shapes compared with the freely supported condition. This means that such support conditions have a direct influence on the dynamic behavior of the slabs. It is therefore suggested to use free-free boundary conditions in experimental modal analysis to precisely reflect the properties of the structure, since free-free boundary conditions avoid the influence of poorly defined supports.
Keywords: Natural frequencies, Mode shapes, Modal analysis, ANSYS software, RC slabs.
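For the simply supported case, classical thin-plate (Kirchhoff) theory gives a closed-form benchmark against which FE frequencies can be checked: ω_mn = π²[(m/a)² + (n/b)²]·√(D/(ρh)) with D = Eh³/12(1 − ν²). The sketch below evaluates it for assumed concrete-like properties and slab dimensions, not the paper's actual models.

```python
import math

# Natural frequencies of a simply supported rectangular thin plate
# (classical Kirchhoff theory), as a cross-check for FE results.
E, nu, rho = 30e9, 0.2, 2400.0     # concrete-like properties (assumed)
h, a, b = 0.15, 4.0, 4.0           # thickness and plan dimensions (m, assumed)

D = E * h ** 3 / (12 * (1 - nu ** 2))   # flexural rigidity
for m in range(1, 3):
    for n in range(1, 3):
        w = math.pi ** 2 * ((m / a) ** 2 + (n / b) ** 2) \
            * math.sqrt(D / (rho * h))
        print(f"mode ({m},{n}): f = {w / (2 * math.pi):.1f} Hz")
```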
2536 On the Need to Have an Additional Methodology for the Psychological Product Measurement and Evaluation
Authors: Corneliu Sofronie, Roxana Zubcov
Abstract:
Cognitive science appeared about 40 years ago, in response to the challenge of artificial intelligence, as common territory for several scientific disciplines such as IT, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The newborn science was justified by the complexity of the problems related to human knowledge on the one hand, and on the other by the fact that none of the above-mentioned sciences could explain the mental phenomena alone. Based on the data supplied by the experimental sciences such as psychology and neurology, models of the operation of the human mind are built in cognitive science. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence) - cognitive systems - whose competences and performances are compared to human ones, leading to the reinterpretation of the psychology and neurology data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction utterly necessary for the mediation between the mentioned sciences. The general problematic of the cognitive approach offers two important types of approach: the computational one, starting from the idea that mental phenomena can be reduced to calculus operations of the 1-and-0 type, and the connectionist one, which considers the products of thinking to be the result of the interaction among all the component (included) systems. In the field of psychology, measurements in the computational register use classical questionnaires and psychometric tests, generally based on calculus methods. Viewing things from both sides of cognitive science, we notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole. In such an approach, measurement by calculus proves to be inefficient. Our research, carried out for longer than 20 years, leads to the conclusion that measurement by forms properly fits the laws and principles of connectionism.
Keywords: Complementary methodology, connection approach, networks without scaling, quantum psychology.
2535 Electrical Field Around the Overhead Transmission Lines
Authors: S.S. Razavipour, M. Jahangiri, H. Sadeghipoor
Abstract:
In this paper, the computation of the electric field distribution around AC high-voltage lines is demonstrated. The advantages and disadvantages of two different methods for evaluating the electric field are described. The first is a semi-numerical method that uses the electrostatic image technique to simulate the two-dimensional electric field under a high-voltage overhead line. The second method, which is also discussed, is the finite element method (FEM), which uses specific boundary conditions to compute the two-dimensional electric field distributions in an efficient way.
Keywords: Electrical field, unloaded transmission lines, finite element method, electrostatic images technique.
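A minimal sketch of the image-charge approach for a single conductor above a perfectly conducting ground plane: the conductor's line charge is mirrored with opposite sign below ground, and the two contributions are summed at the observation points. Conductor voltage, height, and radius are assumed illustrative values; a real multi-phase line would superpose one image pair per conductor.

```python
import numpy as np

EPS0 = 8.854e-12

def field_of_line_charge(lmbda, xq, yq, x, y):
    """2-D field of an infinite line charge lmbda (C/m) at (xq, yq)."""
    dx, dy = x - xq, y - yq
    k = lmbda / (2 * np.pi * EPS0 * (dx ** 2 + dy ** 2))
    return k * dx, k * dy

# One conductor at height h above a perfectly conducting ground,
# modeled with its image; parameters are illustrative, not a real line.
V, h, r_cond = 132e3 / np.sqrt(3), 12.0, 0.015   # phase voltage, height, radius
lmbda = 2 * np.pi * EPS0 * V / np.log(2 * h / r_cond)  # charge per meter

x = np.linspace(-20, 20, 201)          # profile 1 m above ground level
y = np.full_like(x, 1.0)
Ex1, Ey1 = field_of_line_charge(+lmbda, 0.0, +h, x, y)
Ex2, Ey2 = field_of_line_charge(-lmbda, 0.0, -h, x, y)   # image charge
E = np.hypot(Ex1 + Ex2, Ey1 + Ey2)
print(f"max |E| at 1 m above ground: {E.max() / 1000:.2f} kV/m")
```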
2534 Production Process for Diesel Fuel Components Polyoxymethylene Dimethyl Ethers from Methanol and Formaldehyde Solution
Authors: Xiangjun Li, Huaiyuan Tian, Wujie Zhang, Dianhua Liu
Abstract:
Polyoxymethylene dimethyl ethers (PODEn), as a clean diesel additive, can improve the combustion efficiency and quality of diesel fuel and alleviate the problem of atmospheric pollution. Among the possible synthetic routes, PODE production from methanol and formaldehyde is regarded as the most economical and promising. However, the methanol used for synthesizing PODE produces water, which causes the loss of active centers of the catalyst and hydrolysis of PODEn in the production process. A macroporous, strongly acidic cation exchange resin catalyst was prepared, which has comparative advantages over other common solid acid catalysts in terms of stability and catalytic efficiency for synthesizing PODE. Catalytic reactions were carried out at 353 K, 1 MPa, and 3 mL·gcat−1·h−1 in a fixed bed reactor. Methanol conversion and PODE3-6 selectivity reached 49.91% and 23.43%, respectively. Catalyst lifetime evaluation showed that the resin catalyst retained its catalytic activity for 20 days without significant changes, and the catalytic activity of a completely deactivated resin catalyst can essentially be restored to its previous level by simple acid regeneration. The acid exchange capacities of the original and deactivated catalysts were 2.5191 and 0.0979 mmol·g−1, respectively, while the regenerated catalyst reached 2.0430 mmol·g−1, indicating that the main reason for resin catalyst deactivation is that the Brønsted acid sites of the original resin catalyst were temporarily replaced by non-hydrogen-ion cations. A separation process consisting of extraction and distillation of the PODE3-6 product was designed for the separation of water and unreacted formaldehyde from the reaction mixture and for the purification of PODE3-6, respectively. The concentration of PODE3-6 in the final product can reach up to 97%. These results indicate that the scale-up production of PODE3-6 from methanol and formaldehyde solution is feasible.
Keywords: Inactivation, polyoxymethylene dimethyl ethers, separation process, sulfonic cation exchange resin.
2533 Optimal Assessment of Faulted Area around an Industrial Customer for Critical Sag Magnitudes
Authors: Marios N. Moschakis
Abstract:
This paper deals with the assessment of the faulted area around an industrial customer connected to a particular electric grid, i.e., the area within which a fault will cause a sag of at least a certain magnitude at this customer. The length of the faulted (critical or exposed) area is calculated by adding all line lengths in the neighborhood of the critical node (customer). The applied method is the so-called method of critical distances. Using advanced short-circuit analysis, the critical area can be accurately calculated for radial and meshed power networks and for all symmetrical and asymmetrical faults. For the demonstration of the effectiveness of the proposed methodology, a case study is used.
Keywords: Critical area, fault-induced voltage sags, industrial customers, power quality.
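The core of the method of critical distances for a radial feeder can be stated in two lines: a fault at distance L from the point of common coupling gives a sag magnitude V = zL/(Zs + zL), so the exposed length for sags below a threshold V is L_crit = (Zs/z)·V/(1 − V). The sketch below evaluates this with assumed impedance values; meshed networks and asymmetrical faults need the paper's full short-circuit analysis.

```python
# Method of critical distances for a radial feeder: a fault at distance L
# gives sag magnitude V = z*L / (Zs + z*L) at the PCC, so the exposed
# length for sags below V is L_crit = (Zs / z) * V / (1 - V).
Zs = 0.5        # source impedance magnitude at the PCC (ohm, assumed)
z = 0.3         # feeder impedance per unit length (ohm/km, assumed)

for V in (0.5, 0.7, 0.9):   # critical sag magnitudes in p.u.
    L_crit = (Zs / z) * V / (1.0 - V)
    print(f"sags below {V:.1f} p.u. are caused by faults within "
          f"{L_crit:.2f} km")
```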
2532 On Frenet-Serret Invariants of Non-Null Curves in Lorentzian Space L5
Authors: Melih Turgut, José Luis López-Bonilla, Süha Yılmaz
Abstract:
The aim of this paper is to determine the Frenet-Serret invariants of non-null curves in Lorentzian 5-space. First, we define a vector product of four vectors; by this means, we present a method to calculate the Frenet-Serret invariants of non-null curves. Additionally, an algebraic example of the presented method is illustrated.
Keywords: Lorentzian 5-space, Frenet-Serret invariants, non-null curves.
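The Euclidean analogue of such a four-vector product is the cofactor expansion of a formal 5 × 5 determinant whose first row is the standard basis; the sketch below implements it and checks that the result is orthogonal to all four inputs. In L5 the timelike component would additionally pick up a sign from the Lorentzian metric, which this Euclidean sketch deliberately omits.

```python
import numpy as np

def vector_product_4(a, b, c, d):
    """Euclidean analogue of the product of four vectors in R^5: the
    cofactor expansion of det([e; a; b; c; d]) along the basis row.
    In Lorentzian L5 the timelike component also carries a metric sign."""
    M = np.vstack([a, b, c, d])            # 4 x 5 matrix of the inputs
    w = np.empty(5)
    for i in range(5):
        minor = np.delete(M, i, axis=1)    # drop column i
        w[i] = (-1) ** i * np.linalg.det(minor)
    return w

# The result is orthogonal to all four inputs (Euclidean check)
rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 5))
w = vector_product_4(a, b, c, d)
print([float(abs(np.dot(w, v))) < 1e-9 for v in (a, b, c, d)])
```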
2531 The DAQ Debugger for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
In general, state-of-the-art data acquisition systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency, and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to be able to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information necessary for a deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
Keywords: DAQ debugger, data acquisition system, FPGA, system signals, Qt framework.
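The signal-handling idea is easy to mirror in a few lines of Python, even though the actual DAQ Debugger is a C++/Qt component: the standard faulthandler module dumps every thread's stack to a report file when a fatal signal arrives, and a user signal can trigger an on-demand dump without stopping the process. The file name and signal choice below are assumptions.

```python
import faulthandler
import os
import signal

# Write all thread stacks to a report file when a fatal signal
# (SIGSEGV, SIGFPE, SIGABRT, ...) is delivered to the process.
report = open("daq_debugger_report.txt", "w")
faulthandler.enable(file=report, all_threads=True)

# On POSIX systems, SIGUSR1 triggers an on-demand dump while the
# process keeps running, which is useful during data taking.
if hasattr(signal, "SIGUSR1"):
    faulthandler.register(signal.SIGUSR1, file=report, all_threads=True)

print(f"kill -USR1 {os.getpid()} appends all thread stacks to the report")
```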
2530 A Programming Assessment Software Artefact Enhanced with the Help of Learners
Authors: Romeo A. Botes, Imelda Smit
Abstract:
The demands of an ever-changing and complex higher education environment, along with the profile of modern learners, challenge current approaches to assessment and feedback. More learners enter the education system every year. The younger generation expects immediate feedback. At the same time, feedback should be meaningful. The assessment of practical activities in programming poses a particular problem, since both lecturers and learners in the information and computer science discipline acknowledge that paper-based assessment for programming subjects lacks meaningful real-life testing. At the same time, feedback lacks promptness, consistency, comprehensiveness, and individualisation. Most of these aspects may be addressed by modern, technology-assisted assessment. The focus of this paper is the continuous development of an artefact that is used to assist the lecturer in the assessment of, and feedback on, practical programming activities in a senior database programming class. The artefact was developed over three Design Science Research cycles. The first implementation allowed one programming activity submission per assessment intervention. This pilot provided valuable insight into the obstacles to implementing this type of assessment tool. A second implementation improved the initial version to allow multiple programming activity submissions per assessment. The focus of this version is on providing scaffolded feedback to the learner, allowing improvement with each subsequent submission. It also has a built-in capability to provide the lecturer with information regarding the key problem areas of each assessment intervention.
Keywords: Programming, computer-aided assessment, technology-assisted assessment, programming assessment software, design science research, mixed-method.
2529 Characterizations of Star-Shaped, L-Convex, and Convex Polygons
Authors: Thomas Shermer, Godfried T. Toussaint
Abstract:
A chord of a simple polygon P is a line segment [xy] that intersects the boundary of P only at its endpoints x and y. A chord of P is called an interior chord provided the interior of [xy] lies in the interior of P. P is weakly visible from [xy] if for every point v in P there exists a point w in [xy] such that [vw] lies in P. In this paper, star-shaped, L-convex, and convex polygons are characterized in terms of weak visibility properties from internal chords and star-shaped subsets of P. A new Krasnoselskii-type characterization of isothetic star-shaped polygons is also presented.
Keywords: Convex polygons, L-convex polygons, star-shaped polygons, chords, weak visibility, discrete and computational geometry.
2528 Design of a Tuning Fork Type UWB Patch Antenna
Authors: A. H. M. Zahirul Alam, Rafiqul Islam, Sheroz Khan
Abstract:
In this paper, a tuning-fork-type structure for an ultra-wideband (UWB) antenna is proposed. The antenna offers excellent performance for UWB systems, ranging from 3.7 GHz to 13.8 GHz. The antenna exhibits a 10 dB return loss bandwidth over the entire frequency band. The rectangular patch antenna is designed on an FR4 substrate and fed with a 50-ohm microstrip line, and operation in the UWB band is obtained by optimizing the width of the partial ground plane and the width and position of the feed line. The rectangular patch is then modified into the tuning fork structure while maintaining the UWB frequency range.
Keywords: Ultra wideband, antenna, microstrip, partial ground plane.
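For orientation, the standard transmission-line-model equations give the starting dimensions of the underlying rectangular patch before it is reshaped; the sketch below evaluates them for FR4. The design frequency and substrate height are assumed values, and the final tuning-fork geometry and partial ground plane come from optimization, not from these closed forms.

```python
import math

# Standard rectangular-patch starting-point equations (transmission-line
# model). These only give an initial patch design, not the final geometry.
c = 299_792_458.0
f0, eps_r, h = 6.85e9, 4.4, 1.6e-3   # assumed design frequency, FR4, height

W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))           # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
     ((eps_eff - 0.258) * (W / h + 0.8))                # fringing extension
L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL          # patch length
print(f"W = {W * 1000:.2f} mm, L = {L * 1000:.2f} mm")
```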
2527 Kalman Filter for Bilinear Systems with Application
Authors: Abdullah E. Al-Mazrooei
Abstract:
In this paper, we present a new kind of bilinear system in state-space form, whose evolution depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter suitable for the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
Keywords: Bilinear systems, state space model, Kalman filter.
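In the same spirit, a generic extended Kalman filter can track a bilinear Lotka-Volterra state by linearizing the quadratic terms at each step; the sketch below is such a generic EKF with assumed parameters and noise levels, not the authors' specific generalization.

```python
import numpy as np

# EKF for a bilinear (Lotka-Volterra-type) state model, where the
# dynamics depend on products of state components.
a, b, c, d, dt = 1.0, 0.5, 1.0, 0.3, 0.01

def f(x):                      # one Euler step of the bilinear dynamics
    x1, x2 = x
    return np.array([x1 + dt * (a * x1 - b * x1 * x2),
                     x2 + dt * (-c * x2 + d * x1 * x2)])

def F(x):                      # Jacobian of f at x
    x1, x2 = x
    return np.array([[1 + dt * (a - b * x2), -dt * b * x1],
                     [dt * d * x2, 1 + dt * (-c + d * x1)]])

H = np.eye(2)                  # both components measured (assumed)
Q, R = 1e-5 * np.eye(2), 0.05 * np.eye(2)

rng = np.random.default_rng(1)
x_true, x_est, P = np.array([2.0, 1.0]), np.array([1.5, 0.8]), np.eye(2)
for _ in range(2000):
    x_true = f(x_true)
    z = H @ x_true + rng.normal(0, 0.05, 2)      # noisy measurement
    # predict (F is evaluated at the prior estimate)
    x_est, P = f(x_est), F(x_est) @ P @ F(x_est).T + Q
    # update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
print("true:", x_true.round(3), "estimate:", x_est.round(3))
```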
2526 Cellulose Extraction from Pomelo Peel: Synthesis of Carboxymethyl Cellulose
Authors: J. Chumee, D. Seeburin
Abstract:
Cellulose was extracted from pomelo peel, and an etherification reaction was used to convert the cellulose to carboxymethyl cellulose (CMC). The pomelo peel was refluxed with 0.5 M HCl and 1 M NaOH solution at 90°C for 1 h and 2 h, respectively. The cellulose was bleached with calcium hypochlorite and used as the precursor. The precursor was soaked in a mixed solution of isopropyl alcohol and 40% w/v NaOH for 12 h. After that, chloroacetic acid was added and reacted at 55°C for 6 h. The optimum condition was 5 g of cellulose : 0.25 mol of NaOH : 0.07 mol of ClCH2COOH, with a yield of 78.00%. Moreover, the product had a degree of substitution (DS) of 0.54.
Keywords: Pomelo peel, Carboxymethyl cellulose, Cellulose.
2525 Organization of the Purchasing Function for Innovation
Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević
Abstract:
Innovations not only contribute to the competitiveness of the company but also have positive effects on revenues. On average, product innovations account for 14 percent of companies' sales. Innovation management has changed substantially during the last decade because of growing reliance on external partners. As a consequence, a new task arises for purchasing, as firms need to understand which suppliers actually have high potential to contribute to the firm's innovativeness and which do not. Proper organization of the purchasing function is important, since the majority of manufacturing companies deal with substantial material costs, which pass through the purchasing function. In the past, the purchasing function was largely seen as a transaction-oriented, clerical function, but today purchasing is the interface with supply chain partners contributing to innovations, be they product or process innovations. Therefore, the purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that some partner will take advantage of the other party), technological risks in terms of the complexity of products, manufacturing processes, and incoming materials, and finally market risks, which ultimately determine the value of the innovation. These risks are investigated in this work; specifically, technological risks, which concern the complexity of products and processes, are investigated more thoroughly. Buying components embodying such leading-edge technologies necessitates careful investigation of technical features and is therefore usually conducted by a team of experts. It is therefore hypothesized that the higher the technological risk, the higher the centralization of the purchasing function as an interface with other supply chain members. The main contribution of this research lies in the fact that the analysis was performed on a large data set of 1,493 companies from 25 countries, collected in the GMRG 4 survey. Most analyses of the purchasing function are done by case studies of innovative firms; this study therefore contributes empirical evaluations that can be generalized.
Keywords: Purchasing function organization, innovation, technological risk, GMRG 4 survey.
2524 A Review on Cloud Computing and Internet of Things
Authors: Sahar S. Tabrizi, Dogan Ibrahim
Abstract:
Cloud computing is a convenient model for on-demand network access to shared pools of configurable virtual computing resources, such as servers, networks, storage devices, applications, etc. The cloud serves as an environment for companies and organizations to use infrastructure resources without making any purchases, and they can access such resources wherever and whenever they need them. Cloud computing is useful for overcoming a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), scientific research, e-governance systems, decision support systems, ERP, web application development, mobile technology, etc. Companies can use cloud computing services to store large amounts of data that can be accessed from anywhere on Earth at any time. Such services are rented by client companies, where the actual rent depends upon the amount of data stored on the cloud and the amount of processing power used in a given time period. The resources offered by cloud service companies are flexible in the sense that user companies can increase or decrease their storage or processing power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, cloud computing service providers offer fast processors and application software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of cloud computing, giving its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of cloud computing and makes suggestions for possible future applications in the field of engineering.
Keywords: Cloud computing, cloud services, IaaS, PaaS, SaaS, IoT.
2523 WAF: An Interface Web Agent Framework
Authors: Xizhi Li, Qinming He
Abstract:
A trend in the agent community and in enterprises is the shift from closed to open architectures composed of a large number of autonomous agents. One of its implications is that interface agent frameworks are becoming more important in multi-agent systems (MAS), so that systems constructed for different application domains can share a common understanding of human-computer interface (HCI) methods, as well as of human-agent and agent-agent interfaces. However, interface agent frameworks usually receive less attention than other aspects of MAS. In this paper, we propose an interface web agent framework based on our former project, called WAF, and a distributed HCI template. A group of new functionalities and implications are discussed, such as web agent presentation, off-line agent reference, reconfigurable activation maps of agents, etc. Their enabling techniques and current standards (e.g., existing ontological frameworks) are also suggested and shown by examples from our own implementation in WAF.
Keywords: HCI, interface agent, MAS.
2522 Using Tabu Search to Analyze the Mauritian Economic Sectors
Authors: J. Cheeneebash, V. Beeharry, A. Gopaul
Abstract:
The aim of this paper is to express the ordering of the input-output matrix as a linear ordering problem, which is classified as NP-hard. We then use a Tabu search algorithm to find the best permutation among sectors in the input-output matrix, giving an optimal or near-optimal solution. This optimal permutation can be useful in designing policies and strategies for economists and government in their goal of maximizing the gross domestic product.
Keywords: Input-output matrix, linear ordering problem, Tabu search.
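A compact sketch of the approach: order the sectors so that the sum of inter-sector flows above the diagonal is maximized, exploring adjacent-swap neighbors under a short-term tabu list with an aspiration rule. The toy matrix, tenure, and iteration budget are illustrative assumptions, not the paper's data or exact parameterization.

```python
import random

def lop_objective(M, perm):
    """Sum of entries above the diagonal after reordering rows/columns."""
    n = len(perm)
    return sum(M[perm[i]][perm[j]] for i in range(n) for j in range(i + 1, n))

def tabu_search_lop(M, iters=500, tenure=7, seed=0):
    """Minimal Tabu search for the linear ordering problem."""
    rng = random.Random(seed)
    n = len(M)
    perm = list(range(n))
    rng.shuffle(perm)
    best, best_val = perm[:], lop_objective(M, perm)
    tabu = {}                              # (sector_i, sector_j) -> expiry
    for it in range(iters):
        candidates = []
        for i in range(n - 1):             # adjacent-swap neighborhood
            p = perm[:]
            p[i], p[i + 1] = p[i + 1], p[i]
            key = tuple(sorted((perm[i], perm[i + 1])))
            val = lop_objective(M, p)
            # admit non-tabu moves, or tabu moves that beat the best (aspiration)
            if tabu.get(key, -1) < it or val > best_val:
                candidates.append((val, p, key))
        if not candidates:
            continue
        val, perm, key = max(candidates)    # best admissible move
        tabu[key] = it + tenure
        if val > best_val:
            best, best_val = perm[:], val
    return best, best_val

# Toy 4-sector input-output matrix (illustrative values only)
M = [[0, 5, 1, 2], [1, 0, 6, 1], [4, 1, 0, 3], [2, 2, 1, 0]]
print(tabu_search_lop(M, iters=200))
```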