Search results for: Clarke and Wright Saving Method
17250 Orbit Determination from Two Position Vectors Using Finite Difference Method
Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.
Abstract:
An unusual approach is developed to determine the orbit of satellites/space objects. The determination of orbits is treated as a boundary value problem and solved using the finite difference method (FDM). Only the positions of the satellites/space objects are known at two end times, taken as boundary conditions. The finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is the satellite's equation of motion with a perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized, and the resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology has been tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. Towards this, twelve sets of two hours each have been taken into consideration, and only the positions at the end times of each of the twelve sets are used as boundary conditions. The algorithm is applied to all GPS satellites, and the results achieved using FDM are compared with the NGA precise orbits. The maximum RSS error is 0.48 [m] for position and 0.43 [mm/sec] for velocity. The algorithm is also applied to the IRNSS satellites for DOY 220, 2023; the maximum RSS error is 0.49 [m] for position and 0.28 [mm/sec] for velocity. Next, a simulation has been done for a highly elliptical orbit for DOY 63, 2023, for a duration of 6 hours. The RSS of the difference is 0.92 [m] in position and 1.58 [mm/sec] in velocity for orbital speeds above 5 km/sec, whereas it is 0.13 [m] in position and 0.12 [mm/sec] in velocity for orbital speeds below 5 km/sec. The results show that the newly developed method is reliable and accurate. Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
Keywords: finite difference method, grid generation, NavIC system, orbit perturbation
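The abstract reports that the discretized boundary-value problem is solved with the Tri-Diagonal Matrix Algorithm (TDMA). A minimal sketch of the Thomas algorithm that TDMA refers to is given below; the array names and the small example system are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tdma(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm (TDMA).

    a: sub-diagonal (length n, a[0] unused)
    b: main diagonal (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    """
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Illustrative 4x4 system (not from the paper); exact solution is [1, 1, 1, 1]
a = np.array([0.0, 1.0, 1.0, 1.0])   # sub-diagonal
b = np.array([4.0, 4.0, 4.0, 4.0])   # main diagonal
c = np.array([1.0, 1.0, 1.0, 0.0])   # super-diagonal
d = np.array([5.0, 6.0, 6.0, 5.0])
print(tdma(a, b, c, d))
```

In an FDM setting such as the one described, a solver of this kind would be called repeatedly on the discretized equations until the stated convergence criterion is met.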
17249 Influence of Driving Speed on Bearing Capacity Measurement of Intra-Urban Roads with the Traffic Speed Deflectometer (TSD)
Authors: Pahirangan Sivapatham, Barbara Esser, Andreas Grimmel
Abstract:
In times of limited public funds, increased social and environmental awareness, and limited availability of construction materials, a sustainable and resource-saving pavement management system is becoming more and more important. Therefore, knowledge about the structural condition, particularly the bearing capacity, and its consideration when planning maintenance measures for the subordinate network, i.e., state and municipal roads, is indispensable. According to experience, the recommended driving speed of the Traffic Speed Deflectometer (TSD) should be higher than 40 km/h. Maintaining this speed on intra-urban roads is hardly possible because of intersections, traffic lights, and speed limits. A sufficient body of experience for the evaluation of bearing capacity measurements with the TSD at lower speeds is not yet available. The aim of this study is to determine the lowest possible driving speed of the TSD for bearing capacity measurements on intra-urban roads. The manufacturer of the TSD used in this study states that measurements can be conducted at driving speeds above 5 km/h. It is well known that with decreasing driving speed, the viscous fraction in the response of the asphalt pavement increases; this must be taken into account when evaluating the bearing capacity data. In the scope of this study, several measurements were carried out at different speeds between 10 km/h and 60 km/h on selected intra-urban roads with the Pavement-Scanner of the University of Wuppertal, which is equipped with a TSD. The Pavement-Scanner is able to continuously determine the deflections of asphalt roads in flowing traffic at speeds of up to 80 km/h. The raw data are then aggregated to 10 m mean values so that, as a rule, a bearing capacity characteristic value can be determined for each 10 m road section. By analysing the obtained test results, the quality and validity of the determined data as a function of the driving speed of the TSD have been assessed. Moreover, the data and pictures of the additional measuring systems of the Pavement-Scanner, such as the High-Speed Road Monitor, ground penetration radar and front cameras, can be used to determine and eliminate irregularities in the pavement, which could influence the bearing capacity.
Keywords: bearing capacity measurement, traffic speed deflectometer, intra-urban roads, Pavement-Scanner, structural substance
17248 Arithmetic Operations Based on Double Base Number Systems
Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan
Abstract:
The Double Base Number System (DBNS) is an emerging system for representing a number using two bases, namely 2 and 3, which has applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The previous binary representation included only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. By using this algorithm, the number of digits required to represent a large number is smaller than with the standard binary method based on base 2. Hence, the computational speed is increased and computation time is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the binary digit 1 alone to represent any number (canonical form). The greedy algorithm represents the number in two ways: one using only positive summands and the other using both positive and negative summands. In this paper, these arithmetic operations are used for elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation for most day-to-day elliptic curve cryptography and appears to be considerably harder than the discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the help of the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and to extract the key during decryption.
Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm
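As a rough illustration of the greedy double-base expansion the abstract refers to, the sketch below repeatedly subtracts the largest term of the form 2^a·3^b not exceeding the remainder (positive summands only); the exponent bounds and the example number are illustrative assumptions.

```python
def greedy_dbns(n, max_a=64, max_b=41):
    """Greedy double-base (2,3) expansion: n = sum of 2^a * 3^b terms."""
    terms = []
    while n > 0:
        # largest 2^a * 3^b not exceeding the current remainder
        best = max(
            2**a * 3**b
            for a in range(max_a + 1)
            for b in range(max_b + 1)
            if 2**a * 3**b <= n
        )
        terms.append(best)
        n -= best
    return terms

print(greedy_dbns(841232))  # a handful of {2,3}-integer summands
```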
17247 Determination and Evaluation of the Need of Land Consolidation for Nationalization Purpose with the Survey Results
Authors: Turgut Ayten, Tayfun Çay, Demet Ayten
Abstract:
In this research, the nationalization method for obtaining land along the route of the Ankara-Konya high-speed train in Turkey and land consolidation for nationalization purposes as an alternative way of obtaining land were investigated. A survey prepared for landowners whose lands were nationalized and for officials of the institutions that carry out nationalization and land consolidation was applied, and the need for land consolidation for nationalization purposes is put forward. The study area is located in the Kolukısa and Sarıkaya neighbourhoods of the Kadınhanı district of Konya, Turkey, and land consolidation results were obtained for the selected area lying on the high-speed train route. The data obtained were shared with the landowners in the research area, and their preference between the nationalization method and land consolidation for nationalization purposes was questioned. In addition, officials of the organizations and institutions primarily used by the state for obtaining land needed for state investments, as well as officials who carry out land consolidation, were questioned about the efficiency of the methods they use and whether they have tried different methods.
Keywords: nationalization, land consolidation, land consolidation for nationalization
17246 Analysis of Histogram Asymmetry for Waste Recognition
Authors: Janusz Bobulski, Kamila Pasternak
Abstract:
Despite many years of effort and research, the problem of waste management is still current. So far, no fully effective waste management system has been developed. Many programs and projects improve statistics on the percentage of waste recycled every year. In these efforts, it is worth using modern computer vision techniques supported by artificial intelligence. In the article, we present a method of identifying plastic waste based on the analysis of the asymmetry of the histogram of the image containing the waste. The method is simple but effective (94%), which allows it to be implemented on devices with low computing power, in particular on microcomputers. Such devices will be used both at home and in waste sorting plants.
Keywords: waste management, environmental protection, image processing, computer vision
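A minimal sketch of the kind of histogram-asymmetry feature the abstract describes, computed here as the skewness of a grayscale intensity histogram; the use of OpenCV for loading, the file name and the decision threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import cv2  # OpenCV, assumed here only for image loading

def histogram_asymmetry(path, bins=256):
    """Return the skewness of the grayscale intensity histogram of an image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()                        # normalised histogram
    levels = np.arange(bins)
    mean = np.sum(levels * p)
    std = np.sqrt(np.sum((levels - mean) ** 2 * p))
    return np.sum(((levels - mean) / std) ** 3 * p)   # third standardised moment

# Hypothetical decision rule based on the asymmetry value:
# score = histogram_asymmetry("waste_sample.png")
# label = "plastic" if score > 0.5 else "other"       # threshold is an assumption
```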
17245 The Transport of Radical Species to Single and Double Strand Breaks in the Liver’s DNA Molecule by a Hybrid Method of Type Monte Carlo - Diffusion Equation
Abstract:
The therapeutic utility of certain Auger emitters such as iodine-125 depends on their position within the cell nucleus. For diagnostic use, and to keep cell damage as low as possible, it is preferable to have the radionuclide localized outside the cell, or at least outside the nucleus. One solution to this problem is to consider markers capable of conveying anticancer drugs to the tumor site regardless of their location within the human body. The objective of this study is to simulate the impact of a complex such as bleomycin on single and double strand breaks in the DNA molecule. The simulation consists of the following steps: construction of the BLM-Fe-DNA complex; simulation of electron transport from the metastable excited state of Fe-57 by the Monte Carlo method; and treatment of the chemical reactions in the considered environment by the diffusion equation. For the physical, physico-chemical and finally chemical steps, the geometry of the complex is considered as a sphere of 50 nm centered on the binding site, and the mathematical method used is a step-by-step method based on Monte Carlo codes.
Keywords: concentration, yield, radical species, bleomycin, excitation, DNA
17244 Variational Evolutionary Splines for Solving a Model of Temporomandibular Disorders
Authors: Alberto Hananel
Abstract:
The aim of this work is to model the occlusion of a person with temporomandibular disorders as an evolutionary equation and to approximate its solution by constructing and characterizing discrete variational splines. To formulate the problem, certain boundary conditions have been considered. After showing the existence and uniqueness of the solution of such a problem, a convergence result for a discrete variational evolutionary spline is shown. A stress analysis of the occlusion of a human jaw with temporomandibular disorders by finite elements is carried out in FreeFem++ in order to prove the validity of the presented method.
Keywords: approximation, evolutionary PDE, finite element method, temporomandibular disorders, variational spline
17243 Direct Strength Method Approach for Indian Cold Formed Steel Sections with and Without Perforation for Compression Member
Authors: K. Raghu, Altafhusen P. Pinjar
Abstract:
Cold-formed steel sections are extensively used in industrial and many other non-industrial constructions worldwide; they are a relatively new concept in India. Cold-formed steel sections have been developed as more economical building solutions than the alternative heavier hot-rolled sections in the commercial and residential markets. Cold-formed steel (CFS) structural members are commonly manufactured with perforations to accommodate plumbing, electrical, and heating conduits in the walls and ceilings of buildings. Current design methods available to engineers for predicting the strength of CFS members with perforations are prescriptive and limited to specific perforation locations, spacings, and sizes. The Direct Strength Method (DSM), a relatively new design method for CFS members validated for members with and without perforations, predicts the ultimate strength of general CFS members from the elastic buckling properties of the member cross-section. The design compression strength and flexural strength of Indian standard sections (IS 811-1987) are calculated as per the North American Specification (AISI S100-2007) and the software CUFSM 4.05.
Keywords: direct strength, cold formed, perforations, CUFSM
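As a rough illustration of the DSM design equations referred to above, the sketch below evaluates the nominal axial strength for global (flexural) buckling in the column-curve form given in AISI S100; the input values are illustrative assumptions, and local/distortional checks as well as perforation reductions are omitted.

```python
import math

def dsm_global_column_strength(Py, Pcre):
    """Nominal axial strength for global buckling per the DSM column curve.

    Py   : squash load = gross area * yield stress
    Pcre : critical elastic global (Euler) buckling load
    """
    lam_c = math.sqrt(Py / Pcre)
    if lam_c <= 1.5:
        return (0.658 ** (lam_c ** 2)) * Py
    return (0.877 / lam_c ** 2) * Py

# Illustrative numbers only (kN), not from the paper
Py, Pcre = 180.0, 240.0
print(round(dsm_global_column_strength(Py, Pcre), 1))
```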
17242 Comparison of Analytical Method and Software for Analysis of Flat Slab Subjected to Various Parametric Loadings
Authors: Hema V. Vanar, R. K. Soni, N. D. Shah
Abstract:
Slabs supported directly on columns without beams are known as flat slabs. Flat slabs are highly versatile elements widely used in construction, providing minimum depth, fast construction, and flexible column grids. The main objective of this thesis is the comparison of an analytical method and software for the analysis of flat slabs subjected to various parametric loadings. The study presents the analysis of a flat slab performed under different types of gravity loading.
Keywords: flat slab, parametric load, analysis, software
17241 Enhancing the Network Security with Gray Code
Authors: Thomas Adi Purnomo Sidhi
Abstract:
Nowadays, the network is an essential need in almost every part of human daily activities. People can now seamlessly connect to others through the Internet. With advanced technology, our personal data can be accessed more easily. One of the many components we are concerned with in delivering the best network is the security issue. This paper proposes a method that provides more options for security. The research aims to improve network security by focusing on the physical layer, which is the first layer of the OSI model. The layer consists of the basic networking hardware transmission technologies of a network. Using the observation method, the research produces a schematic design for enhancing network security through a Gray code converter.
Keywords: network, network security, grey code, physical layer
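A minimal sketch of the binary-to-Gray-code conversion that such a converter would implement, together with the inverse mapping; the 8-bit range used in the round-trip check is an illustrative assumption.

```python
def binary_to_gray(b: int) -> int:
    """Convert a binary-coded integer to its Gray-code equivalent."""
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    """Convert a Gray-coded integer back to plain binary."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Round-trip check over an assumed 8-bit range
for value in range(256):
    assert gray_to_binary(binary_to_gray(value)) == value
print("round trip OK")
```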
17240 Outcome of Unilateral Retinoblastoma: A Ten-Year Experience of Children's Cancer Hospital Egypt
Authors: Ahmed Elhussein, Hossam El-Zomor, Adel Alieldin, Mahmoud A. Afifi, Abdullah Elhusseiny, Hala Taha, Amal Refaat, Soha Ahmed, Mohamed S. Zagloul
Abstract:
Background: The majority of children with retinoblastoma (60%) have disease in one eye only (unilateral disease). This is a retrospective study evaluating two different treatment modalities in those patients with respect to saving their lives and vision. Methods: Four hundred and four patients were diagnosed with unilateral intraocular retinoblastoma at Children's Cancer Hospital Egypt (CCHE) from July 2007 until December 2017. Management strategies included primary enucleation versus ocular salvage treatment. Results: Patients presented at a mean age of 24.5 months (range 1.2-154.3 months). According to the international retinoblastoma classification, group D (n=172, 42%) was the most common, followed by group E (n=142, 35%), group C (n=63, 16%), and group B (n=27, 7%). All patients were alive at the end of the study except four patients who died, giving a 5-year overall survival of 98.3% [CI, 96.5-100%]. Patients presenting with advanced disease and poor visual prognosis (n=241, 59.6%) underwent primary enucleation, with 6 cycles of adjuvant chemotherapy if they had high-risk features in the enucleated eye; only four patients out of 241 ended up with either extraocular metastasis (n=3) or death (n=1). Systemic chemotherapy and focal therapy were the primary treatment for those who presented with favorable disease status and good visual prognosis (n=163, 40.4%); seventy-seven of them (47%) ended up with a pre-defined event (enucleation, EBRT, off-protocol chemotherapy or secondary malignancy). Ocular survival for patients who received primary chemotherapy + focal therapy was 50.9% (CI, 43.5-59.6%) at 3 years and 46.9% (CI, 39.3-56%) at 5 years. Comparison between upfront enucleation and primary chemotherapy for the occurrence of extraocular metastasis revealed no statistical difference between them except in group D (p value). For the occurrence of death, there was no statistical difference in any classification group. Conclusion: In retinoblastoma, primary chemotherapy is a reasonable option and has a good probability of ocular salvage without increasing the risk of metastasis in comparison to upfront enucleation, except in group D.
Keywords: CCHE, chemotherapy, enucleation, retinoblastoma
17239 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method, and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good trade-off between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending on the economic and operational conditions during the development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.
Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility
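A minimal sketch of the kind of parametric (sigmoidal) loss-development curve fit the abstract describes, fitted here by least squares with SciPy; the development ages, cumulative loss ratios and the three-parameter logistic form are illustrative assumptions rather than the authors' actual model, and no covariates are included.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_curve(t, ult, k, t0):
    """Three-parameter logistic development curve: cumulative loss at age t."""
    return ult / (1.0 + np.exp(-k * (t - t0)))

# Illustrative development ages (quarters) and cumulative loss ratios for one cohort
ages = np.array([1, 2, 3, 4, 6, 8, 10, 12, 16, 20], dtype=float)
cum_loss = np.array([0.02, 0.05, 0.10, 0.17, 0.32, 0.45, 0.54, 0.60, 0.66, 0.68])

params, _ = curve_fit(logistic_curve, ages, cum_loss, p0=[0.7, 0.3, 6.0])
ult, k, t0 = params
print(f"fitted ultimate loss ratio ~ {ult:.3f}")
print(f"expected further development beyond age 20 ~ {ult - logistic_curve(20.0, *params):.3f}")
```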
17238 STC Parameters versus Real Time Measured Parameters to Determine Cost Effectiveness of PV Panels
Authors: V. E. Selaule, R. M. Schoeman, H. C. Z. Pienaar
Abstract:
Research has shown that solar energy is the renewable energy resource with the most potential when compared to other renewable energy resources in South Africa. There are many makes of photovoltaic (PV) panels on the market, and it is difficult to assess which to use. PV panel manufacturers use Standard Test Conditions (STC) to rate their PV panels. STC conditions are different from the actual operating environmental conditions where the PV panels are used. This paper describes a practical method to determine the most cost-effective available PV panel. The method shows that PV panel manufacturer STC ratings cannot be used to select a cost-effective PV panel.
Keywords: PV orientation, PV panel, PV STC, solar energy
17237 Maintaining User-Level Security in Short Message Service
Authors: T. Arudchelvam, W. W. E. N. Fernando
Abstract:
The mobile phone has become an essential thing in our lives. Therefore, security is the most important thing to be considered in mobile communication. The short message service is the cheapest way of communication via mobile phones; therefore, security is very important in the short message service as well. This paper presents a method to maintain security at the user level. Different types of encryption methods are used to implement user-level security in mobile phones. The Caesar cipher, Rail Fence, Vigenere cipher and RSA are used as encryption methods in this work, and the Caesar cipher and Rail Fence methods are enhanced and implemented. The beauty of this work is that the user can select the encryption method and the key. Therefore, by changing the encryption method and the key from time to time, the user can ensure the security of messages. With this work, users can safely send and receive messages and also protect the information in their own mobile phones from unauthorised and unwanted people.
Keywords: SMS, user level security, encryption, decryption, short message service, mobile communication
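A minimal sketch of two of the classical ciphers named above in their textbook form (the paper's enhanced variants are not specified in the abstract); the alphabet handling, the sample message and the key values are illustrative assumptions.

```python
def caesar_encrypt(text: str, shift: int) -> str:
    """Classical Caesar cipher: shift alphabetic characters, keep the rest."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def rail_fence_encrypt(text: str, rails: int) -> str:
    """Classical Rail Fence cipher: write in a zig-zag, read row by row."""
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in text:
        rows[row].append(ch)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    return ''.join(''.join(r) for r in rows)

msg = "MEET AT NOON"
print(caesar_encrypt(msg, 3))       # key = 3 (assumed)
print(rail_fence_encrypt(msg, 3))   # 3 rails (assumed)
```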
17236 Factors Associated with Weight Loss Maintenance after an Intervention Program
Authors: Filipa Cortez, Vanessa Pereira
Abstract:
Introduction: The main challenge of obesity treatment is long-term weight loss maintenance. The 3 phases method is a weight loss program that combines a low-carb and moderately high-protein diet, food supplements and a weekly one-to-one consultation with a certified nutritionist. Sustained weight control is the ultimate goal of phase 3. The success criterion was a minimum loss of 10% of initial weight and its maintenance after 12 months. Objective: The aim of this study was to identify factors associated with successful weight loss maintenance 12 months after the end of the 3 phases method. Methods: The study included 199 subjects who achieved their weight loss goal (phase 3). Weight and body mass index (BMI) were obtained at baseline and every week until the end of the program. Therapeutic adherence was measured weekly on a Likert scale from 1 to 5. Subjects were considered in compliance with the nutritional recommendations and supplementation when their classification was ≥ 4. Twelve months after the end of the method, the current weight and the number of previous weight-loss attempts were collected by telephone interview. Statistical significance was assumed at p-values < 0.05. Statistical analyses were performed using SPSS software v.21. Results: 65.3% of subjects met the success criterion. The factors that significantly predicted weight loss maintenance were a greater initial percentage weight loss during the weight loss intervention (OR=1.44) and a higher number of consultations in phase 3 (OR=1.10). Conclusion: These findings suggest that the percentage weight loss during the weight loss intervention and the number of consultations in phase 3 may facilitate maintenance of weight loss after the 3 phases method.
Keywords: obesity, weight maintenance, low-carbohydrate diet, dietary supplements
17235 Use of Quasi-3D Inversion of VES Data Based on Lateral Constraints to Characterize the Aquifer and Mining Sites of an Area Located in the North-East of Figuil, North Cameroon
Authors: Fofie Kokea Ariane Darolle, Gouet Daniel Hervé, Koumetio Fidèle, Yemele David
Abstract:
The electrical resistivity method is successfully used in this paper in order to obtain a clearer picture of the subsurface of the North-East of Figuil in northern Cameroon. It is worth noting that this method is most often used when the objective of the study is to image shallow subsoils by considering them as a set of stratified ground layers. The problem to be solved is very often environmental, and in this case, it is necessary to perform an inversion of the data in order to have a complete and accurate picture of the parameters of the said layers. In this work, thirty-three (33) Schlumberger VES soundings were carried out on an irregular grid to investigate the subsurface of the study area. The 1D inversion, applied as a preliminary modeling tool and correlated with the mechanical drilling results, indicates a complex subsurface lithology distribution mainly consisting of marbles and schists. Moreover, the quasi-3D inversion with lateral constraints shows that the misfit between the observed field data and the model response is acceptable, with a value lower than 10%. The method also reveals the existence of two water-bearing formations in the considered area: the first is the schist or weathering aquifer (unsuitable), and the other is the marble or fractured aquifer (suitable). The final quasi-3D inversion results and geological models indicate suitable sites for groundwater prospecting and for mining exploitation, thus allowing the economic development of the study area.
Keywords: electrical resistivity method, 1D inversion, quasi 3D inversion, groundwaters, mining
17234 Laban Movement Analysis Using Kinect
Authors: Bernstein Ran, Shafir Tal, Tsachor Rachelle, Studd Karen, Schuster Assaf
Abstract:
Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data could be significantly leveraged if the Laban qualities were recognized automatically. This paper presents an automated method for recognizing Laban qualities from motion capture skeletal recordings, demonstrated on the output of Microsoft's Kinect V2 sensor.
Keywords: Laban movement analysis, multitask learning, Kinect sensor, machine learning
17233 Potential of Mineral Composition Reconstruction for Monitoring the Performance of an Iron Ore Concentration Plant
Authors: Maryam Sadeghi, Claude Bazin, Daniel Hodouin, Laura Perez Barnuevo
Abstract:
The performance of a separation process is usually evaluated using performance indices calculated from elemental assays readily available from the chemical analysis laboratory. However, the separation process performance is essentially related to the properties of the minerals that carry the elements and not to those of the elements. Since elements or metals can be carried by both valuable and gangue minerals in the ore, and each mineral responds differently to a mineral processing method, the use of elemental assays alone could lead to erroneous or uncertain conclusions about process performance. This paper discusses the advantages of using performance indices calculated from mineral contents, such as mineral recoveries, for process performance assessment. A method is presented that uses elemental assays to estimate the mineral content of the solids in the various process streams. The method combines the stoichiometric composition of the minerals and mass conservation constraints for the minerals through the concentration process to estimate the mineral contents from elemental assays. The advantage of assessing a concentration process using mineral-based performance indices is illustrated for an iron ore concentration circuit.
Keywords: data reconciliation, iron ore concentration, mineral composition, process performance assessment
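A minimal sketch of estimating mineral contents from elemental assays using a stoichiometric composition matrix and non-negative least squares; the mineral set, elemental compositions and assay values are illustrative assumptions, and the mass-conservation reconciliation described in the abstract is not included here.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: assumed minerals (hematite, quartz, a generic gangue silicate)
# Rows: elements (Fe, Si, Al); entries are mass fractions of each element in each mineral
A = np.array([
    [0.699, 0.000, 0.150],   # Fe
    [0.000, 0.467, 0.120],   # Si
    [0.000, 0.000, 0.100],   # Al
])

# Assumed elemental assays of a process stream (mass fractions)
b = np.array([0.620, 0.060, 0.008])

x, residual = nnls(A, b)            # non-negative mineral mass fractions
minerals = ["hematite", "quartz", "other gangue"]
for name, frac in zip(minerals, x):
    print(f"{name}: {100 * frac:.1f} %")
print(f"residual norm: {residual:.4f}")
```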
17232 Three-Dimensional Computer Graphical Demonstration of Calcified Tissue and Its Clinical Significance
Authors: Itsuo Yokoyama, Rikako Kikuti, Miti Sekikawa, Tosinori Asai, Sarai Tsuyoshi
Abstract:
Introduction: Vascular access for hemodialysis therapy is often difficult, even for experienced medical personnel. Ultrasound-guided needle placement has been performed occasionally but is not always helpful in certain cases with complicated vascular anatomy. Obtaining precise anatomical knowledge of the vascular structure is important to prevent access-related complications. With an augmented reality (AR) device such as AR glasses, the virtual vascular structure is shown superimposed on the actual patient's vessels, thus enabling the operator to maneuver catheter placement easily with both hands free. We herein report our method of AR-guided vascular access in dialysis treatment. Methods: A three-dimensional (3D) object of the arm with the arteriovenous fistula is created with computer graphics using 3D software from data obtained by computed tomography, ultrasound echogram, and image scanner. The 3D vascular object thus created is viewed on the screen of the AR digital display device (such as AR glasses or an iPad). The picture of the vascular anatomical structure becomes visible, superimposed over the real patient's arm, so that needle insertion can be performed easily under the guidance of AR visualization. By this method, the technical difficulty of catheter placement for dialysis can be lessened and the procedure performed safely. Considerations: Virtual reality technology has been applied in various fields, and medical use is no exception, yet AR devices have not been widely used among medical professionals. Visualization of the virtual vascular object can be achieved by creating an accurate three-dimensional object with the help of computer graphics techniques. Although our experience is limited, this method is applicable with relative ease, and our accumulating evidence suggests that our method of vascular access with the use of AR can be promising.
Keywords: abdominal aorta, calcification, extraskeletal, dialysis, computer graphics, 3DCG, CT, calcium, phosphorus
17231 Intrusion Detection System Using Linear Discriminant Analysis
Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou
Abstract:
Most existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and inaccurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudo-inverse to avoid the singularity of the within-class scatter matrix due to the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the PCA+LDA method clearly outperforms the pseudo-inverse LDA method when large training data are available.
Keywords: LDA, pseudo-inverse, PCA, IDS, NSL-KDD, KDDcup99
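A minimal sketch of the first variant described above (PCA for dimensionality reduction, then LDA, then a KNN classifier), built here with scikit-learn on synthetic data; the dataset, the number of retained components and the value of k are illustrative assumptions, not the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for network traffic features (not KDDcup99 / NSL-KDD)
X, y = make_classification(n_samples=2000, n_features=40, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# PCA first reduces the dimension (helping avoid a singular within-class
# scatter matrix), then LDA projects the data, then KNN classifies.
model = make_pipeline(
    PCA(n_components=15),                       # assumed number of components
    LinearDiscriminantAnalysis(),
    KNeighborsClassifier(n_neighbors=5),        # assumed k
)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```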
17230 The Application of Dynamic Network Process to Environment Planning Support Systems
Authors: Wann-Ming Wey
Abstract:
In recent years, in addition to facing external threats such as energy shortages and climate change, many cities are troubled by traffic congestion and environmental pollution. Considering that private automobile-oriented urban development has produced many negative environmental and social impacts, transit-oriented development (TOD) has been considered a sustainable urban model. TOD encourages public transport combined with friendly walking and cycling environment designs; non-motorized modes help improve human health, save energy, and reduce carbon emissions. Because environmental changes often affect planners' decision-making, this research applies the dynamic network process (DNP), which incorporates the time-dependent concept, to promoting friendly walking and cycling environment designs as an advanced planning support system for environmental improvement. This research aims to discuss which design strategies can improve a friendly walking and cycling environment under TOD. First of all, we collate and analyze environment design factors by reviewing the relevant literature and divide fifteen such factors into the three aspects of "safety", "convenience", and "amenity". Furthermore, we use a fuzzy Delphi technique (FDT) expert questionnaire to filter out the more important design criteria for the study case. Finally, we use a DNP expert questionnaire to obtain the weight changes at different time points for each design criterion. Based on the changing trends of each criterion weight, we are able to develop appropriate design strategies as a reference for planners to allocate resources in a dynamic environment. In order to illustrate the proposed approach, Taipei City is used as an empirical study, and the results are analyzed in depth to explain the application of the approach.
Keywords: environment planning support systems, walking and cycling, transit-oriented development (TOD), dynamic network process (DNP)
17229 Dynamic Test for Sway-Mode Buckling of Columns
Authors: Boris Blostotsky, Elia Efraim
Abstract:
Testing of columns in sway mode is performed in order to determine the maximal allowable load, limited by plastic deformations of the columns or their end connections, and the critical load, limited by column stability. The motivation for determining an accurate value of the critical force is its use as follows: the critical load is the maximal allowable load for a given column configuration and can be used as a criterion of perfection; it is used in calculations prescribed by standards for the design of structural elements under the combined action of compression and bending; and it is used for the verification of theoretical stability analyses for various column end conditions. In the present work, a new non-destructive method for the determination of the critical buckling load of columns in sway mode is proposed. The method allows measurements to be performed during tests under loads that exceed the column's critical load without loss of stability. The possibility of such loading is achieved by the structure of the loading system. The system is built as a frame with a rigid girder; one of the columns is the tested column and the other is an additional two-hinged strut. Loading of the frame is carried out by a flexible traction element attached to the girder. The load applied to the tested column can reach values that exceed the critical load through the choice of the parameters of the traction element and the additional strut. The lateral stiffness of the system and the critical load of the column are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were performed based on the developed dependency of the lateral stiffness of the system on the vertical load, taking into account the semi-rigid connections at the column's ends. Agreement between the obtained results was established. The method can be used for testing real full-size columns in industrial conditions.
Keywords: buckling, columns, dynamic method, semi-rigid connections, sway mode
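As a rough numerical illustration of how a critical load can be inferred from the dependency of lateral stiffness on vertical load, the sketch below fits a straight line to (load, stiffness) pairs and extrapolates to the load at which the fitted stiffness vanishes; the linear form and the data are illustrative assumptions, not the dependency actually developed by the authors.

```python
import numpy as np

# Assumed measurements: vertical load P (kN) and lateral stiffness K (kN/m)
P = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
K = np.array([510.0, 395.0, 290.0, 180.0, 70.0])

# Assuming lateral stiffness decreases roughly linearly with vertical load,
# the critical load is estimated as the load at which the fitted stiffness is zero.
slope, intercept = np.polyfit(P, K, 1)
P_cr = -intercept / slope
print(f"estimated critical load ~ {P_cr:.1f} kN")
```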
17228 RBS Characteristic of Cd1−xZnxS Thin Film Fabricated by Vacuum Deposition Method
Authors: N. Dahbi, D. E. Arafah
Abstract:
Cd1−xZnxS thin films have been fabricated from ZnS/CdS/ZnS multilayer thin film systems by the vacuum deposition method, and the Rutherford backscattering (RBS) technique has been applied in order to determine the structure, composition, depth profile, and stoichiometry of these films. The influence of chemical and heat treatments on the produced films has also been investigated; the RBS spectra of the films showed that homogeneous Cd1−xZnxS can be synthesized with x=0.45.
Keywords: Cd1−xZnxS, chemical treatment, depth profile, heat treatment, RBS, RUMP simulation, thin film, vacuum deposition, ZnS/CdS/ZnS
17227 Simultaneous Determination of Cefazolin and Cefotaxime in Urine by HPLC
Authors: Rafika Bibi, Khaled Khaladi, Hind Mokran, Mohamed Salah Boukhechem
Abstract:
A high-performance liquid chromatographic method with ultraviolet detection at 264 nm was developed and validated for the quantitative determination and separation of cefazolin and cefotaxime in urine. The mobile phase consisted of acetonitrile and phosphate buffer pH 4.2 (15:85, v/v) pumped through an ODB 250 × 4.6 mm, 5 µm column at a flow rate of 1 ml/min with an injection loop of 20 µl. Under these conditions, validation showed that the method is linear in the range of 0.01 to 10 µg/ml with a good correlation coefficient (R > 0.9997). The retention times of cefotaxime and cefazolin were 9.0 and 10.1, respectively. The statistical evaluation of the method was examined by means of within-day (n=6) and day-to-day (n=5) assays and was found to be satisfactory, with high accuracy and precision.
Keywords: cefazolin, cefotaxime, HPLC, bioscience, biochemistry, pharmaceutical
17226 Verification of the Necessity of Maintenance Anesthesia with Isoflurane after Induction with Tiletamine-Zolazepam in Dogs Using Dixon's Up-and-Down Method
Authors: Sonia Lachowska, Agnieszka Antonczyk, Joanna Tunikowska, Pawel Kucharski, Bartlomiej Liszka
Abstract:
Isoflurane is one of the most commonly used anaesthetic gases in veterinary medicine. Due to its numerous side effects, intravenous anaesthesia is increasingly used instead. The combination of tiletamine with zolazepam has proved to be a safe and pharmacologically beneficial combination: an analgesic effect, fast induction, effective myorelaxation, and smooth recovery are its main advantages. In the following study, the authors verified the necessity of isoflurane to maintain anaesthesia in dogs after the use of tiletamine-zolazepam for induction. Twelve dogs were selected with the inclusion criteria of ASA (American Society of Anesthesiologists) status I or II. Each dog received intramuscular premedication with medetomidine-butorphanol (10 μg/kg and 0.1 mg/kg, respectively). Fifteen minutes after premedication, preoxygenation lasting 5 minutes was started. Anaesthesia was induced with tiletamine-zolazepam at a dose of 5 mg/kg. The dogs were then intubated, and anaesthesia was maintained with isoflurane. Initially, the MAC (minimum alveolar concentration) was set to 0.7 vol.%. After 15 minutes of equilibration, the MAC was determined using Dixon's up-and-down method. Painful stimulation included compression of the paw pad, phalanx and groin area, and clamping a Backhaus clamp on the skin. Hemodynamic and ventilation parameters were measured and noted at 2-minute intervals. In this method, the positive or negative response to the noxious stimulus is assessed and then used to determine the isoflurane concentration for the next patient; the response is only assessed once in each patient. The results show that isoflurane is not necessary to maintain anaesthesia after tiletamine-zolazepam induction. This is clinically important because the side effects resulting from using isoflurane are eliminated.
Keywords: anaesthesia, dog, isoflurane, Dixon's up-and-down method, tiletamine, zolazepam
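A minimal sketch of the up-and-down bookkeeping described above: each subject's concentration depends only on the previous subject's response (a positive response means anaesthesia was insufficient, so the next concentration is raised), and a simple estimate is taken from the crossover pairs. The starting concentration, step size, estimator and simulated responses are illustrative assumptions, not the study's data.

```python
def up_and_down(responses, start=0.7, step=0.1):
    """Dixon's up-and-down sequence.

    responses: list of booleans, True = positive response to the stimulus
               (anaesthesia insufficient), assessed once per subject.
    Returns the concentrations tested and a simple crossover-based estimate.
    """
    concs = [start]
    for moved in responses[:-1]:
        nxt = concs[-1] + step if moved else concs[-1] - step
        concs.append(round(nxt, 2))
    # crossover pairs: consecutive subjects with opposite responses
    cross = [(concs[i] + concs[i + 1]) / 2
             for i in range(len(responses) - 1)
             if responses[i] != responses[i + 1]]
    estimate = sum(cross) / len(cross) if cross else None
    return concs, estimate

# Simulated responses for 12 dogs (True = responded to the noxious stimulus)
responses = [True, True, False, True, False, False,
             True, False, True, False, True, False]
concs, est = up_and_down(responses)
print(concs)
print(f"estimated effective concentration ~ {est:.2f} vol.%")
```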
17225 Sol-Gel Derived ZnO Nanostructures: Optical Properties
Authors: Sheo K. Mishra, Rajneesh K. Srivastava, R. K. Shukla
Abstract:
In the present work, we report on the optical properties, including UV-vis absorption and photoluminescence (PL), of ZnO nanostructures synthesized by the sol-gel method. Structural and morphological investigations have been performed by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The XRD result confirms the formation of the hexagonal wurtzite phase of the ZnO nanostructures, and the presence of various diffraction peaks suggests a polycrystalline nature. The XRD pattern exhibits no additional peaks due to by-products such as Zn(OH)2. The average crystallite size of the prepared ZnO sample, corresponding to the maximum-intensity peak, is found to be ~38.22 nm. The SEM micrograph shows different nanostructures of pure ZnO. The photoluminescence (PL) spectrum shows several emission peaks around 353 nm, 382 nm, 419 nm, 441 nm, 483 nm and 522 nm. The obtained results suggest that the prepared phosphors are quite suitable for optoelectronic applications.
Keywords: ZnO, sol-gel, XRD, PL
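A crystallite size of the kind quoted above is typically obtained from XRD peak broadening via the Scherrer equation, D = Kλ / (β cosθ). A minimal sketch follows, in which the peak width, peak position and shape factor are illustrative assumptions rather than the paper's measured values.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    fwhm_deg      : peak full width at half maximum in degrees (2-theta scale)
    two_theta_deg : peak position in degrees (2-theta)
    wavelength_nm : X-ray wavelength (Cu K-alpha assumed)
    K             : Scherrer shape factor (0.9 assumed)
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative values for a ZnO (101) reflection (not taken from the paper)
print(f"D ~ {scherrer_size(fwhm_deg=0.22, two_theta_deg=36.25):.1f} nm")
```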
17224 Prediction and Analysis of Human Transmembrane Transporter Proteins Based on SCM
Authors: Hui-Ling Huang, Tamara Vasylenko, Phasit Charoenkwan, Shih-Hsiang Chiu, Shinn-Ying Ho
Abstract:
The knowledge of human transporters is still limited due to the technically demanding procedure of crystallization required for the structural characterization of transporters by spectroscopic methods. It is therefore desirable to develop bioinformatics tools for the effective analysis of available sequences in order to identify human transmembrane transporter proteins (HMTPs). This study proposes a scoring card method (SCM)-based approach for predicting HMTPs. We estimated a set of propensity scores of dipeptides to be HMTPs using SCM from the training dataset (HTS732), consisting of 366 HMTPs and 366 non-HMTPs. SCM, using the estimated propensity scores of 20 amino acids and 400 dipeptides to be HMTPs, has a training accuracy of 87.63% and a test accuracy of 66.46%. The five top-ranked dipeptides are LD, NV, LI, KY, and MN, with scores 996, 992, 989, 987, and 985, respectively. The five amino acids with the highest propensity scores are Ile, Phe, Met, Gly, and Leu, showing that hydrophobic residues are mostly highly scored. Furthermore, the obtained propensity scores were used to analyze the physicochemical properties of human transporters.
Keywords: dipeptide composition, physicochemical property, human transmembrane transporter proteins, human transmembrane transporters binding propensity, scoring card method
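A minimal sketch of how a sequence is scored with dipeptide propensity scores in an SCM-style classifier: the score is the propensity-weighted dipeptide composition, compared against a threshold. Only the five top-ranked dipeptides quoted in the abstract are used; the default score for unlisted dipeptides, the example sequence and the decision threshold are illustrative assumptions.

```python
from collections import Counter

# Propensity scores for the five top-ranked dipeptides quoted in the abstract;
# the method itself estimates scores for all 400 dipeptides.
propensity = {"LD": 996, "NV": 992, "LI": 989, "KY": 987, "MN": 985}
DEFAULT_SCORE = 500          # assumed neutral score for unlisted dipeptides

def scm_score(sequence: str) -> float:
    """Propensity-weighted dipeptide composition of a protein sequence."""
    dipeptides = [sequence[i:i + 2] for i in range(len(sequence) - 1)]
    counts = Counter(dipeptides)
    total = sum(counts.values())
    return sum(propensity.get(dp, DEFAULT_SCORE) * c
               for dp, c in counts.items()) / total

seq = "MKYLDNVLILDKY"                  # hypothetical sequence
threshold = 600                         # assumed decision threshold
score = scm_score(seq)
print(f"score = {score:.1f} -> {'HMTP' if score >= threshold else 'non-HMTP'}")
```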
17223 Frequency Identification of Wiener-Hammerstein Systems
Authors: Brouri Adil, Giri Fouad
Abstract:
The problem of identifying Wiener-Hammerstein systems is addressed in the presence of two linear subsystems of totally unknown structure. Here, the nonlinear element is allowed to be noninvertible. The system identification problem is dealt with by developing a two-stage frequency identification method: a set of points of the nonlinearity is estimated first, and then the frequency gains of the two linear subsystems are determined at a number of frequencies. The method involves Fourier series decomposition and only requires periodic excitation signals. All involved estimators are shown to be consistent.
Keywords: Wiener-Hammerstein systems, Fourier series expansions, frequency identification, automation science
17222 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently, while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset only if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
17221 Identification of Coauthors in Scientific Database
Authors: Thiago M. R Dias, Gray F. Moita
Abstract:
The analysis of scientific collaboration networks has contributed significantly to improving the understanding of how the process of collaboration between researchers works and of how the scientific production of researchers or research groups evolves. However, the identification of collaborations in large scientific databases is not a trivial task, given the high computational cost of the methods commonly used. This paper proposes a method for identifying collaborations in a large database of researcher curricula. The proposed method has low computational cost and yields satisfactory results, proving to be an interesting alternative for the modeling and characterization of large scientific collaboration networks.
Keywords: extraction, data integration, information retrieval, scientific collaboration
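A minimal sketch of how coauthorship edges can be extracted from publication records by pairing the authors of each paper; the record structure and names are illustrative assumptions, and the name deduplication and disambiguation issues that make the problem hard at scale are not handled here.

```python
from itertools import combinations
from collections import Counter

# Assumed publication records: each entry is the author list of one paper
papers = [
    ["a. silva", "b. costa", "c. souza"],
    ["a. silva", "c. souza"],
    ["b. costa", "d. lima"],
]

def coauthorship_edges(papers):
    """Count how many papers each unordered pair of authors shares."""
    edges = Counter()
    for authors in papers:
        for pair in combinations(sorted(set(authors)), 2):
            edges[pair] += 1
    return edges

for (u, v), weight in coauthorship_edges(papers).most_common():
    print(f"{u} -- {v}: {weight} joint paper(s)")
```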