Search results for: double nearest proportion feature extraction

4585 Filling the Gap of Extraction of Digital Evidence from Emerging Platforms Without Forensics Tools

Authors: Yi Anson Lam, Siu Ming Yiu, Kam Pui Chow

Abstract:

Digital evidence has been tendered to courts at an exponential rate in recent years. As an industrial practice, most digital evidence is extracted and preserved using specialized and well-accepted forensics tools. On the other hand, advances in technology have enabled the creation of emerging platforms such as Telegram and Signal. Existing (well-accepted) forensics tools were not designed to extract evidence from these emerging platforms. While new forensics tools require a significant amount of time and effort to be developed and verified, this paper addresses how to fill this gap with quick-fix alternative methods for digital evidence collection (e.g., based on APIs provided by the apps) and discusses issues related to the admissibility of such evidence in court, drawing on the stance of international courts and the circumstances under which digital evidence collected through these alternatives has been accepted.

Keywords: extraction, digital evidence, laws, investigation

Procedia PDF Downloads 62
4584 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases that affect humankind, exposing millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for diagnosis of hepatitis based on an interval Type-II fuzzy system. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the preprocessing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling, implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first built a Type-I fuzzy system, which achieved an accuracy of approximately 90.9%. In this system, the process of diagnosis faces vagueness and uncertainty in the final decision; this imprecise knowledge was therefore managed using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy can diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.

Keywords: hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection

Procedia PDF Downloads 302
4583 Structure Analysis of Text-Image Connection in Jalayrid Period Illustrated Manuscripts

Authors: Mahsa Khani Oushani

Abstract:

Text and image are two important elements in Iranian art; the text component and the image component have always been manifested together. The image narrates the text, the text shapes the formation of the image, and the two are closely related. In the tradition of Iranian manuscript arrangement, the connection between text and image is an interactive, two-way connection. The interaction between the narrative description and the image scene results from a direct and close connection between text and image, which has a descriptive aspect in addition to its decorative one. This article discusses the connection between the text element and the image element and its relation to the theory of Roland Barthes, the structuralist theorist. The study investigates how the connection between text and image in illustrated manuscripts of the Jalayrid period is defined according to Barthes’ theory, and what kind of proportion the artist created in the composition between text and image. Based on the results of reviewing the data of this study, it can be inferred that in the Jalayrid period the image has a referential connection: although it is of major importance on the page, it maintains a close connection with the text and is placed in a particular proportion to it, which is not necessarily balanced or symmetrical and sometimes uses imbalance for composition. This research was carried out using a descriptive-analytical method based on library sources.

Keywords: structure, text, image, Jalayrid, painter

Procedia PDF Downloads 227
4582 Role of Zinc in Catch-Up Growth of Low-Birth Weight Neonates

Authors: M. A. Abdel-Wahed, Nayera Elmorsi Hassan, Safaa Shafik Imam, Ola G. El-Farghali, Khadija M. Alian

Abstract:

Low birth weight is a challenging public health problem. Aim: to clarify the role of zinc in enhancing catch-up growth of low-birth-weight neonates and to examine the relationship between the effect of zinc on growth and the main growth hormone mediator, IGF-1. Methods: The study is a double-blind, randomized, placebo-controlled trial conducted on low-birth-weight neonates delivered at Ain Shams University Maternity Hospital. It comprised 200 low-birth-weight neonates selected from those admitted to the NICU. Neonates were randomly allocated into one of two groups: group I: low-birth-weight, AGA or SGA, on oral zinc therapy at a dose of 10 mg/day; group II: low-birth-weight, AGA or SGA, on placebo. Anthropometric measurements were taken, including birth weight, length; head, waist, chest and mid-upper arm circumferences; and triceps and sub-scapular skin-fold thicknesses. Results: At the 12-month-old follow-up visit, mean weight, length, head circumference (HC), waist, chest and mid-upper arm circumferences and triceps were significantly higher among infants of group I compared to those of group II, as were the proportions of infants with values ≥ 10th percentile for weight, length and HC. Oral zinc therapy was associated with 24.88%, 25.98% and 19.6% higher proportions of values ≥ 10th percentile for weight, length and HC at the 12-month-old visit, respectively [NNT = 4, 4 and 5, respectively]. Median IGF-1 levels measured at 6 months were significantly higher in group I compared to group II (median (range): 90 (19 – 130) ng/ml vs. 74 (21 – 130) ng/ml, respectively, p=0.023). Conclusion: Oral zinc therapy in low-birth-weight neonates was associated with significantly more catch-up growth at 12 months old and significantly higher serum IGF-1 at 6 months old.
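
The reported NNT values follow directly from the percentage differences quoted above, assuming those figures are absolute risk differences, via the standard relation:

```latex
\mathrm{NNT} = \frac{1}{\mathrm{ARR}}:\qquad \frac{1}{0.2488}\approx 4,\qquad \frac{1}{0.2598}\approx 4,\qquad \frac{1}{0.196}\approx 5.
```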

Keywords: low-birth-weight, zinc, catch-up growth, neonates

Procedia PDF Downloads 412
4581 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection, feature extraction, then general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand and then detecting the times at which each selected appliance changes state. In order to fit with the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software that uses behaviour simulation of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect and facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data compared to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and those signatures are used to tune general model types for appliance identification using unsupervised algorithms. This method is evaluated using both simulated data from LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
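
As a rough illustration of the unsupervised matching step, the sketch below computes a minimal DTW distance between a detected power-event segment and a stored appliance signature; the segment values and the appliance are hypothetical and are not taken from the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Minimal dynamic time warping distance between two 1-D power sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical 1/60 Hz power segments (watts): a detected event vs. a stored fridge signature.
event = np.array([2, 3, 120, 118, 119, 117, 5, 3], dtype=float)
fridge_signature = np.array([2, 115, 118, 120, 116, 4], dtype=float)
print(dtw_distance(event, fridge_signature))
```

Smaller distances would indicate a better match between the detected event and the candidate appliance model.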

Keywords: general appliance model, non intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 76
4580 A Technique for Image Segmentation Using K-Means Clustering Classification

Authors: Sadia Basar, Naila Habib, Awais Adnan

Abstract:

The paper presents a technique for image segmentation using K-means clustering classification. Previously presented algorithms were task-specific, missed neighboring information and required high-speed computing machines to run the segmentation. Clustering is the process of partitioning a group of data points into a small number of clusters. The proposed method is a content-aware feature extraction method that is able to run on low-end machines, uses a simple algorithm, requires only low-quality streaming, is efficient and can be used for security purposes. It has the capability to highlight the boundary and the object. First, the user enters the data as the input representation. Next, the digital image is converted into clusters, and the clusters are divided into many regions. Clusters belonging to the same categories, with the same features, are assembled within a group, and different clusters are placed in other groups. Finally, the clusters are combined with respect to similar features and represented in the form of segments. The clustered image gives a clear representation of the digital image that highlights its regions and boundaries. The final image is presented in the form of segments, with all colors of the image separated into clusters.
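
The colour-clustering step described above can be sketched as follows; this is a minimal illustration using scikit-learn rather than the authors' implementation, and the image size and cluster count are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_image(rgb, k=4, seed=0):
    """Cluster pixel colours with k-means and return a label map (one segment id per pixel)."""
    h, w, c = rgb.shape
    pixels = rgb.reshape(-1, c).astype(float)
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(pixels)
    return labels.reshape(h, w)

# Hypothetical 64x64 RGB image built from random noise, just to exercise the function.
image = np.random.randint(0, 256, size=(64, 64, 3))
segments = segment_image(image, k=3)
print(segments.shape, np.unique(segments))
```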

Keywords: clustering, image segmentation, K-means function, local and global minimum, region

Procedia PDF Downloads 369
4579 Photoluminescence in Cerium Doped Fluorides Prepared by Slow Precipitation Method

Authors: Aarti Muley, S. J. Dhoblae

Abstract:

CaF₂ and BaF₂ doped with cerium were prepared by the slow precipitation method with different molar concentrations and different cerium concentrations. Both samples were also prepared by the direct method for comparison. The XRD of BaF₂:Ce shows that it crystallizes to a BCC structure; the peaks match JCPDS file no. 4-0452. The XRD pattern of CaF₂:Ce matches well with JCPDS file number 75-0363 and also crystallizes to the BCC phase. In CaF₂, double-humped photoluminescence spectra were observed at 320 nm and 340 nm when the sample was prepared by the direct precipitation method, and the ratio between these peaks is unity. However, when the sample was prepared by the slow precipitation method, the double-humped emission spectra of CaF₂:Ce were observed at 323 nm and 340 nm. The ratio between these peaks is 0.58, and the optimum concentration is obtained for 0.1 molar CaF₂ with a Ce concentration of 1.5%. When the cerium concentration is increased to 2%, the peak at 323 nm vanishes and the emission is observed at 342 nm with a shoulder at 360 nm; in this case, the intensity reduces drastically. The excitation is observed at 305 nm with a small peak at 254 nm. One molar BaF₂ doped with 0.1% cerium synthesized by the direct precipitation method gives double-humped spectra at 308 nm and 320 nm, whereas when it is prepared by the slow precipitation method with cerium concentrations of 0.05 m%, 0.1 m%, 0.15 m% and 0.2 m%, a broad emission is observed around 325 nm with a shoulder at 350 nm. The excitation spectra are narrow and observed at 290 nm. As the percentage of cerium is increased further, a shift is again observed: the emission spectra appear at 360 nm with a small peak at 330 nm. This shifting of the emission spectra at low cerium concentrations can be directly related to particle size and has also been reported for nanomaterials.

Keywords: calcium fluoride, barium fluoride, photoluminescence, slow precipitation method

Procedia PDF Downloads 103
4578 Effect of Ethanol Concentration and Enzyme Pre-Treatment on Bioactive Compounds from Ginger Extract

Authors: S. Lekhavat, T. Kajsongkram, S. Sang-han

Abstract:

Dried ginger was extracted, and the effects of ethanol concentration and enzyme pre-treatment on its bioactive compounds were investigated in a solvent extraction process. Sliced fresh ginger was dried in an oven dryer at 70 °C for 24 hours and ground to a powder whose particle size was controlled by passing it through a 20-mesh sieve. In the enzyme pre-treatment process, the ginger powder was sprayed with 1% (w/w) cellulase and then incubated at 45 °C for 2 hours, followed by the extraction process using ethanol at concentrations of 0, 20, 40, 60 and 80% (v/v), respectively. The ratio of ginger powder to ethanol was 1:9, and the extraction conditions were controlled at 80 °C for 2 hours. Bioactive compounds extracted from ginger, from either enzyme-treated or non-enzyme-treated samples, such as total phenolic content (TPC), 6-gingerol (6G), 6-shogaol (6S) and antioxidant activity (IC50 using the DPPH assay), were examined. Regardless of enzyme treatment, the results showed that 60% ethanol provided the highest TPC (20.36 GAE mg/g dried ginger), 6G (0.77%), 6S (0.036%) and the lowest IC50 (625 μg/ml) compared to other ethanol concentrations. Considering the effect of the enzyme on bioactive compounds and antioxidant activity, it was found that enzyme-treated samples had more 6G (0.17-0.77%) and 6S (0.020-0.036%) than non-enzyme-treated samples (0.13-0.77% 6G, 0.015-0.036% 6S). However, the results showed that non-enzyme-treated extracts provided higher TPC (6.76-20.36 GAE mg/g dried ginger) and lower IC50 (625-1494 μg/ml) than enzyme-treated extracts (TPC 5.36-17.50 GAE mg/g dried ginger, IC50 793-2146 μg/ml).

Keywords: antioxidant activity, enzyme, extraction, ginger

Procedia PDF Downloads 251
4577 Effect of Al Addition on Microstructure and Properties of NbTiZrCrAl Refractory High Entropy Alloys

Authors: Xiping Guo, Fanglin Ge, Ping Guan

Abstract:

Refractory high entropy alloys are alternative materials expected to be employed at high temperatures. The changes in microstructure and properties of NbTiZrCrAl refractory high entropy alloys are systematically studied here by adjusting the Al content. Five button alloy ingots with different Al contents, NbTiZrCrAlx (x = 0, 0.2, 0.5, 0.75, 1.0), were prepared by vacuum non-consumable arc melting. The microstructure analysis results show that the five alloys are composed of a BCC solid solution phase rich in Nb and Ti and a Laves phase rich in Cr, Zr and Al. The addition of Al changes the structure from hypoeutectic to hypereutectic, increases the proportion of the Laves phase, and changes its structure from cubic C15 to hexagonal C14. The hardness and fracture toughness of the five alloys were tested at room temperature, and the compressive mechanical properties were tested at 1000℃. The results show that the addition of Al increases the proportion of the Laves phase and decreases the proportion of the BCC phase, thus increasing the hardness and decreasing the fracture toughness at room temperature. At 1000℃, however, the 0.5Al and 0.75Al alloys, whose compositions are close to the eutectic point, show the best strength, indicating that the eutectic structure is of great significance for improving the high temperature strength of NbTiZrCrAl refractory high entropy alloys. The five alloys were oxidized for 1 h and 20 h in static air at 1000℃. After 1 h of oxidation at 1000℃ only the oxide film of the 0Al alloy spalls; after 20 h, the oxide films of all the alloys spall, but those of the Al-containing alloys are denser and more complete. By producing the protective oxide Al₂O₃, inhibiting the preferential oxidation of Zr, promoting the preferential oxidation of Ti, and promoting the combination of Cr₂O₃ and Nb₂O₅ to form CrNbO₄, Al significantly improves the high temperature oxidation resistance of NbTiZrCrAl refractory high entropy alloys.

Keywords: NbTiZrCrAl, refractory high entropy alloy, Al content, microstructural evolution, room temperature mechanical properties, high temperature compressive strength, oxidation resistance

Procedia PDF Downloads 82
4576 Analysis of the Gait Characteristics of Soldiers between the Normal and Loaded Gait

Authors: Ji-il Park, Min Kyu Yu, Jong-woo Lee, Sam-hyeon Yoo

Abstract:

The purpose of this research is to analyze the gait strategy of the normal and loaded gait. To this end, five male participants walked under two conditions: normal gait and loaded gait (backpack load of 25.2 kg). As expected, results showed that the additional load elicited not only a proportional increase in the vertical and shear ground reaction force (GRF) parameters but also an increase in the impulse, momentum and mechanical work. In the loaded gait, however, the duration of the double support phase increased unexpectedly. This is because the double support phase, which is more stable than the single support phase, can reduce the instability of the loaded gait. Also, the pre-collision and post-collision directions were shifted upward and downward compared to the normal gait. As a result, regardless of the additional backpack load, the impulse-momentum diagram during the step-to-step transition was maintained as in the normal gait. This means that humans walk efficiently so as to keep stability and minimize the total net work in the loaded gait.

Keywords: normal gait, loaded gait, impulse, collision, gait analysis, mechanical work, backpack load

Procedia PDF Downloads 285
4575 Optimization and Vibration Suppression of Double Tuned Inertial Mass Damper of Damped System

Authors: Chaozhi Yang, Xinzhong Chen, Guoqing Huang

Abstract:

An inerter is a two-terminal inertial element that can produce an apparent mass far larger than its physical mass. A double tuned inertial mass damper (DTIMD) is developed by combining a spring with an inerter and a dashpot in series to replace the viscous damper of a tuned mass damper (TMD), and its performance is investigated. Firstly, the DTIMD is optimized numerically with H∞ and H2 methods, considering the system’s damping, based on the single-degree-of-freedom (SDOF)-DTIMD system, and the optimal structural parameters are obtained. Then, compared with a TMD, the control effect of the DTIMD with the optimal structural parameters on the wind-induced vibration of a wind turbine in the downwind direction under the shutdown condition is studied. The results demonstrate that the vibration suppression of the DTIMD is superior to that of a TMD at the same mass ratio, and that for identical vibration suppression, the tuned mass of the DTIMD can be reduced by up to 40% compared with a TMD.

Keywords: wind-induced vibration, vibration control, inerter, tuned mass damper, damped system

Procedia PDF Downloads 156
4574 Noninvasive Brain-Machine Interface to Control Both Mecha TE Robotic Hands Using Emotiv EEG Neuroheadset

Authors: Adrienne Kline, Jaydip Desai

Abstract:

Electroencephalography (EEG) is a noninvasive technique that registers signals originating from the firing of neurons in the brain. The Emotiv EEG Neuroheadset is a consumer product comprising 14 EEG channels and was used to record the reactions of neurons within the brain to two forms of stimuli in 10 participants. These stimuli consisted of auditory and visual formats that provided directions of ‘right’ or ‘left.’ Participants were instructed to raise their right or left arm in accordance with the instruction given. A scenario in OpenViBE was generated to stimulate the participants while recording their data. In OpenViBE, the Graz Motor BCI Stimulator algorithm was configured to govern the duration and number of visual stimuli. Using EEGLAB under the cross-platform MATLAB®, the electrodes most stimulated during the study were identified. Data outputs from EEGLAB were analyzed using IBM SPSS Statistics® Version 20. This aided in determining the electrodes to use in the development of a brain-machine interface (BMI) based on real-time EEG signals from the Emotiv EEG Neuroheadset. Signal processing and feature extraction were accomplished via the Simulink® signal processing toolbox. An Arduino™ Duemilanove microcontroller was used to link the Emotiv EEG Neuroheadset and the right and left Mecha TE™ Hands.
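
The feature-extraction stage is described only at the level of the Simulink® signal processing toolbox; as a rough Python analogue, a band-power feature of the kind commonly used for motor-related EEG could be computed as below. The 128 Hz sampling rate and the band limits are assumptions for the sketch, not values reported in the paper.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average power of one EEG channel inside a frequency band, via Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])  # integrate the PSD over the band

fs = 128  # assumed sampling rate
t = np.arange(0, 4, 1 / fs)
channel = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic channel
mu = band_power(channel, fs, (8, 12))     # mu band, linked to motor activity
beta = band_power(channel, fs, (13, 30))  # beta band
print(mu, beta)
```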

Keywords: brain-machine interface, EEGLAB, emotiv EEG neuroheadset, OpenViBE, simulink

Procedia PDF Downloads 493
4573 From User's Requirements to UML Class Diagram

Authors: Zeineb Ben Azzouz, Wahiba Ben Abdessalem Karaa

Abstract:

The automated extraction of UML class diagrams from natural language requirements is a highly challenging task. Many approaches, frameworks and tools have been presented in this field. Nonetheless, experiments with these tools have shown that no single approach works best all the time. In this context, we propose a new, accurate approach to facilitate the automatic mapping from textual requirements to UML class diagrams. Our approach integrates the best properties of statistical Natural Language Processing (NLP) techniques to reduce ambiguity when analysing natural language requirements text. In addition, it follows the best practices defined by conceptual modelling experts to determine the patterns indispensable for the extraction of the basic elements and concepts of the class diagram. Once the relevant class diagram information is captured, an XMI document is generated and imported into a CASE tool to build the corresponding UML class diagram.
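
One pattern of the kind alluded to above, mapping nouns to candidate classes and verbs to candidate associations, can be sketched as follows. The tiny hand-written POS lexicon stands in for the statistical NLP tagger the approach relies on, and the pattern shown is illustrative rather than the authors' rule set.

```python
import re

# Minimal POS lexicon covering only the demo sentence (a real system would use a statistical tagger).
POS = {"customer": "NOUN", "places": "VERB", "order": "NOUN", "contains": "VERB",
       "several": "ADJ", "items": "NOUN", "a": "DET", "an": "DET", "that": "PRON"}

def candidate_model(requirement):
    """Map nouns to candidate UML classes and verbs to candidate associations."""
    tokens = re.findall(r"[a-zA-Z]+", requirement.lower())
    classes = [t.capitalize() for t in tokens if POS.get(t) == "NOUN"]
    associations = [t for t in tokens if POS.get(t) == "VERB"]
    return classes, associations

print(candidate_model("A customer places an order that contains several items."))
# (['Customer', 'Order', 'Items'], ['places', 'contains'])
```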

Keywords: class diagram, user’s requirements, XMI, software engineering

Procedia PDF Downloads 466
4572 Strengthening Strategy across Languages: A Cognitive and Grammatical Universal Phenomenon

Authors: Behnam Jay

Abstract:

In this study, the phenomenon called “Strengthening” in human language refers to the strategic use of multiple linguistic elements to intensify specific grammatical or semantic functions. This study explores cross-linguistic evidence demonstrating how strengthening appears in various grammatical structures. In French and Spanish, double negatives are used not to cancel each other out but to intensify the negation, challenging the conventional understanding that double negatives result in an affirmation. For example, in French, il ne sait pas (He doesn't know.) uses both “ne” and “pas” to strengthen the negation. Similarly, in Spanish, No vio a nadie. (He didn't see anyone.) uses “no” and “nadie” to achieve a stronger negative meaning. In Japanese, double honorifics, often perceived as erroneous, are reinterpreted as intentional efforts to amplify politeness, as seen in forms like ossharareru (to say, (honorific)). Typically, an honorific morpheme appears only once in a predicate, but native speakers often use double forms to reinforce politeness. In Turkish, the word eğer (indicating a condition) is sometimes used together with the conditional suffix -se(sa) within the same sentence to strengthen the conditional meaning, as in Eğer yağmur yağarsa, o gelmez. (If it rains, he won't come). Furthermore, the combination of question words with rising intonation in various languages serves to enhance interrogative force. These instances suggest that strengthening is a cross-linguistic strategy that may reflect a broader cognitive mechanism in language processing. This paper investigates these cases in detail, providing insights into why languages may adopt such strategies. No corpus was used to collect examples from different languages. Instead, the examples were gathered from languages the author encountered during their research, focusing on specific grammatical and morphological phenomena relevant to the concept of strengthening. Given the complexity of employing a comparative method across multiple languages, this approach was chosen to illustrate common patterns of strengthening based on available data. It is acknowledged that different languages may have different strengthening strategies in various linguistic domains. While the primary focus is on grammar and morphology, it is recognized that the strengthening phenomenon may also appear in phonology. Future research should aim to include a broader range of languages and utilize more comprehensive comparative methods where feasible to enhance methodological rigor and explore this phenomenon more thoroughly.

Keywords: strengthening, cross-linguistic analysis, syntax, semantics, cognitive mechanism

Procedia PDF Downloads 10
4571 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application

Authors: Ramesh P., Aby Joseph

Abstract:

Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as avoiding long DC cables, eliminating module mismatch losses, minimizing the partial shading effect, and improving safety and flexibility in installations. Due to these benefits, renewable energy technology based on the solar PV micro inverter is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, the concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300Wₚ single phase PV micro inverter. The paper investigates two different topologies for PV micro inverters: on the one hand, a single-stage flyback/forward PV micro inverter configuration, and on the other hand, a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques to reduce the input filter capacitor size needed to buffer the double line (100 Hz) ripple energy and to eliminate the use of electrolytic capacitors. The propagation of the double line oscillation reflected back to the PV module affects the Maximum Power Point Tracking (MPPT) performance and distorts the grid current. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double line ripple to the PV side, improving MPPT performance, and to the grid side, improving current quality. The power hardware topology accepts a wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and film capacitors with a long lifespan. The digital controller hardware platform, with built-in external peripheral interfaces, is developed using the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in C and developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, the prototype hardware for the single phase PV micro inverter with the double-stage configuration was developed, and a comparative analysis between the above-mentioned configurations, with experimental results, will be presented.
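
The paper's independent MPPT algorithm is not specified in the abstract; as a generic point of reference, a basic perturb-and-observe update (not necessarily the authors' method) looks like this, with made-up voltage and power readings.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One perturb-and-observe MPPT iteration: return the next PV voltage reference."""
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        return v
    # Keep perturbing in the direction that increased the extracted power.
    if (dp > 0 and dv > 0) or (dp < 0 and dv < 0):
        return v + step
    return v - step

# Hypothetical successive measurements (volts, watts) from the PV module.
v_ref = perturb_and_observe(v=30.5, p=290.0, v_prev=30.0, p_prev=285.0)
print(v_ref)  # 31.0: power rose with voltage, so the reference keeps moving upward
```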

Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling

Procedia PDF Downloads 128
4570 Assessment of the Biological Nitrogen Fixation in Soybean Sown in Different Types of Moroccan Soils

Authors: F. Z. Aliyat, B. Ben Messaoud, L. Nassiri, E. Bouiamrine, J. Ibijbijen

Abstract:

The present study aims to assess the biological nitrogen fixation of soybean grown in different Moroccan soils combined with rhizobial inoculation. These effects were evaluated through plant growth, mainly aerial biomass production, total nitrogen content and the proportion of nitrogen fixed. This assessment clearly shows that inoculation with bacteria increases the growth of soybean. Five different soils and a control (peat) were used. The rhizobial inoculation was performed by applying peat containing a mixture of two strains, Sinorhizobium fredii HH103 and Bradyrhizobium. The biomass, the total nitrogen content and the proportion of nitrogen fixed were evaluated under the different treatments. The assay was carried out in the greenhouse of the Faculty of Sciences, Moulay Ismail University. Soybean showed a strong response for the parameters assessed, and the best response was observed in the inoculated plants compared to the non-inoculated plants and the absolute control. Finally, good production and effective biological nitrogen fixation represent an important ecological technology for improving the sustainable production of soybean and increasing soil fertility.

Keywords: biological nitrogen fixation, inoculation, rhizobium, soybean

Procedia PDF Downloads 167
4569 Observer-Based Control Design for Double Integrators Systems with Long Sampling Periods and Actuator Uncertainty

Authors: Tomas Menard

Abstract:

The design of control laws for engineering systems has been investigated for many decades. While many results are concerned with continuous systems with continuous output, nowadays many controlled systems have to transmit their output measurements through a network, making the output discrete-time. It is well known that sampling the output of a system whose control law is based on the continuous output may render the system unstable, especially when the sampling period is long compared to the system dynamics. The control design then has to be adapted in order to cope with this issue. In this paper, we consider systems which can be modeled as a double integrator with uncertainty on the input, since many mechanical systems can be put in such a form. We present a control scheme based on an observer that uses only discrete-time measurements and provides a continuous-time estimate of the state, combined with a continuous control law, which stabilizes a system with second-order dynamics even in the presence of uncertainty. It is further shown that arbitrarily long sampling periods can be dealt with by properly setting the control scheme parameters.
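
A minimal sketch of the continuous-prediction/discrete-correction structure described above is shown below for a double integrator with a constant input uncertainty. The gains, sampling period and feedback law are chosen purely for illustration and are not the construction from the paper.

```python
import numpy as np

# Double integrator: x1' = x2, x2' = u + d, with only x1 measured at sparse sampling instants.
dt_sim = 0.001                 # integration step for the continuous-time dynamics
T_sample = 0.5                 # long output sampling period (illustrative)
L = np.array([0.8, 0.6])       # output-injection gains, chosen for illustration only

def f(x, u):
    return np.array([x[1], u])

x = np.array([1.0, 0.0])       # true state
x_hat = np.array([0.0, 0.0])   # observer estimate
t, next_sample = 0.0, 0.0
while t < 10.0:
    u = -2.0 * x_hat[0] - 3.0 * x_hat[1]          # continuous feedback on the estimate
    if t >= next_sample:                          # a discrete-time measurement arrives
        y = x[0]
        x_hat = x_hat + L * (y - x_hat[0])        # correct the estimate with output injection
        next_sample += T_sample
    x = x + dt_sim * f(x, u + 0.05)               # 0.05 models the constant input uncertainty
    x_hat = x_hat + dt_sim * f(x_hat, u)          # continuous prediction between samples
    t += dt_sim
print(x, x_hat)  # bounded response; the unmodeled constant input leaves a small steady offset
```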

Keywords: dynamical system, control law design, sampled output, observer design

Procedia PDF Downloads 182
4568 Dual-Rail Logic Unit in Double Pass Transistor Logic

Authors: Hamdi Belgacem, Fradi Aymen

Abstract:

In this paper we present a low power, low cost differential logic unit (LU). The proposed LU receives dual-rail inputs and generates dual-rail outputs. The proposed circuit can be used in the Arithmetic and Logic Unit (ALU) of a processor. It can also be dedicated to self-checking applications based on the dual duplication code. Four logic functions, as well as their inverses, are implemented within a single logic unit. The hardware overhead for the implementation of the proposed LU is lower than that required for a standard LU implemented in standard CMOS logic style. This new implementation is attractive as fewer transistors are required to implement important logic functions: the proposed differential logic unit can perform 8 Boolean logical operations using only 16 transistors. Spice simulations in a 32 nm technology were used to evaluate the performance of the proposed circuit and to confirm its acceptable electrical behaviour.

Keywords: differential logic unit, double pass transistor logic, low power CMOS design, low cost CMOS design

Procedia PDF Downloads 447
4567 Analysis of Vocal Fold Vibrations from High-Speed Digital Images Based on Dynamic Time Warping

Authors: A. I. A. Rahman, Sh-Hussain Salleh, K. Ahmad, K. Anuar

Abstract:

Analysis of vocal fold vibration is essential for understanding the mechanism of voice production and for improving the clinical assessment of voice disorders. This paper presents a Dynamic Time Warping (DTW) based approach to analyze and objectively classify vocal fold vibration patterns. The proposed technique was designed and implemented on the Glottal Area Waveform (GAW) extracted from high-speed laryngeal images by delineating the glottal edges in each image frame. Feature extraction from the GAW was performed using Linear Predictive Coding (LPC). Several types of voice reference templates, from simulations of clear, breathy, fry, pressed and hyperfunctional voice productions, were used. The patterns of the reference templates were first verified using the analytical signal generated through the Hilbert transformation of the GAW. Samples from normal speakers’ voice recordings were then used to evaluate and test the effectiveness of this approach. The classification of the voice patterns using LPC and DTW gave an accuracy of 81%.
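
The LPC feature-extraction step can be illustrated with a small autocorrelation-method (Levinson-Durbin) routine; the synthetic waveform and model order below are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def lpc_coefficients(signal, order):
    """LPC via the autocorrelation method (Levinson-Durbin recursion)."""
    x = np.asarray(signal, dtype=float)
    r = [float(np.dot(x[: len(x) - k], x[k:])) for k in range(order + 1)]
    a = [1.0]
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= 1.0 - k * k
    return np.array(a)

# Hypothetical glottal area waveform: a half-rectified 110 Hz cycle with a little noise.
t = np.linspace(0, 1, 2000)
gaw = np.clip(np.sin(2 * np.pi * 110 * t), 0, None) + 0.02 * np.random.randn(t.size)
print(lpc_coefficients(gaw, order=12))
```

Per-frame coefficient vectors of this kind would then be compared against the reference templates with DTW.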

Keywords: dynamic time warping, glottal area waveform, linear predictive coding, high-speed laryngeal images, Hilbert transform

Procedia PDF Downloads 237
4566 First-Principles Calculations and Thermo-Calc Study of the Elastic and Thermodynamic Properties of Ti-Nb-Zr-Ta Alloy for Biomedical Applications

Authors: M. Madigoe, R. Modiba

Abstract:

Highly alloyed beta (β) phase-stabilized titanium alloys are known to have a low elastic modulus comparable to that of human bone (≈30 GPa). The β phase in titanium alloys exhibits an elastic Young’s modulus of about 60-80 GPa, which is nearly half that of the α-phase (100-120 GPa). In this work, a theoretical investigation of the structural and thermodynamic stability, as well as the elastic properties, of a quaternary Ti-Nb-Ta-Zr alloy is presented, with the aim of lowering Young’s modulus. The structural stability and elastic properties of the alloy were evaluated using the first-principles approach within the density functional theory (DFT) framework implemented in the CASTEP code. The elastic properties include the bulk modulus B, elastic Young’s modulus E, shear modulus cʹ and Poisson’s ratio ν. The thermodynamic stability, as well as the fraction of the β phase in the alloy, was evaluated using the Thermo-Calc software package. Thermodynamic properties such as the Gibbs free energy of formation (ΔG⁰f) and the enthalpy of formation will be presented in addition to phase proportion diagrams. The stoichiometric compositions of the alloys are Ti-Nbx-Ta5-Zr5 (x = 5, 10, 20, 30, 40 at.%). An optimum alloy composition must satisfy the Born stability criteria and also possess a low elastic Young’s modulus. In addition, the alloy must be thermodynamically stable, i.e., ΔG⁰f < 0.
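
For reference, the Born mechanical stability conditions for a cubic lattice (the standard form relevant to the β BCC phase considered here) can be written in terms of the elastic constants as:

```latex
C_{11} - C_{12} > 0, \qquad C_{44} > 0, \qquad C_{11} + 2C_{12} > 0,
\qquad B = \tfrac{1}{3}\left(C_{11} + 2C_{12}\right), \qquad C' = \tfrac{1}{2}\left(C_{11} - C_{12}\right).
```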

Keywords: elastic modulus, phase proportion diagram, thermo-calc, titanium alloys

Procedia PDF Downloads 179
4565 Magnetic Nano-Composite of Self-Doped Polyaniline Nanofibers for Magnetic Dispersive Micro Solid Phase Extraction Applications

Authors: Hatem I. Mokhtar, Randa A. Abd-El-Salam, Ghada M. Hadad

Abstract:

An improved nano-composite of self-doped polyaniline nanofibers and silica-coated magnetite nanoparticles was prepared and evaluated for suitability for magnetic dispersive micro solid-phase extraction. The work focused on optimizing the capacity of the composite to extract four fluoroquinolone (FQ) antibiotics, ciprofloxacin, enrofloxacin, danofloxacin and difloxacin, from water, and on improving the stability of the composite towards acid and atmospheric degradation. Self-doped polyaniline nanofibers were prepared by oxidative co-polymerization of aniline with anthranilic acid. Magnetite nanoparticles were prepared by alkaline co-precipitation and coated with silica by silicate hydrolysis on the magnetite nanoparticle surface at pH 6.5. The composite was formed by self-assembly by mixing dispersions of self-doped polyaniline nanofibers and silica-coated magnetite nanoparticles in ethanol. The composite structure was confirmed by transmission electron microscopy (TEM). The self-doped polyaniline nanofiber and magnetite chemical structures were confirmed by FT-IR, while the silica coating of the magnetite was confirmed by Energy Dispersive X-ray Spectroscopy (EDS). The improved stability of the magnetic component of the composite was evidenced by its resistance to degradation in 2 N HCl solution. The adsorption capacity of the self-doped polyaniline nanofiber based composite was higher than that of the previously reported corresponding composite prepared from polyaniline nanofibers instead of self-doped polyaniline nanofibers. The adsorption-pH profile for the studied FQs on the prepared composite revealed that the best pH for adsorption was in the range of 6.5 to 7. The best extraction recovery values were obtained at pH 7 using phosphate buffer. The best solvent for FQ desorption was found to be 0.1 N HCl in methanol:water (8:2; v/v). A 20 mL water sample spiked with the studied FQs was preconcentrated using 4.8 mg of the composite, and the resulting extracts were analysed by an HPLC-UV method. The prepared composite represents a suitable adsorbent phase for magnetic dispersive micro solid-phase extraction applications.

Keywords: fluoroquinolones, magnetic dispersive micro extraction, nano-composite, self-doped polyaniline nanofibers

Procedia PDF Downloads 119
4564 Oral Betahistine Versus Intravenous Diazepam in Acute Peripheral Vertigo: A Randomized, Double-Blind Controlled Trial

Authors: Saeed Abbasi, Davood Farsi, Soudabeh Shafiee Ardestani, Neda Valizadeh

Abstract:

Objectives: Peripheral vertigo is a common complaint of patients visiting emergency departments. In this study, we aimed to evaluate the effect of oral betahistine vs. intravenous diazepam for the treatment of acute peripheral vertigo, and to assess the possibility of substituting the parenteral drug with an oral one with fewer side effects. Materials and Methods: In this randomized, double-blind study, 101 patients were enrolled and divided into two groups in a double-blind, randomized manner. Group A took oral placebo and 10 mg of intravenous diazepam. Group B received 8 mg of oral betahistine and intravenous placebo. Patients’ symptoms and signs (vertigo severity, nausea, vomiting, nystagmus and gait) were evaluated at 0, 2, 4 and 6 hours by emergency physicians, and data were collected with a questionnaire. Results: In both groups, there was significant improvement in vertigo (betahistine group P=0.02 and diazepam group P=0.03). Analysis showed more improvement in vertigo severity after 4 hours of treatment in the betahistine group compared to the diazepam group (P=0.02). Nausea and vomiting were significantly lower in patients receiving diazepam after 2 and 6 hours (P=0.02 & P=0.03). No statistically significant differences were found between the groups in nystagmus, equilibrium and vertigo duration. Conclusion: The results of this randomized trial showed that both drugs had acceptable therapeutic effects in peripheral vertigo, although betahistine was significantly more efficacious after 4 hours of drug intake. Given the higher rates of nausea and vomiting in the betahistine group, physicians should consider these side effects before prescribing.

Keywords: acute peripheral vertigo, betahistine, diazepam, emergency department

Procedia PDF Downloads 382
4563 Application of Liquid Emulsion Membrane Technique for the Removal of Cadmium(II) from Aqueous Solutions Using Aliquat 336 as a Carrier

Authors: B. Medjahed, M. A. Didi, B. Guezzen

Abstract:

In the present work, the emulsion liquid membrane (ELM) technique was applied to the extraction of cadmium(II) present in aqueous samples. Aliquat 336 (tri-N-octylmethylammonium chloride) was used as the carrier to extract cadmium(II). The main objective of this work is to investigate the influence of the various parameters affecting ELM formation and stability, and to test the performance of the prepared ELM on the removal of cadmium using synthetic solutions with different concentrations. Experiments were conducted to optimize the pH of the feed solution, and it was found that cadmium(II) can be extracted at pH 6.5. The influence of the carrier concentration and the treat ratio on the extraction process was investigated. The obtained results showed that the optimal values are 3% Aliquat 336 and a feed-to-emulsion ratio of 1:1, respectively.

Keywords: cadmium, carrier, emulsion liquid membrane, surfactant

Procedia PDF Downloads 402
4562 Iterative Panel RC Extraction for Capacitive Touchscreen

Authors: Chae Hoon Park, Jong Kang Park, Jong Tae Kim

Abstract:

The electrical characteristics of a capacitive touchscreen need to be accurately analyzed to achieve better performance in multi-channel capacitance sensing. In this paper, we extracted the panel resistances and capacitances of the touchscreen by comparing measurement data with model data. By employing a lumped RC model for the driver-to-receiver paths in the touchscreen, we estimated resistance and capacitance values according to the physical lengths of the channel paths, to which the lumped RC values are proportional. As a result, we obtained a model with 95.54% accuracy with respect to the measurement data.
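
A toy version of the proportional-to-length fitting idea is sketched below with made-up length and RC values; it is not the authors' iterative extraction procedure, only an illustration of estimating per-length constants and checking residuals against measurements.

```python
import numpy as np

# Hypothetical per-channel data: physical path length (mm) and measured lumped R (ohm) and C (pF).
lengths = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
r_meas = np.array([105.0, 198.0, 305.0, 402.0, 495.0])
c_meas = np.array([2.1, 4.0, 6.1, 7.9, 10.2])

# Under the proportionality assumption R = r0 * L and C = c0 * L, fit the per-length constants.
r0 = np.linalg.lstsq(lengths.reshape(-1, 1), r_meas, rcond=None)[0][0]
c0 = np.linalg.lstsq(lengths.reshape(-1, 1), c_meas, rcond=None)[0][0]
print(f"R per mm = {r0:.2f} ohm, C per mm = {c0:.3f} pF")

# Compare model against measurement; large residuals would drive further refinement.
print("max R residual (ohm):", np.abs(r_meas - r0 * lengths).max())
```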

Keywords: electrical characteristics of capacitive touchscreen, iterative extraction, lumped RC model, physical lengths of channel paths

Procedia PDF Downloads 331
4561 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor intensive, time-consuming and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAS tools, GIS, ENVI and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM) and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used in the extraction of forest cover classes with ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above Ground Biomass (AGB) and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. Based on the extracted canopy heights, 80% of the trees range from 12 m to 17 m in height. The CS of the three forest covers based on the AGB were 20819.59 kg/20x20 m for closed broadleaf, 8609.82 kg/20x20 m for broadleaf plantation and 15545.57 kg/20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and a high CS.
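
The CHM derivative mentioned above is conventionally the difference between the surface model and the terrain model; a minimal sketch with assumed toy rasters is shown below (the 5 m canopy threshold is illustrative, not the study's).

```python
import numpy as np

# Assumed inputs: LiDAR-derived digital surface model (DSM) and digital terrain model (DTM)
# already loaded as co-registered elevation arrays in metres (toy 3x3 examples here).
dsm = np.array([[215.0, 218.5, 220.1],
                [214.2, 219.0, 221.4],
                [213.8, 216.7, 219.9]])
dtm = np.array([[202.0, 203.1, 204.0],
                [201.8, 202.9, 204.2],
                [201.5, 202.4, 203.8]])

chm = dsm - dtm                       # canopy height model: vegetation height above ground
canopy_mask = chm >= 5.0              # e.g. treat cells taller than 5 m as canopy
percent_cover = 100.0 * canopy_mask.mean()
print(chm)
print(f"canopy cover: {percent_cover:.1f}%")
```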

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 379
4560 Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks

Authors: Yong Zhao, Jian He, Cheng Zhang

Abstract:

Cardiovascular diseases caused by hypertension are extremely threatening to human health, and early diagnosis of hypertension can save a large number of lives. Traditional hypertension detection methods require special equipment and have difficulty detecting continuous blood pressure changes. In this regard, this paper first analyzes the principle of heart rate variability (HRV) and introduces a sliding window and power spectral density (PSD) to analyze the time-domain and frequency-domain features of HRV. It then designs an HRV-based hypertension prediction network combining ResNet, an attention mechanism and a multilayer perceptron: frequency-domain features are extracted through a modified ResNet18, fused with the time-domain features through an attention mechanism, and used for the auxiliary prediction of hypertension through a multilayer perceptron. Finally, the network was trained and tested on the publicly available SHAREE dataset from PhysioNet, and the test results showed that this network achieved 92.06% prediction accuracy for hypertension and outperformed K-Nearest Neighbor (KNN), Bayes, logistic regression and traditional Convolutional Neural Network (CNN) models in prediction performance.
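
The time- and frequency-domain HRV features described above can be sketched as follows; the resampling rate, window settings and LF/HF band limits are generic choices, not necessarily those used in the paper.

```python
import numpy as np
from scipy.signal import welch

def hrv_features(rr_ms, fs_resample=4.0):
    """Basic time- and frequency-domain HRV features from RR intervals given in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                          # time domain: overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))     # time domain: short-term variability
    # Resample the irregularly spaced RR series to an even grid before the PSD estimate.
    t = np.cumsum(rr) / 1000.0
    t_even = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = np.interp(t_even, t, rr)
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs_resample, nperseg=256)
    df = freqs[1] - freqs[0]
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df   # low-frequency power
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df   # high-frequency power
    return {"SDNN": sdnn, "RMSSD": rmssd, "LF/HF": lf / hf}

# Synthetic RR intervals around 800 ms with mild variability, just to exercise the function.
rr = 800 + 30 * np.sin(np.arange(300) * 0.3) + 10 * np.random.randn(300)
print(hrv_features(rr))
```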

Keywords: feature extraction, heart rate variability, hypertension, residual networks

Procedia PDF Downloads 97
4559 Separation of Mercury(II) from Petroleum Produced Water via Hollow Fiber Supported Liquid Membrane and Mass Transfer Modeling

Authors: Srestha Chaturabul, Wanchalerm Srirachat, Thanaporn Wannachod, Prakorn Ramakul, Ura Pancharoen, Soorathep Kheawhom

Abstract:

The separation of mercury(II) from petroleum-produced water from the Gulf of Thailand was carried out using a hollow fiber supported liquid membrane (HFSLM) system. The optimum parameters were 0.2 M HCl for feed pretreatment, 4% (v/v) Aliquat 336 as the extractant and 0.1 M thiourea as the stripping solution. The best percentages obtained were 99.73% for extraction and 90.11% for recovery. The overall separation efficiency was 94.92%, taking into account both extraction and recovery. The model for this separation was developed along a combined flux principle, i.e., convection–diffusion–kinetic. The results showed excellent agreement with theoretical data, with average standard deviations of 1.5% and 1.8%, respectively.

Keywords: separation, mercury(II), petroleum produced water, hollow fiber, liquid membrane

Procedia PDF Downloads 293
4558 A Survey of Feature-Based Steganalysis for JPEG Images

Authors: Syeda Mainaaz Unnisa, Deepa Suresh

Abstract:

Due to the increased usage of public domain channels, such as the internet, and of communication technology, there are concerns about the protection of intellectual property and about security threats. This interest has led to growth in researching and implementing techniques for information hiding. Steganography is the art and science of hiding information in a private manner such that its existence cannot be recognized. Communication using steganographic techniques makes not only the secret message but also the presence of the hidden communication invisible. Steganalysis is the art of detecting the presence of this hidden communication. Parallel to steganography, steganalysis is also gaining prominence, since the detection of hidden messages can prevent catastrophic security incidents from occurring. Steganalysis can also be incredibly helpful in identifying and revealing holes in current steganographic techniques that make them vulnerable to attacks. Through the formulation of new, effective steganalysis methods, further research to improve the resistance of the tested steganography techniques can be developed. The feature-based steganalysis method for JPEG images calculates the features of an image using the L1 norm of the difference between a stego image and the calibrated version of the image. This calibration can help retrieve some of the parameters of the cover image, revealing the variations between the cover and stego images and enabling more accurate detection. Applying this method to various steganographic schemes, experimental results were compared and evaluated to derive conclusions and principles for more protected JPEG steganography.
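
A much-simplified sketch of the calibration idea is shown below: the image is cropped by a few pixels and recompressed so that the JPEG block grid no longer aligns, and an L1 norm between feature vectors of the original and calibrated versions is computed. Grey-level histograms stand in here for the DCT-domain features used in practice, and the synthetic image is only a placeholder for a real suspect file.

```python
import io
import numpy as np
from PIL import Image

def calibration_feature(stego_img, quality=75):
    """Toy calibrated feature: L1 norm between the image's grey-level histogram and the
    histogram of its cropped-and-recompressed (calibrated) version."""
    img = stego_img.convert("L")
    w, h = img.size
    # Calibration: crop a few pixels so the recompressed JPEG block grid no longer aligns.
    cropped = img.crop((4, 4, w, h))
    buf = io.BytesIO()
    cropped.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    calibrated = Image.open(buf)
    hist_stego = np.array(img.histogram(), dtype=float)
    hist_cal = np.array(calibrated.histogram(), dtype=float)
    # Normalise so the different image sizes do not dominate the feature.
    return np.abs(hist_stego / hist_stego.sum() - hist_cal / hist_cal.sum()).sum()

# Synthetic stand-in for a suspect JPEG (a real analysis would load the file under test).
suspect = Image.fromarray(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
print(calibration_feature(suspect))  # larger values hint at embedding artefacts
```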

Keywords: cover image, feature-based steganalysis, information hiding, steganalysis, steganography

Procedia PDF Downloads 212
4557 Contactless Attendance System along with Temperature Monitoring

Authors: Nalini C. Iyer, Shraddha H., Anagha B. Varahamurthy, Dikshith C. S., Ishwar G. Kubasad, Vinayak I. Karalatti, Pavan B. Mulimani

Abstract:

The current pandemic scenario due to COVID-19 has raised awareness among people of the need to avoid unnecessary contact in public places. Contact with physical objects must be avoided to stop the spread of infection, so a contactless feature has to be included in systems in public places wherever possible; for example, attendance monitoring systems based on fingerprint biometrics can be replaced with a contactless alternative. Another important protocol followed in the current situation is temperature monitoring and screening. The paper describes an attendance system with a contactless feature and temperature screening for the university. The system displays a QR code to scan, which redirects to the student login web page only if the location is valid (the location where the student scans the QR code must match the location where the QR code is displayed). Once the student logs in, the temperature of the student is measured by the contactless temperature sensor (MLX90614) with an error of 0.5 °C. If the temperature falls within the desired range (the range of normal body temperature), the attendance of the student is marked as present and stored in the database, and the door opens automatically. Otherwise, the attendance is marked as absent, an alert with the measured temperature is displayed, and the door remains closed. The door is automated with the help of a servomotor. To avoid proxy attendance, IR sensors are used to count the number of students in the classroom. The hardware system, consisting of the contactless temperature sensor and IR sensors, is implemented on a NodeMCU microcontroller.

Keywords: NodeMCU, IR sensor, attendance monitoring, contactless, temperature

Procedia PDF Downloads 182
4556 On Exploring Search Heuristics for Improving the Efficiency in Web Information Extraction

Authors: Patricia Jiménez, Rafael Corchuelo

Abstract:

Nowadays the World Wide Web is the most popular source of information, relying on billions of online documents. Web mining is used to crawl through these documents, collect the information of interest and process it by applying data mining tools, so that the gathered information can be used in the best interest of a business, enabling companies to promote themselves. Unfortunately, it is not easy to extract the information a web site provides automatically when the site lacks an API that transforms the user-friendly data provided in web documents into a structured, machine-readable format. Rule-based information extractors are the tools intended to extract the information of interest automatically and offer it in a structured format that mining tools can process. However, the performance of an information extractor strongly depends on the search heuristic employed, since bad choices regarding how to learn a rule may easily result in a loss of effectiveness and/or efficiency. Improving the efficiency of search heuristics is of utmost importance in the field of Web Information Extraction since typical datasets are very large. In this paper, we employ an information extractor based on a classical top-down algorithm that uses the so-called Information Gain heuristic introduced by Quinlan and Cameron-Jones. Unfortunately, the Information Gain suffers from some well-known problems, so we analyse an intuitive alternative, Termini, which is clearly more efficient; we also analyse other proposals in the literature and conclude that none of them outperforms this alternative.
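
As a point of reference for the heuristic discussed above, a minimal sketch of the Quinlan/Cameron-Jones style information gain for top-down rule learning is shown below; the coverage counts are made up.

```python
import math

def foil_information_gain(p0, n0, p1, n1):
    """FOIL-style information gain for specialising a rule (Quinlan & Cameron-Jones).

    p0, n0: positive/negative examples covered before adding the candidate condition.
    p1, n1: positive/negative examples covered after adding it.
    """
    info_before = -math.log2(p0 / (p0 + n0))
    info_after = -math.log2(p1 / (p1 + n1))
    return p1 * (info_before - info_after)

# Candidate A keeps 40 of 50 positives and sheds most negatives; candidate B keeps far fewer positives.
print(foil_information_gain(p0=50, n0=100, p1=40, n1=10))  # ~50.5, preferred
print(foil_information_gain(p0=50, n0=100, p1=20, n1=2))   # ~28.9
```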

Keywords: information extraction, search heuristics, semi-structured documents, web mining

Procedia PDF Downloads 330