Search results for: domain decomposition
1724 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because the optical signal is detected not only from the subwavelength area beneath the tip but also from a wider, diffraction-limited area of the laser's waist that might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of light-grating coupling and a search for grating parameters that enhance the near-field light beneath the tip apex. The aim is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every single slit of the grating due to the lightning rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways, depending on its geometry and material. The phase-modulating grating on the probe is a sort of metasurface that allows manipulation of the spatial frequencies of the incident field. The spatial frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
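As a rough illustration of the angular spectrum step described in this abstract, the sketch below (Python/NumPy) phase-modulates a plane wave with a grating, decomposes it into spatial frequencies, and reads off a crude figure of merit at the plasmon wavevector; the wavelength, effective plasmon index, modulation depth and grating period are assumed values, not parameters from the paper.

```python
import numpy as np

# Illustrative 1D sketch (not the authors' code): a plane wave is phase-modulated
# by a grating of period L on the probe surface; the angular spectrum (spatial FFT)
# is inspected at the plasmon wavevector k_spp, and the figure of merit is taken as
# the ratio of the intensity carried by that component to the incident component.
wavelength = 633e-9                       # assumed illumination wavelength (m)
k0 = 2.0 * np.pi / wavelength
n_spp = 1.05                              # assumed effective index of the surface mode
k_spp = n_spp * k0                        # phase-matching target at normal incidence

L = wavelength / n_spp                    # first-order grating period with 2*pi/L = k_spp
x = np.linspace(0.0, 200 * wavelength, 2**14)
dx = x[1] - x[0]

phase_depth = 0.5                         # assumed phase-modulation depth (radians)
E_in = np.ones_like(x)                    # unit-amplitude plane wave
E_mod = E_in * np.exp(1j * phase_depth * np.cos(2.0 * np.pi * x / L))

# Angular spectrum decomposition
kx = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
spectrum = np.fft.fft(E_mod) * dx
intensity = np.abs(spectrum) ** 2

# Pick the spectral component closest to the plasmon wavevector
idx = np.argmin(np.abs(kx - k_spp))
fom = intensity[idx] / intensity.max()    # crude figure of merit: surface mode vs incident
print(f"figure of merit ~ {fom:.3e} at kx = {kx[idx]:.3e} rad/m (target {k_spp:.3e})")
```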
Procedia PDF Downloads 282
1723 Wavelet Based Advanced Encryption Standard Algorithm for Image Encryption
Authors: Ajish Sreedharan
Abstract:
With the fast evolution of digital data exchange, information security has become very important in data storage and transmission. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. Because the AES encryption process is applied to the whole image, it is difficult to improve efficiency. In this paper, wavelet decomposition is used to concentrate the main information of the image into the low-frequency part. Then, AES encryption is applied to the low-frequency part. The high-frequency parts are XORed with the encrypted low-frequency part, and a wavelet reconstruction is applied. Theoretical analysis and experimental results show that the proposed algorithm has high efficiency and satisfactory security, and suits image data transmission.
Keywords: discrete wavelet transforms, AES, dynamic SBox
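A hedged sketch of the described pipeline is given below (Python, using pywt and pycryptodome); the 8-bit quantization, CTR mode, fixed nonce and key are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pywt
from Crypto.Cipher import AES   # pycryptodome

# Sketch of the scheme: one-level 2D wavelet decomposition, AES encryption of the
# low-frequency band, XOR of the high-frequency bands with the encrypted
# low-frequency bytes, then wavelet reconstruction to form the cipher image.
def quantize(band):
    lo, hi = float(band.min()), float(band.max())
    return np.round((band - lo) / (hi - lo + 1e-12) * 255.0).astype(np.uint8)

def encrypt_image(img, key):
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), "haar")
    cA_q = quantize(cA)

    cipher = AES.new(key, AES.MODE_CTR, nonce=b"\x00" * 8)   # fixed nonce: demo only
    enc_A = np.frombuffer(cipher.encrypt(cA_q.tobytes()), dtype=np.uint8)
    enc_A = enc_A.reshape(cA_q.shape)

    # XOR each (quantized) detail band with the encrypted approximation band
    cH_x = quantize(cH) ^ enc_A
    cV_x = quantize(cV) ^ enc_A
    cD_x = quantize(cD) ^ enc_A

    # Wavelet reconstruction of the scrambled coefficients gives the cipher image
    return pywt.idwt2((enc_A.astype(np.float64),
                       (cH_x.astype(np.float64),
                        cV_x.astype(np.float64),
                        cD_x.astype(np.float64))), "haar")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(128, 128))
    cipher_img = encrypt_image(image, key=b"0123456789abcdef")   # 128-bit key, demo only
    print(cipher_img.shape)
```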
Procedia PDF Downloads 430
1722 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms
Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann
Abstract:
Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders. The connection is critical with respect to the safety of the whole system. Therefore, fault detection systems are investigated, which could detect early warning signs of a possible failure in the connection elements. Previously, a model-based method using an Extended Kalman Filter was developed to detect the reduction of rope stiffness. This method detected several types of faults reliably, but some types of faults were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise. When the wave height is low, a long time is needed to detect a fault and the accuracy is not always satisfactory. In this sense, it is necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Inspired by this work on the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation by using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and help users understand the relationship between the output and the contributions of each acceleration sensor.
Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI
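The domain adaptation idea can be illustrated with a minimal DANN sketch (PyTorch) built around a gradient reversal layer; the layer sizes, the number of acceleration features and the two fault classes are assumptions for illustration, not the Space@Sea implementation.

```python
import torch
import torch.nn as nn

# Minimal DANN sketch: a gradient reversal layer sits between a shared feature
# extractor and a domain classifier, pushing the features to be domain-invariant.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DANN(nn.Module):
    def __init__(self, n_features=64, n_classes=2, n_domains=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(128, n_features), nn.ReLU(),
                                      nn.Linear(n_features, n_features), nn.ReLU())
        self.label_head = nn.Linear(n_features, n_classes)    # e.g. rope healthy vs degraded
        self.domain_head = nn.Linear(n_features, n_domains)   # e.g. calm vs rough sea state

    def forward(self, x, lam=1.0):
        z = self.features(x)
        return self.label_head(z), self.domain_head(GradReverse.apply(z, lam))

# Usage: minimize the label loss on labelled data and the domain loss on all data;
# the reversed gradient makes the shared features harder to attribute to a domain.
model = DANN()
x = torch.randn(8, 128)                  # 8 windows of acceleration features (assumed size)
y_hat, d_hat = model(x, lam=0.3)
print(y_hat.shape, d_hat.shape)
```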
Procedia PDF Downloads 179
1721 A Tool for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation of an institutional risk profile for endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support setting up risk factors with just the values that are most important for a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level for file formats. Our goal is to make use of a domain expert knowledge base aggregated from a digital preservation survey in order to detect preservation risks for a particular institution. Another contribution is support for visualisation and analysis of risk factors for a required dimension. The proposed methods improve the visibility of risk factor information and the quality of a digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and automatically aggregated file format metadata from linked open data sources. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: digital information management, file format, endangerment analysis, fuzzy models
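Since the survey-derived knowledge base and the exact fuzzy models are not reproduced here, the following toy sketch (Python) only illustrates how per-format risk factors might be aggregated into a multidimensional vector and a single endangerment score using a simple fuzzy membership function; all factor names, values and weights are hypothetical.

```python
import numpy as np

# Toy aggregation: each raw factor value is mapped to a fuzzy "risk" membership,
# and an institution-specific weight vector turns the risk vector into one score.
def trapezoid(x, a, b, c, d):
    """Fuzzy membership rising on [a, b], flat on [b, c], falling on [c, d]."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (d - x) / (d - c + 1e-9)), 0.0, 1.0))

# Hypothetical risk dimensions for one file format (e.g. aggregated from linked open data)
factors = {"software_support": 0.2, "vendor_activity": 0.3,
           "format_complexity": 0.8, "community_adoption": 0.4}
weights = {"software_support": 0.4, "vendor_activity": 0.3,
           "format_complexity": 0.2, "community_adoption": 0.1}  # institutional profile

risk_vector = {k: trapezoid(v, 0.0, 0.5, 1.0, 1.5) for k, v in factors.items()}
endangerment = sum(weights[k] * risk_vector[k] for k in factors)
print(risk_vector, "->", round(endangerment, 2))
```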
Procedia PDF Downloads 402
1720 An Efficient Approach for Speed up Non-Negative Matrix Factorization for High Dimensional Data
Authors: Bharat Singh, Om Prakash Vyas
Abstract:
Nowadays, applications dealing with high dimensional data are widely used in popular areas. To tackle such data, various approaches have been developed by researchers in the last few decades. One of the problems with NMF approaches is that their randomized initial values cannot provide a global optimum within a limited number of iterations, only a local optimum. Due to this, we propose a new approach that considers the initial values of the decomposition in order to tackle the issue of computational expense. We have devised an algorithm for initializing the values of the decomposed matrices based on PSO (Particle Swarm Optimization). Through the experimental results, we show that the proposed method converges very fast in comparison to other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
Keywords: ALS, NMF, high dimensional data, RMSE
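A hedged sketch of the idea (Python/NumPy) is shown below: a tiny particle swarm searches for initial factors that minimize the Frobenius reconstruction error, and the best particle seeds standard multiplicative-update NMF; swarm size, inertia and learning coefficients are illustrative, not the authors' settings.

```python
import numpy as np

# Sketch: PSO provides the initial W, H; multiplicative updates then refine them.
def pso_init(X, r, n_particles=10, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    pos = rng.random((n_particles, r * (m + n)))
    vel = np.zeros_like(pos)
    pbest, pbest_err = pos.copy(), np.full(n_particles, np.inf)

    def err(p):
        W = p[: m * r].reshape(m, r)
        H = p[m * r:].reshape(r, n)
        return np.linalg.norm(X - W @ H)

    for _ in range(iters):
        e = np.array([err(p) for p in pos])
        better = e < pbest_err
        pbest[better], pbest_err[better] = pos[better], e[better]
        gbest = pbest[np.argmin(pbest_err)]
        vel = (0.7 * vel
               + 1.5 * rng.random(pos.shape) * (pbest - pos)
               + 1.5 * rng.random(pos.shape) * (gbest - pos))
        pos = np.clip(pos + vel, 1e-9, None)              # keep factors non-negative
    g = pbest[np.argmin(pbest_err)]
    return g[: m * r].reshape(m, r), g[m * r:].reshape(r, n)

def nmf_multiplicative(X, W, H, iters=200, eps=1e-9):
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = np.random.default_rng(1).random((100, 40))            # non-negative toy data
W0, H0 = pso_init(X, r=5)
W, H = nmf_multiplicative(X, W0, H0)
print("RMSE:", np.sqrt(np.mean((X - W @ H) ** 2)))
```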
Procedia PDF Downloads 340
1719 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of the training set of signal waveforms using the Principal Component Analysis (PCA) decomposition. Besides the advantage of including the additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from four voltage levels to the recovery of the signal waveform, the spatial resolution is improved to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is equal to 0.93 cm. This is very important since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest can be utilized.
Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization
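The closed-form recovery step can be illustrated with the following sketch (Python/NumPy); the toy pulse shape, the choice of 8 sample indices standing in for the four-threshold samples, and the regularization weight are assumptions, not the J-PET implementation.

```python
import numpy as np

# Sketch: a PCA basis is learned from training waveforms, and a Tikhonov-regularized
# least-squares problem with an explicit solution recovers the full waveform from
# only 8 observed samples.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)                               # ns, assumed time grid

def pulse(a, t0, tau_r=0.6, tau_f=2.5):                   # toy scintillator-like pulse
    rise = 1.0 - np.exp(-(t - t0) / tau_r)
    fall = np.exp(-(t - t0) / tau_f)
    return a * np.where(t > t0, rise * fall, 0.0)

train = np.array([pulse(rng.uniform(0.5, 1.5), rng.uniform(2, 6)) for _ in range(500)])
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:8].T                                              # 8 principal components

# Unseen signal, observed only at 8 sample indices (stand-in for threshold crossings)
y_true = pulse(1.1, 4.0)
idx = np.sort(rng.choice(t.size, size=8, replace=False))
A = np.eye(t.size)[idx]                                   # sampling operator
y = A @ y_true

lam = 1e-2                                                # regularization weight, assumed
M = B.T @ A.T @ A @ B + lam * np.eye(B.shape[1])
c = np.linalg.solve(M, B.T @ A.T @ (y - A @ mean))        # explicit Tikhonov solution
y_rec = mean + B @ c
print("recovery RMSE:", np.sqrt(np.mean((y_rec - y_true) ** 2)))
```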
Procedia PDF Downloads 445
1718 Quality of Life Among People with Mental Illness Attending a Psychiatric Outpatient Clinic in Ethiopia: A Structural Equation Model
Authors: Wondale Getinet Alemu, Lillian Mwanri, Clemence Due, Telake Azale, Anna Ziersch
Abstract:
Background: Mental illness is one of the most severe, chronic, and disabling public health problems that affect patients' quality of life (QoL). Improving the QoL of people with mental illness is one of the most critical steps in stopping disease progression and avoiding complications of mental illness. Therefore, we aimed to assess the QoL and its determinants in patients with mental illness in outpatient clinics in Northwest Ethiopia in 2023. Methods: A facility-based cross-sectional study was conducted among people with mental illness in an outpatient clinic in Ethiopia. The sampling interval was decided by dividing the total number of study participants who had a follow-up appointment during the data collection period (2400) by the total sample size of 638, with the starting point selected by the lottery method. The interviewer-administered WHOQOL BREF-26 tool was used to measure the QoL of people with mental illness. The domains and Health-Related Quality of Life (HRQoL) were identified. The indirect and direct effects of variables were calculated using structural equation modeling with SPSS-28 and Amos-28 software. A p-value of < 0.05 and a 95% CI were used to evaluate statistical significance. Results: A total of 636 (99.7%) participants responded and completed the WHOQOL-BREF questionnaire. The mean score of the overall HRQoL of people with mental illness in the outpatient clinic was 49.6 ± 10 SD. The highest QoL was found in the physical health domain (50.67 ± 9.5 SD), and the lowest mean QoL was found in the psychological health domain (48.41 ± 10 SD). Rural residence, drug nonadherence, suicidal ideation, not getting counseling, moderate or severe subjective severity, lack of family participation in patient care, and a family history of mental illness had an indirect negative effect on HRQoL. Alcohol use and the psychological health domain had a direct positive effect on QoL. Furthermore, objective severity of illness, low self-esteem, and a history of mental illness in the family had both direct and indirect effects on QoL. Furthermore, sociodemographic factors (residence, educational status, marital status), social support-related factors (self-esteem, family not participating in patient care), substance use factors (alcohol use, tobacco use) and clinical factors (objective and subjective severity of illness, not getting counseling, suicidal ideation, number of episodes, comorbid illness, family history of mental illness, poor drug adherence) directly and indirectly affected QoL. Conclusions: In this study, the QoL of people with mental illness was poor, with the psychological health domain being the most affected. Sociodemographic factors, social support-related factors, drug use factors, and clinical factors directly and indirectly affect QoL through the mediator variables of the physical, psychological, social relation and environmental health domains. In order to improve the QoL of people with mental illnesses, we recommend that emphasis be given to addressing the burden of mental illness, including the development of policy and practice drivers that address the above-identified factors.
Keywords: quality of life, mental wellbeing, mental illness, mental disorder, Ethiopia
Procedia PDF Downloads 79
1717 Development of a CFD Model for PCM Based Energy Storage in a Vertical Triplex Tube Heat Exchanger
Authors: Pratibha Biswal, Suyash Morchhale, Anshuman Singh Yadav, Shubham Sanjay Chobe
Abstract:
Energy demands are increasing whereas energy sources, especially non-renewable sources, are limited. Due to the intermittent nature of renewable energy sources, it has become the need of the hour to find new ways to store energy. Out of various energy storage methods, latent heat thermal storage devices are becoming popular due to their high energy density per unit mass and volume at nearly constant temperature. This work presents a computational fluid dynamics (CFD) model, built in ANSYS FLUENT 19.0, of the energy storage characteristics of a phase change material (PCM) filled in a vertical triplex tube thermal energy storage system. A vertical triplex tube heat exchanger, as its name suggests, consists of three concentric tubes (pipe sections) that divide the device into three fluid domains. The PCM fills the middle domain, with heat transfer fluids flowing in the outer and innermost domains. To enhance the heat transfer inside the PCM, eight fins have been incorporated between the internal and external tubes. These fins run radially outwards from the outer wall of the innermost tube to the inner wall of the middle tube, dividing the middle domain (between the innermost and middle tubes) into eight sections. These eight sections are then filled with the PCM. The validation is carried out against earlier work, and a grid independence test is also presented. Further studies on the freezing and melting processes were carried out. The results are presented as pictorial representations of isotherms and liquid fraction.
Keywords: heat exchanger, thermal energy storage, phase change material, CFD, latent heat
Procedia PDF Downloads 152
1716 Social-Cognitive Aspects of Interpretation: Didactic Approaches in Language Processing and English as a Second Language Difficulties in Dyslexia
Authors: Schnell Zsuzsanna
Abstract:
Background: The interpretation of written texts, language processing in the visual domain, in other words, atypical reading ability, also known as dyslexia, is an ever-growing phenomenon in today's societies and educational communities. The much-researched problem affects cognitive abilities and, coupled with normal intelligence, typically manifests as difficulties in the differentiation of sounds and orthography and in the holistic processing of written words. The factors of susceptibility are varied: social, cognitive-psychological, and linguistic factors interact with each other. Methods: The research will explain the psycholinguistics of dyslexia on the basis of several empirical experiments and demonstrate how the domain-general abilities of inhibition, retrieval from the mental lexicon, priming, phonological processing, and visual modality transfer affect successful language processing and interpretation. Interpretation of visual stimuli is hindered, and the problem seems to be embedded in a sociocultural, psycholinguistic, and cognitive background. This makes the picture even more complex, suggesting that understanding and resolving the issues of dyslexia has to be interdisciplinary, aided by several disciplines in the field of humanities and social sciences, and should be researched from an empirical approach, where the practical, educational corollaries can be analyzed on an applied basis. Aim and applicability: The lecture sheds light on the applied, cognitive aspects of interpretation, the social cognitive traits of language processing, and the mental underpinnings of cognitive interpretation strategies in different languages (namely, Hungarian and English), offering solutions and a few applied techniques for success in foreign language learning that can serve as useful advice for the developers of testing methodologies and measures across ESL teaching and testing platforms.
Keywords: dyslexia, social cognition, transparency, modalities
Procedia PDF Downloads 83
1715 Optimization of Slider Crank Mechanism Using Design of Experiments and Multi-Linear Regression
Authors: Galal Elkobrosy, Amr M. Abdelrazek, Bassuny M. Elsouhily, Mohamed E. Khidr
Abstract:
Crank shaft length, connecting rod length, crank angle, engine rpm, cylinder bore, mass of piston and compression ratio are the inputs that can control the performance of the slider crank mechanism and hence its efficiency. Several combinations of these seven inputs are used and compared. The throughput engine torque predicted by the simulation is analyzed through two different regression models, with and without interaction terms, developed according to multi-linear regression using LU decomposition to solve the system of algebraic equations. These models are validated. A regression model in seven inputs including their interaction terms lowered the polynomial degree from 3rd degree to 1st degree and suggested valid predictions and stable explanations.
Keywords: design of experiments, regression analysis, SI engine, statistical modeling
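A minimal sketch of such a model (Python, NumPy/SciPy) is given below: a first-order regression in seven inputs plus pairwise interaction terms, fitted by solving the normal equations with an LU decomposition; the data are random stand-ins for the designed experiments, not the study's runs.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import lu_factor, lu_solve

# Toy data standing in for the DOE runs over the seven inputs (crank length,
# rod length, crank angle, rpm, bore, piston mass, compression ratio).
rng = np.random.default_rng(0)
X = rng.random((60, 7))                                   # 60 hypothetical runs
y = rng.random(60)                                        # hypothetical torque response

def design_matrix(X, interactions=True):
    cols = [np.ones(X.shape[0])] + [X[:, j] for j in range(X.shape[1])]
    if interactions:                                      # all pairwise interaction terms
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

A = design_matrix(X)                                      # 1 + 7 + 21 = 29 coefficients
lu, piv = lu_factor(A.T @ A)                              # normal equations solved via LU
beta = lu_solve((lu, piv), A.T @ y)
print("number of fitted coefficients:", beta.size)
```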
Procedia PDF Downloads 184
1714 From the Sharing Economy to Social Manufacturing: Analyzing Collaborative Service Networks in the Manufacturing Domain
Authors: Babak Mohajeri
Abstract:
In recent years, the conventional business model of ownership has shifted towards accessibility in a variety of markets. Two trends can be observed in the evolution of this rental-like business model. The first is technological development that enables the emergence of new business models, which increasingly become agile and flexible. For example, Spotify, an online music streaming company, conveniently provides consumers access to millions of music tracks through the smartphone, tablet or computer. Similarly, Car2Go, the car sharing company, provides its members with flexible access to nearby shared cars. The second trend is the increasing communication and connection via social networks. This trend enables a shift to peer-to-peer accessibility-based business models. Conventionally, companies provide their customers access to the company's own products or services. In the peer-to-peer model, nonetheless, companies facilitate access and connection across their customers to use other customers' property, skills, competencies or services. This is the so-called sharing economy business model. The aim of this study is to investigate a new and emerging type of sharing economy model in which the roles of customers and service providers may dramatically change. This new model is called Collaborative Service Networks. We propose a mechanism for the Collaborative Service Networks business model. Uber and Airbnb, two successful growing companies, have been selected for our case studies and their business models are analyzed. Finally, we study the emergence of collaborative service networks in the manufacturing domain. Our findings point to a new manufacturing paradigm called social manufacturing.
Keywords: sharing economy, collaborative service networks, social manufacturing, manufacturing development
Procedia PDF Downloads 317
1713 The Role of a Novel DEAD-Box Containing Protein in NLRP3 Inflammasome Activation
Authors: Yi-Hui Lai, Chih-Hsiang Yang, Li-Chung Hsu
Abstract:
The inflammasome is a protein complex that modulates caspase-1 activity, resulting in proteolytic cleavage of proinflammatory cytokines, such as IL-1β and IL-18, into their bioactive forms. It has been shown that the inflammasomes play a crucial role in the clearance of pathogenic infection and tissue repair. However, dysregulated inflammasome activation contributes to a wide range of human diseases such as cancers and auto-inflammatory diseases. Yet, the regulation of NLRP3 inflammasome activation remains largely unknown. We discovered that a novel DEAD box protein, whose biological function has not been reported, not only negatively regulates NLRP3 inflammasome activation by interfering with NLRP3 inflammasome assembly and cellular localization but also mitigates pyroptosis upon pathogen invasion. It is the first DEAD-box protein shown to be involved in modulation of inflammasome activation. In our study, we found that caspase-1 activation and mature IL-1β production were largely enhanced upon LPS challenge in DEAD box-containing protein-deleted THP-1 macrophages and bone marrow-derived macrophages (BMDMs). In addition, this DEAD box-containing protein migrates from the nucleus to the cytoplasm upon LPS stimulation, which is required for its inhibitory role in NLRP3 inflammasome activation. The DEAD box-containing protein specifically interacted with the LRR motif of NLRP3 via its DEAD domain. Furthermore, given the crucial role of the NLRP3 LRR domain in the recruitment of NLRP3 to mitochondria and binding to its adaptor ASC, we found that the interaction of NLRP3 and ASC was downregulated in the presence of the DEAD box-containing protein. In addition to the mechanistic study, we also found that this DEAD box protein protects host cells from inflammasome-triggered cell death in response to a broad range of pathogens, such as Candida albicans and Streptococcus pneumoniae, involved in nosocomial infections and severe fever shock. Collectively, our results suggest that this novel DEAD box molecule might be a key therapeutic target for various infectious diseases.
Keywords: inflammasome, inflammation, innate immunity, pyroptosis
Procedia PDF Downloads 281
1712 Genomic and Evolutionary Diversity of Long Terminal Repeat (LTR) Retrotransposons in Date Palm (Phoenix dactylifera)
Authors: Faisal Nouroz, Mukaramin Mukaramin
Abstract:
Of the transposable elements (TEs), the retrotransposons are the most abundant elements identified in many sequenced genomes. They have played a major role in genome evolution, rearrangement, and expansion based on their copy-and-paste mode of proliferation. They are further divided into LTR and non-LTR retrotransposons. The purpose of the current study was to identify the LTR retrotransposons in the sequenced Phoenix dactylifera genome and to study their structural diversity. A total of 150 P. dactylifera BAC sequences with sizes > 60 kb were randomly retrieved from the National Center for Biotechnology Information (NCBI) database and screened for the presence of LTR retrotransposons. Seven bacterial artificial chromosome (BAC) sequences showed full-length LTR retrotransposons, with 4 Copia and 3 Gypsy families having variable copy numbers in their respective families. The reverse transcriptase (RT) domain was found to be the most conserved domain among the Copia and Gypsy superfamilies and was used for the evolutionary analysis. The amino acid residues among various RT sequences showed variability in their percentages, indicating post-divergence evolution. Leucine was found in the highest proportion, followed by lysine, while methionine and tryptophan were in the lowest percentages. The phylogenetic analysis based on RT domains confirmed that, despite the RT regions being the most conserved, several evolutionary events occurred causing nucleotide polymorphisms and hence clustering of the Gypsy and Copia superfamilies into their respective lineages. The study will be helpful for the identification and annotation of these elements in other species and genera and for determining their distribution patterns on chromosomes by fluorescence in situ hybridization techniques.
Keywords: transposable elements, Phoenix dactylifera, retrotransposons, phylogenetic analysis
Procedia PDF Downloads 127
1711 Voice of Customer: Mining Customers' Reviews on On-Line Car Community
Authors: Kim Dongwon, Yu Songjin
Abstract:
This study identifies the business value of VOC (Voice of Customer). Precisely, we intend to demonstrate how much the negative and positive sentiment of VOC influences car sales market share in the United States. We extract 7 emotions - sadness, shame, anger, fear, frustration, delight and satisfaction - from the VOC data, 23,204 pieces of opinions that had been posted on a car-related on-line community from 2007 to 2009 (a part of the data collection from 2007 to 2015), and intend to clarify the correlation between negative and positive sentiment keywords and their contribution to market share. In order to develop a lexicon for each category of negative and positive sentiment, we took advantage of the corpus program AntConc 3.4.1w and the on-line sentiment resource SentiWordNet, and identified the part-of-speech (POS) information of words in the customers' opinions by using a part-of-speech tagging function provided by TextAnalysisOnline. For the purpose of the present study, a total of 45,741 pieces of customers' opinions on 28 car manufacturing companies had been collected, including titles and status information. We conducted an experiment to examine whether the inclusion, frequency and intensity of terms with negative and positive emotions in each category affect the adoption of customer opinions for vehicle organizations' market share. In the experiment, we statistically verified that there is a correlation between customer opinions containing negative and positive emotions and variation of market share. Particularly, "Anger," one of the negative domains, is significantly influential on car sales market share. The domains "Delight" and "Satisfaction" increased in proportion to the growth of market share.
Keywords: data mining, opinion mining, sentiment analysis, VOC
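A hedged sketch of the lexicon-based scoring step (Python/NLTK) is shown below; the mapping of Penn tags to WordNet POS letters, the first-sense lookup and the simple summation are illustrative choices, not the authors' exact pipeline.

```python
import nltk
from nltk.corpus import sentiwordnet as swn

# Each opinion is POS-tagged, content words are looked up in SentiWordNet, and
# positive/negative scores are accumulated over the text.
# Requires: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger"),
#           nltk.download("wordnet"), nltk.download("sentiwordnet")

def wn_pos(tag):
    # Map Penn Treebank tags to WordNet POS letters (adjective, noun, verb, adverb)
    return {"J": "a", "N": "n", "V": "v", "R": "r"}.get(tag[0])

def sentiment_scores(text):
    pos = neg = 0.0
    for word, tag in nltk.pos_tag(nltk.word_tokenize(text)):
        p = wn_pos(tag)
        if p is None:
            continue
        synsets = list(swn.senti_synsets(word.lower(), p))
        if synsets:
            pos += synsets[0].pos_score()     # first sense only, a simplification
            neg += synsets[0].neg_score()
    return pos, neg

print(sentiment_scores("The brakes are frustrating but the engine is delightful."))
```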
Procedia PDF Downloads 212
1710 Structural Characterization of TIR Domains Interaction
Authors: Sara Przetocka, Krzysztof Żak, Grzegorz Dubin, Tadeusz Holak
Abstract:
Toll-like receptors (TLRs) play a central role in the innate immune response and inflammation by recognizing pathogen-associated molecular patterns (PAMPs). A fundamental basis of TLR signalling is the recruitment and association of adaptor molecules that contain the structurally conserved Toll/interleukin-1 receptor (TIR) domain. MyD88 (myeloid differentiation primary response gene 88) is the universal adaptor for TLRs and cooperates with Mal (MyD88 adapter-like protein, also known as TIRAP) in the TLR4 response, which is predominantly involved in inflammation, host defence and carcinogenesis. To date, two possible models of MyD88, Mal and TLR4 interactions have been proposed. The aim of our studies is to confirm or refute the presented models and accomplish the full structural characterisation of the TIR domain interaction. Using molecular cloning methods we obtained several constructs of the MyD88 and Mal TIR domains with GST or 6xHis tags. Gel filtration as well as pull-down analysis confirmed that the recombinant TIR domains from MyD88 and Mal bind in complexes. To examine whether the obtained complexes are homo- or heterodimers, we carried out a cross-linking reaction of the TIR domains with the BS3 compound, combined with mass spectrometry. To investigate which amino acid residues are involved in this interaction, NMR titration experiments were performed: a 15N MyD88-TIR solution was complemented with non-labelled Mal-TIR. The results undoubtedly indicate that MyD88-TIR interacts with Mal-TIR. Moreover, 2D spectra demonstrated that Mal-TIR self-dimerization occurs simultaneously, which is necessary to create a proper scaffold for the Mal-TIR and MyD88-TIR interaction. The final step of this study will be crystallization of the MyD88 and Mal TIR domain complex. This crystal structure and the characterisation of its interface will have an impact on understanding the TLR signalling pathway and possibly will be used in the development of new anti-cancer treatments.
Keywords: cancer, MyD88, TIR domains, Toll-like receptors
Procedia PDF Downloads 295
1709 Non-Destructive Technique for Detection of Voids in the IC Package Using Terahertz-Time Domain Spectrometer
Authors: Sung-Hyeon Park, Jin-Wook Jang, Hak-Sung Kim
Abstract:
In recent years, the Terahertz (THz) time-domain spectroscopy (TDS) imaging method has received considerable interest as a promising non-destructive technique for the detection of internal defects. In comparison to other non-destructive techniques such as the x-ray inspection method, scanning acoustic tomography (SAT) and the microwave inspection method, the THz-TDS imaging method has many advantages: First, it can measure the exact thickness and location of defects. Second, it doesn't require a liquid couplant, which is crucial for delivering the ultrasonic wave power in the SAT method. Third, it does not damage materials or harm human bodies, while the x-ray inspection method does. Finally, it exhibits better spatial resolution than the microwave inspection method. However, this technology could not previously be applied to IC packages because THz radiation can penetrate through a wide variety of materials, including polymers and ceramics, but not metals. Therefore, it is difficult to detect defects in IC packages, which are composed of not only epoxy and semiconductor materials but also various metals such as copper, aluminum and gold. In this work, we propose a special method for detecting voids in the IC package using a THz-TDS imaging system. The IC package specimens for this study were prepared by the Packaging Engineering Team at Samsung Electronics. Our THz-TDS imaging system has a special reflection mode, called pitch-catch mode, which can change the incidence angle in the reflection mode from 10° to 70°, while other systems have a transmission mode and either the normal reflection mode or a reflection mode fixed at a certain angle. Therefore, to find the voids in the IC package, we investigated the appropriate angle by changing the incidence angle of the THz wave emitter and detector. As a result, the voids in the IC packages were successfully detected using our THz-TDS imaging system.
Keywords: terahertz, non-destructive technique, void, IC package
Procedia PDF Downloads 471
1708 Mechanical Tests and Analyzes of Behaviors of High-Performance of Polyester Resins Reinforced With Unifilo Fiberglass
Authors: Băilă Diana Irinel, Păcurar Răzvan, Păcurar Ancuța
Abstract:
In recent years, composite materials have been increasingly used in automotive, aeronautical, aerospace and construction applications. In aerospace, composite materials have been used in applications such as engine blades, brackets, interiors, nacelles, propellers/rotors, single-aisle wings and wide-body wings. The fields of use of composite materials have multiplied with the improvement of material properties, such as stability and adaptation to the environment, mechanical strength, wear resistance, moisture resistance, etc. Composite materials are classified, concerning the type of matrix material, as metallic, polymeric and ceramic based composites, and are grouped according to the reinforcement type as fibre, particulate and laminate composites. Production of a better material is made more likely by combining two or more materials with complementary properties. The best combination of strength and ductility may be accomplished in solids that consist of fibres embedded in a host material. Polyester is a suitable component for composite materials, as it adheres readily to the particles, sheets, or fibres of the other components. The important properties of the reinforcing fibres are their high strength and high modulus of elasticity. For applications, as in the automotive or aeronautical domains, in which a high strength-to-weight ratio is important, non-metallic fibres such as fiberglass have a distinct advantage because of their low density. In general, the glass fibre content varied between 9 and 33 wt.% in the composites. In this article, high-performance glass-epoxy and glass-polyester composite materials used in the automotive domain are analyzed by performing tensile and flexural tests and SEM analyses.
Keywords: glass-polyester composite, glass fibre, traction and flexion tests, SEM analyses
Procedia PDF Downloads 155
1707 Investigating Students' Understanding about Mathematical Concept through Concept Map
Authors: Rizky Oktaviana
Abstract:
The main purpose of studying lies in improving students' understanding. Teachers usually use written tests to measure students' understanding of learning material, especially mathematical learning material. This common method has a weak point: in mathematics content, written tests only show the procedural steps used to solve mathematical problems. Therefore, teachers are unable to see whether students actually understand mathematical concepts and the relations between concepts or not. One of the best tools to observe students' understanding of mathematical concepts is the concept map. The goal of this research is to describe junior high school students' understanding of mathematical concepts through concept maps based on differences in mathematical ability. There were three steps in this research; the first step was choosing the research subjects by giving a mathematical ability test to students. The subjects of this research are three students with different mathematical abilities: high, intermediate and low. The second step was giving concept mapping training to the chosen subjects. The last step was giving the subjects a concept mapping task about functions. Nodes representing concepts of function were provided in the concept mapping task, and the subjects had to use these nodes in their concept maps. Based on the data analysis, the results of this research show that the subject with high mathematical ability has formal understanding, because that subject could see the connections between the concepts of function and arranged the concepts into a concept map with a valid hierarchy. The subject with intermediate mathematical ability has relational understanding, because the subject could arrange all the given concepts and gave appropriate labels between concepts, though the connections were not yet represented specifically. The subject with low mathematical ability has poor understanding of function, as seen from a concept map that used only a few of the given concepts, because the subject could not see the connections between concepts. All subjects have instrumental understanding of the relation between the linear function concept, the quadratic function concept, and domain, codomain and range.
Keywords: concept map, concept mapping, mathematical concepts, understanding
Procedia PDF Downloads 270
1706 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves
Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare
Abstract:
The proposed communication deals with the research and development of a rotary direct-drive servo valve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and the sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists firstly of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction and inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or Root Mean Square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have experienced that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To solve this issue, the authors have selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors have shown that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values that can be predicted by the classical analytical models. In contrast, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate minutely the influence of the studied domain and the setting of the CFD simulation. It was first shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient regarding their hydraulic diameter. The dead volume on the uncontrolled orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. This also emphasized the influence of the choices made for the implementation of CFD calculation and results analysis. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD to design high-pressure spool valves.
Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve
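As a back-of-the-envelope companion to such CFD results, the classical steady-state estimate of the metering flow and axial flow force can be written in a few lines (Python); all numerical values are assumed, and the formula is the textbook momentum-balance estimate, not the authors' model.

```python
import numpy as np

# Textbook estimate for a sharp-edged sliding spool: flow from the orifice
# equation, axial flow force from the momentum balance F = rho * Q * v * cos(theta),
# which reduces to F = 2 * Cd * A * dp * cos(theta). All numbers are illustrative.
rho = 850.0                      # oil density, kg/m^3
dp = 100e5                       # pressure drop across the metering orifice, Pa
Cd = 0.7                         # discharge coefficient (typical turbulent value)
d = 8e-3                         # spool diameter, m
x = 0.3e-3                       # spool opening, m
theta = np.deg2rad(69.0)         # classical jet angle for a sharp-edged orifice

A = np.pi * d * x                            # annular metering area
Q = Cd * A * np.sqrt(2.0 * dp / rho)         # volumetric flow rate, m^3/s
v = np.sqrt(2.0 * dp / rho)                  # jet velocity at the vena contracta
F = rho * Q * v * np.cos(theta)              # steady axial flow force, N
print(f"Q = {Q * 6e4:.1f} L/min, flow force = {F:.1f} N")
```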
Procedia PDF Downloads 42
1705 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption towards energy savings and energy efficiency. Non Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection, feature extraction, then general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of the household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes its state. In order to fit with the capabilities of practical existing smart meters, we work on low sampling data with a frequency of (1/60) Hz. The data is simulated on the Load Profile Generator software (LPG), which was not previously taken into consideration for NILM purposes in the literature. LPG is a numerical software that uses behaviour simulation of people inside the house to generate residential energy consumption data. The proposed event detection method targets low consumption loads that are difficult to detect. Also, it facilitates the extraction of specific features used for general appliance modeling. In addition to this, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, there exist few unsupervised techniques employed with low sampling data in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector for the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, those formed signatures are used to tune general model types for appliance identification using unsupervised algorithms. This method is evaluated using both simulated data on LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics using confusion matrix based metrics, considering accuracy, precision, recall and error-rate. The performance analysis of our methodology is then compared with other detection techniques previously used in the literature review, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
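The DTW-based identification step can be illustrated with the following minimal sketch (Python/NumPy); the appliance signatures and the detected segment are made-up templates, not data from LPG or REDD.

```python
import numpy as np

# The distance between a detected power segment and each stored appliance signature
# is computed with dynamic time warping; the closest signature gives the prediction.
def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

signatures = {                       # 1-minute sampled power templates in watts (assumed)
    "fridge": np.array([0, 90, 95, 95, 90, 0], float),
    "kettle": np.array([0, 2000, 2000, 0], float),
    "washer": np.array([0, 300, 2100, 2100, 400, 300, 0], float),
}
segment = np.array([0, 85, 96, 94, 88, 0], float)          # detected event window

best = min(signatures, key=lambda k: dtw(segment, signatures[k]))
print("identified appliance:", best)
```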
Procedia PDF Downloads 75
1704 Contribution of NLRP3 Inflammasome to the Protective Effect of 5,14-HEDGE, A 20-HETE Mimetic, against LPS-Induced Septic Shock in Rats
Authors: Bahar Tunctan, Sefika Pinar Kucukkavruk, Meryem Temiz-Resitoglu, Demet Sinem Guden, Ayse Nihal Sari, Seyhan Sahan-Firat, Mahesh P. Paudyal, John R. Falck, Kafait U. Malik
Abstract:
We hypothesized that 20-hydroxyeicosatetraenoic acid (20-HETE) mimetics such as N-(20-hydroxyeicosa-5[Z],14[Z]-dienoyl)glycine (5,14-HEDGE) may be beneficial for preventing mortality due to inflammation induced by lipopolysaccharide (LPS). This study aims to assess the effect of 5,14-HEDGE on the LPS-induced changes in the nucleotide binding domain and leucine-rich repeat protein 3 (NLRP3)/apoptosis-associated speck-like protein containing a caspase activation and recruitment domain (ASC)/pro-caspase-1 inflammasome. Rats were injected with saline (4 ml/kg) or LPS (10 mg/kg) at time 0. Blood pressure and heart rate were measured using a tail-cuff device. 5,14-HEDGE (30 mg/kg) was administered to rats 1 h after injection of saline or LPS. The rats were sacrificed 4 h after saline or LPS injection, and the kidney, heart, thoracic aorta, and superior mesenteric artery were isolated for measurement of caspase-1/11 p20, NLRP3, ASC, and β-actin proteins as well as interleukin-1β (IL-1β) levels. Blood pressure decreased by 33 mmHg and heart rate increased by 63 bpm in the LPS-treated rats. In the LPS-treated rats, tissue protein expression of caspase-1/11 p20, NLRP3, and ASC, in addition to IL-1β levels, was increased. 5,14-HEDGE prevented the LPS-induced changes. Our findings suggest that inhibition of the renal, cardiac, and vascular formation/activity of the NLRP3/ASC/pro-caspase-1 inflammasome is involved in the protective effect of 5,14-HEDGE against LPS-induced septic shock in rats. This work was financially supported by Mersin University (2015-AP3-1343) and USPHS NIH (PO1 HL034300).
Keywords: 5,14-HEDGE, lipopolysaccharide, NLRP3, inflammasome, septic shock
Procedia PDF Downloads 293
1703 Mapping Intertidal Changes Using Polarimetry and Interferometry Techniques
Authors: Khalid Omari, Rene Chenier, Enrique Blondel, Ryan Ahola
Abstract:
Northern Canadian coasts have vulnerable and very dynamic intertidal zones, with very high tides occurring in several areas. The impact of climate change presents challenges not only for maintaining this biodiversity but also for navigation safety adaptation due to the high sediment mobility in these coastal areas. Thus, frequent mapping of shorelines and intertidal changes is of high importance. To help in quantifying the changes in these fragile ecosystems, remote sensing provides practical monitoring tools at local and regional scales. Traditional methods based on high-resolution optical sensors are often used to map intertidal areas by benefiting from the spectral response contrast of intertidal classes in the visible, near- and mid-infrared bands. Tidal areas are highly reflective in the visible bands, mainly because of the presence of fine sand deposits. However, getting cloud-free optical data that coincide with low tides in intertidal zones in northern regions is very difficult. Alternatively, the all-weather capability and daylight independence of microwave remote sensing using synthetic aperture radar (SAR) can offer valuable geophysical parameters with a high-frequency revisit over intertidal zones. Multi-polarization SAR parameters have been used successfully in mapping intertidal zones using incoherent target decomposition. Moreover, the crustal displacements caused by ocean tide loading may reach several centimeters, which can be detected and quantified with differential interferometric synthetic aperture radar (DInSAR). Soil moisture change has a significant impact on both the coherence and the backscatter. For instance, an increase in backscatter intensity associated with low coherence is an indicator of abrupt surface changes. In this research, we present preliminary results obtained from our investigation of the potential of fully polarimetric Radarsat-2 data for mapping an intertidal zone located at Tasiujaq on the south-west shore of Ungava Bay, Quebec. Using the repeat-pass cycle of Radarsat-2, multiple seasonal fine quad (FQ14W) images were acquired over the site between 2016 and 2018. Only 8 images corresponding to low tide conditions are selected and used to build an interferometric stack of data. The observed displacements along the line of sight generated using HH and VV polarization are compared with the changes noticed using the Freeman-Durden polarimetric decomposition and the Touzi degree of polarization extrema. Results show the consistency of both approaches in their ability to monitor changes in intertidal zones.
Keywords: SAR, degree of polarization, DInSAR, Freeman-Durden, polarimetry, Radarsat-2
Procedia PDF Downloads 136
1702 Economic Evaluation of Biogas and Biomethane from Animal Manure
Authors: Shahab Shafayyan, Tara Naderi
Abstract:
Biogas is the product of the decomposition of organic materials. A variety of sources, including animal wastes, municipal solid wastes, sewage and agricultural wastes, may be used to produce biogas in an anaerobic process. The main component of biogas is methane, which can be used directly in a variety of ways, such as for heating and as fuel; this is very common in a number of countries, such as China and India. In this article, the cost of producing biogas from animal manure, and from its refined form, biomethane, has been studied, and it is shown that biomethane can become a cost-competitive alternative to natural gas in the near future. The cost of purifying biogas to biomethane is more than three times the cost of biogas production for an average unit. Biomethane production costs are about $9/MMBTU for a small unit and about $5.9/MMBTU for an average unit.
Keywords: biogas, biomethane, anaerobic digestion, economic evaluation
Procedia PDF Downloads 484
1701 A Comprehensive Study and Evaluation on Image Fashion Features Extraction
Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen
Abstract:
Clothing fashion represents a human's aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects the development of status in society, humanity, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines. Even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related problem: what image feature best describes clothing fashion? To address this issue, we have designed and evaluated various image features, ranging from traditional low-level hand-crafted features to mid-level style awareness features to various currently popular deep neural network-based features, which have shown state-of-the-art performance in various vision tasks. In summary, we tested the following 9 feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple layer combination (CNNs&MLC) and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning based feature representations far outperform traditional hand-crafted feature representations. Additionally, among all deep learning based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
Keywords: convolutional neural network, feature representation, image processing, machine modelling
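A hedged sketch of one deep-feature baseline (Python/PyTorch) is given below: an ImageNet-pretrained ResNet-50 used as a fixed extractor with cosine-similarity retrieval; the file names are hypothetical, and this stands in for only one of the nine representations, not the exact networks trained in the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Standard ImageNet preprocessing for the pretrained backbone
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

# Requires a recent torchvision for the weights enum; older versions use pretrained=True
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()          # drop the classifier, keep 2048-d features
backbone.eval()

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(x), dim=1)

# Hypothetical usage: rank gallery images by cosine similarity to the query feature
# query = embed("query_dress.jpg")
# gallery = torch.cat([embed(p) for p in ["a.jpg", "b.jpg"]])
# print((gallery @ query.T).squeeze(1).argsort(descending=True))
```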
Procedia PDF Downloads 138
1700 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption towards energy savings and energy efficiency. Non Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection, feature extraction, then general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of the household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes its state. In order to fit with the capabilities of practical existing smart meters, we work on low sampling data with a frequency of (1/60) Hz. The data is simulated on the Load Profile Generator software (LPG), which was not previously taken into consideration for NILM purposes in the literature. LPG is a numerical software that uses behaviour simulation of people inside the house to generate residential energy consumption data. The proposed event detection method targets low consumption loads that are difficult to detect. Also, it facilitates the extraction of specific features used for general appliance modeling. In addition to this, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, there exist few unsupervised techniques employed with low sampling data in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector for the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, those formed signatures are used to tune general model types for appliance identification using unsupervised algorithms. This method is evaluated using both simulated data on LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics using confusion matrix based metrics, considering accuracy, precision, recall and error-rate. The performance analysis of our methodology is then compared with other detection techniques previously used in the literature review, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non intrusive load monitoring, events detection, unsupervised techniques
Procedia PDF Downloads 80
1699 Asymptotic Spectral Theory for Nonlinear Random Fields
Authors: Karima Kimouche
Abstract:
In this paper, we consider the asymptotic problems in spectral analysis of stationary causal random fields. We impose conditions only involving (conditional) moments, which are easily verifiable for a variety of nonlinear random fields. Limiting distributions of periodograms and smoothed periodogram spectral density estimates are obtained and applications to the spectral domain bootstrap are given.
Keywords: spatial nonlinear processes, spectral estimators, GMC condition, bootstrap method
Procedia PDF Downloads 4481698 Flame Retardant Study of Methylol Melamine Phosphate-Treated Cotton Fibre
Authors: Nurudeen Afolami Ayeni, Kasali Bello
Abstract:
Methylolmelamine with an increasing degree of methylol substitution, together with its phosphate derivatives, was used to resinate cotton fabric (CF). The resination was carried out at different curing times and curing temperatures. Generally, the results show a reduction in the flame propagation rate of the treated fabrics compared to the untreated cotton fabric. While the flame retardancy of the methylolmelamine-treated fibre can be attributed to the degree of crosslinking of the fibre-resin network, which promotes stability, the methylolmelamine phosphate-treated fabrics show better retardancy due to the intumescent action of the phosphate resin upon decomposition in the resin-fabric network. Keywords: cotton fabric, flame retardant, methylolmelamine, crosslinking, resination
Procedia PDF Downloads 3841697 Preparation and Characterization of Polyaniline (PANI) – Platinum Nanocomposite
Authors: Kumar Neeraj, Ranjan Haldar, Ashok Srivastava
Abstract:
Polyaniline is used in light-emitting devices (LEDs), televisions, cellular telephones, automotive parts, corrosion-resistant coatings, and actuators, and it can be fabricated into micro- and nano-devices. Its electrical conductivity can be increased by the introduction of metal nanoparticles. In the present study, platinum nanoparticles have been utilized to achieve these improved properties. Polyaniline and the Pt-polyaniline composite are synthesized by chemical routes. The samples characterized by X-ray diffraction show the amorphous nature of polyaniline and of the Pt-polyaniline composite; the Bragg diffraction peaks correspond to the platinum nanoparticles, and thermogravimetric analysis indicates its decomposition at a certain temperature. The current-potential characteristics of the samples are also studied and indicate a significant increase in conductivity after the introduction of Pt nanoparticles into the polyaniline (PANI) matrix. Keywords: polyaniline, XRD and platinum nanoparticles, characterization, pharmaceutical sciences
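As a loosely related illustration of how a conductivity increase can be read from current-potential data, the sketch below fits the ohmic slope of assumed I-V readings and converts it to conductivity using an assumed pellet geometry; none of the numbers come from the paper.

```python
# Minimal sketch, assuming fictitious I-V readings and pellet geometry:
# estimate DC conductivity sigma = (dI/dV) * L / A for PANI and Pt-PANI.
import numpy as np

# Assumed pellet geometry: thickness L (m) and cross-sectional area A (m^2).
L = 1.0e-3
A = 1.3e-4

# Assumed current-potential readings (volts, amperes); values are illustrative.
V = np.array([0.5, 1.0, 1.5, 2.0])
I_pani = np.array([0.8e-6, 1.6e-6, 2.4e-6, 3.2e-6])
I_comp = np.array([6.0e-6, 12.1e-6, 18.0e-6, 24.2e-6])

def conductivity(V, I, L, A):
    # Ohmic slope dI/dV from a least-squares fit (siemens), then sigma = (1/R) * L / A.
    slope = np.polyfit(V, I, 1)[0]
    return slope * L / A  # S/m

print("sigma PANI    :", conductivity(V, I_pani, L, A), "S/m")
print("sigma Pt-PANI :", conductivity(V, I_comp, L, A), "S/m")
```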
Procedia PDF Downloads 5401696 Hot Deformability of Si-Steel Strips Containing Al
Authors: Mohamed Yousef, Magdy Samuel, Maha El-Meligy, Taher El-Bitar
Abstract:
The present work deals with a 2% Si-steel alloy containing 0.05% C as well as 0.85% Al. The alloy under investigation is intended for electrical transformer applications. A heating (expansion)-cooling (contraction) dilation investigation was executed to detect the α, α+γ, and γ transformation temperatures at the inflection points of the dilation curve. On heating, primary α was detected in the range between room temperature and 687 °C. The α+γ domain was detected between 687 °C and 746 °C. The γ phase exists in the closed γ region between 746 °C and 1043 °C. The α-phase domain appears again between 1043 °C and 1105 °C, followed by secondary α at temperatures higher than 1105 °C. A physical simulation of thermo-mechanical processing of the as-cast alloy was carried out. The simulation took into consideration the parameters of a hot flat rolling pilot plant and was executed on a thermo-mechanical simulator (Gleeble 3500). The process was designed to include seven consecutive passes: the first pass represents the roughing stage, while the remaining six passes represent the finish rolling stage. The whole process was executed in the temperature range from 1100 °C to 900 °C. The amount of strain starts at 23.5% in the roughing pass and decreases continuously to reach 7.5% at the last finishing pass. The flow curve of the alloy can be extracted from the stress-strain curves of the simulated passes. It shows hardening of the alloy from one pass to the next up to pass no. 6, as a result of the decreasing deformation temperature and the increasing cumulative strain. After pass no. 6, the deformation process allows dynamic recrystallization to appear, where the Zener-Hollomon parameter (Z-parameter) would be high. Keywords: Si-steel, hot deformability, critical transformation temperature, physical simulation, thermo-mechanical processing, flow curve, dynamic softening
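For readers unfamiliar with the Z-parameter mentioned above, the sketch below evaluates the Zener-Hollomon parameter Z = (strain rate) x exp(Q/(RT)) over the finishing-pass temperature range; the strain rate and apparent activation energy are assumed values, not results reported in this study.

```python
# Minimal sketch, assuming an illustrative strain rate and activation energy:
# Zener-Hollomon parameter Z = strain_rate * exp(Q / (R * T)) versus temperature.
import math

R = 8.314           # gas constant, J/(mol*K)
Q = 280e3           # assumed apparent activation energy for hot deformation, J/mol
strain_rate = 5.0   # assumed mean strain rate per pass, 1/s

for T_c in (1100, 1000, 950, 900):        # deformation temperatures, degrees C
    T = T_c + 273.15                      # absolute temperature, K
    Z = strain_rate * math.exp(Q / (R * T))
    print(f"T = {T_c:4d} C  ->  Z = {Z:.3e} 1/s")
```

The printout shows Z growing as the deformation temperature falls toward the last passes, which is consistent with the high Z-parameter noted after pass no. 6.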
Procedia PDF Downloads 2441695 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum time step accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that the stability of the complete FDTD procedure for the field update depends on factors other than just the stability of the Yee algorithm, because an FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations at ultra-wide frequencies. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, the aim is to find an ideal cell size so that the analysis in the FDTD environment is in greater agreement with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The models are validated by comparing the results obtained with the FDTD method, in terms of electric field values and currents in the components, against analytical results using circuit parameters. Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis
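As a small illustration of the stability constraint that drives the parametric cell-size analysis, the sketch below evaluates the Courant limit on the time step for a uniform 3-D Yee grid as the cell enclosing a lumped component is refined; the cell sizes are assumed values and the sketch covers only the stability-criterion part of the procedure, not the full LE-FDTD update equations.

```python
# Minimal sketch, assuming illustrative cell sizes: Courant limit on the FDTD
# time step for a uniform 3-D Yee grid as the cell around a lumped element shrinks.
import math

c0 = 299_792_458.0  # speed of light in vacuum, m/s

def courant_dt(dx, dy, dz):
    """Maximum stable time step for a uniform 3-D Yee grid (free space)."""
    return 1.0 / (c0 * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# Parametric sweep: refine the cell that discretizes the lumped component from 1 mm to 0.1 mm.
for cell in (1.0e-3, 0.5e-3, 0.25e-3, 0.1e-3):
    dt = courant_dt(cell, cell, cell)
    print(f"cell = {cell * 1e3:5.2f} mm  ->  dt_max = {dt * 1e12:6.3f} ps")
```

The sweep makes explicit the trade-off behind the parametric analysis: finer cells resolve the lumped component better but force a smaller time step and hence a longer simulation.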
Procedia PDF Downloads 151