Search results for: computational Physics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2412

462 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI

Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil

Abstract:

The paper is devoted to numerically investigating the influence of air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics (CFD) model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that provides the direction and magnitude of the air flow into the room is developed. The effect of air distribution on thermal comfort parameters was investigated by changing the air supply diffuser type, angle, and velocity; the locations and numbers of the air supply diffusers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software Fluent 6.3 is used to solve the differential equations governing the conservation of mass, momentum (in three directions), and energy for the air flow distribution. Turbulence effects of the flow are represented by the standard k-ε two-equation turbulence model, one of the most widespread turbulence models for industrial applications. The basic parameters used for the numerical predictions of indoor air distribution and thermal comfort are air dry-bulb temperature, air velocity, relative humidity, and turbulence parameters. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model, and the PPD (Predicted Percentage Dissatisfied) model; the PMV and PPD were estimated using Fanger's model.
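
As a minimal sketch of the comfort indices involved, the snippet below computes PPD from PMV via the ISO 7730 relation and an ADPI-style statistic from sampled temperatures and velocities; the effective-draft-temperature limits and the sample values are assumptions for illustration, not values from this study.

```python
import numpy as np

def ppd_from_pmv(pmv):
    """ISO 7730 relation between PMV and PPD (%)."""
    return 100.0 - 95.0 * np.exp(-(0.03353 * pmv**4 + 0.2179 * pmv**2))

def adpi(t_local, v_local, t_control, edt_lo=-1.5, edt_hi=1.0, v_max=0.35):
    """ADPI: percentage of sampled points whose effective draft temperature
    (EDT) and local speed fall inside the comfort window. The EDT limits
    vary slightly between references; the values here are assumptions."""
    edt = (t_local - t_control) - 8.0 * (v_local - 0.15)  # deg C, m/s
    ok = (edt > edt_lo) & (edt < edt_hi) & (v_local < v_max)
    return 100.0 * np.mean(ok)

# Hypothetical samples taken from a CFD solution in the occupied zone
t = np.array([24.1, 23.6, 25.0, 24.4, 23.9])   # deg C
v = np.array([0.12, 0.18, 0.25, 0.10, 0.38])   # m/s
print(f"ADPI = {adpi(t, v, t_control=24.0):.0f} %")
print(f"PPD at PMV = 0.5: {ppd_from_pmv(0.5):.1f} %")
```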

Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency

Procedia PDF Downloads 395
461 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models (DPM) achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by examining the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
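
To illustrate the frequency-domain trick underlying this speedup, here is a minimal sketch (not the authors' code) of computing filter responses via the FFT; the sizes and random data are placeholders.

```python
import numpy as np
from numpy.fft import rfft2, irfft2

def correlate_fft(feature, filt):
    """Cross-correlate a feature map with a filter via the FFT.
    Equivalent to sliding the filter spatially (the 'valid' part is
    extracted below), but O(N log N) instead of O(N*K) per output."""
    H, W = feature.shape
    fh, fw = filt.shape
    # Convolving with the flipped filter equals cross-correlation.
    kernel = np.zeros_like(feature)
    kernel[:fh, :fw] = filt[::-1, ::-1]
    full = irfft2(rfft2(feature) * rfft2(kernel), s=feature.shape)
    return full[fh - 1:H, fw - 1:W]          # 'valid' responses

feature = np.random.rand(200, 200)
filt = np.random.rand(6, 6)
resp = correlate_fft(feature, filt)

# Spot-check against direct spatial correlation at one location
i, j = 50, 80
direct = np.sum(feature[i:i + 6, j:j + 6] * filt)
assert np.isclose(resp[i, j], direct)
```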

Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time

Procedia PDF Downloads 260
460 Improvement Performances of the Supersonic Nozzles at High Temperature Type Minimum Length Nozzle

Authors: W. Hamaidia, T. Zebbiche

Abstract:

This paper presents the design of axisymmetric supersonic nozzles that accelerate the flow to a desired exit Mach number while keeping the nozzle weight small and the thrust high. The nozzle gives a parallel and uniform flow at the exit section and is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line, while the subsonic portion is used to produce a sonic flow at the throat. Such a nozzle is called a minimum-length nozzle. The study is carried out at high temperature, below the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim is to improve performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle mass. The variation of the specific heats with temperature is taken into account. The design is made by the method of characteristics, and a finite-difference method with a predictor-corrector algorithm is used to solve the resulting nonlinear algebraic equations numerically. The application is for air. All of the obtained results depend on three parameters: the exit Mach number, the stagnation temperature, and the chosen characteristics mesh. A numerical simulation of the nozzle in CFD-FASTRAN was performed to determine and confirm the necessary design parameters.
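
In the calorically perfect baseline (the limit that the high-temperature model generalizes), the maximum initial wall angle of a minimum-length nozzle is half the Prandtl-Meyer angle at the exit Mach number; a small sketch, with γ = 1.4 and Me = 3 as assumed inputs:

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M) in radians (calorically perfect gas)."""
    a = (gamma + 1.0) / (gamma - 1.0)
    return (math.sqrt(a) * math.atan(math.sqrt((M * M - 1.0) / a))
            - math.atan(math.sqrt(M * M - 1.0)))

Me = 3.0                                # target exit Mach number
theta_max = 0.5 * prandtl_meyer(Me)     # max expansion wall angle of a MLN
print(f"nu(Me) = {math.degrees(prandtl_meyer(Me)):.2f} deg, "
      f"theta_wall_max = {math.degrees(theta_max):.2f} deg")
```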

Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, nozzle mass, specific heat at constant pressure, air, error

Procedia PDF Downloads 134
459 Performance Comparison of Deep Convolutional Neural Networks for Binary Classification of Fine-Grained Leaf Images

Authors: Kamal KC, Zhendong Yin, Dasen Li, Zhilu Wu

Abstract:

Intra-plant disease classification based on leaf images is a challenging computer vision task due to similarities in the texture, color, and shape of leaves with only slight variation in leaf spots, and due to external environmental changes such as lighting and background noise. The deep convolutional neural network (DCNN) has proven to be an effective tool for binary classification. In this paper, two methods for binary classification of diseased plant leaves using DCNNs are presented: models created from scratch and transfer learning. Our main contribution is a thorough evaluation of 4 networks created from scratch and transfer learning of 5 pre-trained models. Training and testing of these models were performed on a plant leaf image dataset of 16 distinct classes, containing a total of 22,265 images from 8 different plants, each consisting of a pair of healthy and diseased leaf classes. We introduce a deep CNN model, Optimized MobileNet. This model, with the depthwise separable convolution as a building block, attained an average test accuracy of 99.77%. We also present a fine-tuning method based on the concept of a convolutional block, i.e., a collection of deep neural layers. Fine-tuned models proved to be efficient in terms of accuracy and computational cost; fine-tuned MobileNet achieved an average test accuracy of 99.89% on the 8 pairs of [healthy, diseased] leaf image sets.
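
A minimal transfer-learning sketch in Keras, assuming a binary healthy/diseased target and an ImageNet-pretrained MobileNet; the unfrozen layer count and hyperparameters are illustrative, not the paper's settings:

```python
import tensorflow as tf

# Pre-trained MobileNet without its ImageNet classifier head
base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # healthy vs diseased
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Fine-tuning: unfreeze the last block of layers and retrain with a
# lower learning rate (the layer count here is illustrative).
base.trainable = True
for layer in base.layers[:-12]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```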

Keywords: deep convolutional neural network, depthwise separable convolution, fine-grained classification, MobileNet, plant disease, transfer learning

Procedia PDF Downloads 166
458 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors make blood supply chain networks a complex yet vital problem for a regional blood bank. These factors include rapidly increasing demand, the criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs are the allocation cost, transportation costs, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for several Ontario cities (demand nodes) are used to test the developed algorithm. The Sitation software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
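
For a flavor of the underlying p-median computation, here is a small greedy baseline on Euclidean distances; the paper itself uses Sitation, Lagrangian relaxation, and branch-and-bound, and the coordinates below are random stand-ins for demand nodes:

```python
import numpy as np

def p_median_greedy(dist, p):
    """Greedy heuristic for the p-median problem: repeatedly open the
    facility that most reduces total client-to-nearest-facility cost.
    dist[i, j] = cost of serving client i from candidate site j."""
    n_clients, n_sites = dist.shape
    open_sites, best = [], np.full(n_clients, np.inf)
    for _ in range(p):
        costs = [np.minimum(best, dist[:, j]).sum() for j in range(n_sites)]
        j_star = int(np.argmin(costs))
        open_sites.append(j_star)
        best = np.minimum(best, dist[:, j_star])
    return open_sites, best.sum()

rng = np.random.default_rng(1)
clients = rng.uniform(0, 100, size=(30, 2))   # hypothetical demand nodes
sites = clients.copy()                        # sites co-located with clients
dist = np.linalg.norm(clients[:, None, :] - sites[None, :, :], axis=2)
opened, total = p_median_greedy(dist, p=3)
print(opened, round(total, 1))
```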

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 488
457 TAXAPRO, A Streamlined Pipeline to Analyze Shotgun Metagenomes

Authors: Sofia Sehli, Zainab El Ouafi, Casey Eddington, Soumaya Jbara, Kasambula Arthur Shem, Islam El Jaddaoui, Ayorinde Afolayan, Olaitan I. Awe, Allissa Dillman, Hassan Ghazal

Abstract:

The ability to promptly sequence whole genomes at a relatively low cost has revolutionized the way we study the microbiome. Microbiologists are no longer limited to studying what can be grown in a laboratory and instead are given the opportunity to rapidly identify the makeup of microbial communities in a wide variety of environments. Analyzing whole genome sequencing (WGS) data is a complex process that involves multiple moving parts and might be rather unintuitive for scientists who do not typically work with this type of data. Thus, to help lower the barrier for less computationally inclined individuals, TAXAPRO was developed at the first Omics Codeathon, held virtually by the African Society for Bioinformatics and Computational Biology (ASBCB) in June 2021. TAXAPRO is an advanced metagenomics pipeline that accurately assembles organelle genomes from whole-genome sequencing data. TAXAPRO seamlessly combines WGS analysis tools to create a pipeline that automatically processes raw WGS data and presents organism abundance information in both a tabular and a graphical format. TAXAPRO was evaluated using COVID-19 patient gut microbiome data. Analysis performed by TAXAPRO demonstrated a high abundance of Clostridia and Bacteroidia and a low abundance of Proteobacteria relative to other taxa in the gut microbiome of patients hospitalized with COVID-19, consistent with the original findings derived using a different analysis methodology. This provides crucial evidence that the TAXAPRO workflow delivers reliable organism abundance information overnight, without the hassle of performing the analysis manually.
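
The tabular-plus-graphical output stage can be pictured with a toy sketch; the counts below are hypothetical, and this is not TAXAPRO's actual code:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical read counts per taxon from an upstream classification step
counts = pd.DataFrame({
    "sample_1": {"Clostridia": 5200, "Bacteroidia": 4100, "Proteobacteria": 300},
    "sample_2": {"Clostridia": 4800, "Bacteroidia": 3900, "Proteobacteria": 450},
})
rel = 100 * counts / counts.sum()            # relative abundance (%)
rel.round(2).to_csv("abundance_table.csv")   # tabular output
rel.T.plot(kind="bar", stacked=True, ylabel="Relative abundance (%)")
plt.tight_layout()
plt.savefig("abundance_plot.png")            # graphical output
```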

Keywords: metagenomics, shotgun metagenomic sequence analysis, COVID-19, pipeline, bioinformatics

Procedia PDF Downloads 186
456 Computation of Residual Stresses in Human Face Due to Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained considerable attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design and the optimization of surgical operations. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically growth and remodeling are among the main sources. Extracting body organs from medical imaging does not produce any information regarding the residual stresses existing in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth, and ignoring such residual stresses might cause erroneous results in numerical simulations. Accounting for residual stresses due to tissue growth can therefore improve the accuracy of mechanical analysis results. In this paper, we implement a computational framework based on fixed-point iteration to determine the residual stresses due to growth. Using nonlinear continuum mechanics and the concept of a fictitious configuration, we find the unknown stress-free reference configuration that is necessary for mechanical analysis. To illustrate the method, we apply it to a finite element model of a healthy human face whose geometry has been extracted from medical images. We have computed the distribution of residual stress in facial tissues, which counteracts the effect of gravity and keeps the tissues firm. Tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Considering these stresses has important applications in maxillofacial surgery: it helps surgeons predict the changes after surgical operations and their consequences.
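
The fixed-point idea can be sketched generically: iterate the reference geometry until the forward (loaded) solve reproduces the imaged geometry. The toy forward_solve below stands in for the real nonlinear finite element solve and is purely illustrative:

```python
import numpy as np

def forward_solve(X):
    """Placeholder for the nonlinear FE solve: given reference nodal
    coordinates X, return the deformed coordinates under gravity.
    A toy 'sag' stands in for the real finite element model here."""
    x = X.copy()
    x[:, 2] -= 0.02 * (1.0 + X[:, 2])   # toy gravity-induced sag
    return x

def find_reference(x_target, tol=1e-10, max_iter=100):
    """Fixed-point iteration X_{k+1} = X_k + (x_target - F(X_k))."""
    X = x_target.copy()                  # imaged geometry as first guess
    for k in range(max_iter):
        residual = x_target - forward_solve(X)
        X += residual
        if np.linalg.norm(residual) < tol:
            break
    return X, k

x_img = np.random.rand(50, 3)            # nodal coordinates from imaging
X_ref, iters = find_reference(x_img)
print(f"converged in {iters} iterations")
```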

Keywords: growth, soft tissue, residual stress, finite element method

Procedia PDF Downloads 336
455 Review of Strategies for Hybrid Energy Storage Management System in Electric Vehicle Application

Authors: Kayode A. Olaniyi, Adeola A. Ogunleye, Tola M. Osifeko

Abstract:

Electric vehicles (EVs) appear to be gaining increasing patronage as a feasible alternative to internal combustion engine vehicles (ICEVs), owing to their low emissions and high operating efficiency. EV energy storage systems are required to handle high energy and power density capacities, constrained by limited space, operating temperature, weight, and cost. The choice of strategies for energy storage evaluation, monitoring, and control remains a challenging task. This paper presents a review of various energy storage technologies and recent research in battery evaluation techniques used in EV applications. It also underscores strategies for hybrid energy storage management and control schemes for the improvement of EV stability and reliability. The study reveals that despite the advances recorded in battery technologies, there is still no cell which possesses both the optimum power and energy densities, among other requirements, for EV application. However, combining two or more energy storage devices as a hybrid, allowing the advantageous attributes of each device to be utilized, is a promising solution. The review also reveals that state of charge (SoC) is the most crucial quantity for battery state estimation. The conventional method of SoC measurement is, however, questioned in the literature, and adaptive algorithms that include all models of disturbances are being proposed. The review further suggests that heuristic-based approaches are commonly adopted in the development of strategies for hybrid energy storage system management. The alternative, optimization-based approach is found to be more accurate but is memory- and computation-intensive and as such is not recommended for most real-time applications.
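
As a minimal sketch of conventional SoC estimation, the snippet below implements coulomb counting with an optional complementary correction toward a voltage-based estimate; the blend factor and battery parameters are illustrative assumptions, far simpler than the adaptive observers the review discusses:

```python
import numpy as np

def soc_coulomb_counting(i_amp, dt_s, soc0, capacity_ah,
                         ocv_soc_estimate=None, alpha=0.995):
    """Coulomb counting with an optional complementary correction from a
    voltage-based (OCV) SoC estimate; a simplified stand-in for the
    adaptive algorithms discussed in the literature."""
    soc, trace = soc0, []
    for k, i in enumerate(i_amp):
        soc -= i * dt_s / (capacity_ah * 3600.0)    # discharge current > 0
        if ocv_soc_estimate is not None:
            soc = alpha * soc + (1.0 - alpha) * ocv_soc_estimate[k]
        trace.append(soc)
    return np.array(trace)

current = np.full(3600, 10.0)    # 10 A discharge for one hour
soc = soc_coulomb_counting(current, dt_s=1.0, soc0=0.9, capacity_ah=50.0)
print(f"final SoC = {soc[-1]:.3f}")   # 0.9 - 10/50 = 0.7
```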

Keywords: battery state estimation, hybrid electric vehicle, hybrid energy storage, state of charge, state of health

Procedia PDF Downloads 210
454 Reduction of Plutonium Production in Heavy Water Research Reactor: A Feasibility Study through Neutronic Analysis Using MCNPX2.6 and CINDER90 Codes

Authors: H. Shamoradifar, B. Teimuri, P. Parvaresh, S. Mohammadi

Abstract:

One of the main characteristics of heavy water moderated reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a heavy water research reactor. Among the many ways of reducing plutonium production in a heavy water reactor, this research focuses on changing the fuel from natural uranium to mixed thorium-uranium fuel. The main fissile nucleus in thorium-uranium fuels is U-233, which is produced after neutron absorption by Th-232, so thorium-uranium fuels have some known advantages compared to uranium fuels. Accordingly, four thorium-uranium fuels with different composition ratios were chosen for our simulations: a) 10% UO2-90% ThO2 (enrichment 20%); b) 15% UO2-85% ThO2 (enrichment 10%); c) 30% UO2-70% ThO2 (enrichment 5%); d) 35% UO2-65% ThO2 (enrichment 3.7%). Natural uranium oxide (UO2) is considered the reference fuel; in other words, all of the calculated data are compared with the corresponding data for uranium fuel. Neutronic parameters were calculated and used as the comparison parameters. All calculations were performed by Monte Carlo (MCNPX2.6) steady-state reaction rate calculations linked to a deterministic depletion calculation (CINDER90). The obtained computational data showed that thorium-uranium fuels with the four different fissile composition ratios can satisfy the safety and operating requirements for a heavy water research reactor. Furthermore, thorium-uranium fuels have very good proliferation resistance and consume less fissile material than uranium fuels over the same reactor operation time. Using mixed thorium-uranium fuels reduced the long-lived, highly radiotoxic α-emitting wastes and the radiotoxicity level of the spent fuel.
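
The breeding chain behind this fuel choice, Th-232(n,γ) → Pa-233 → U-233, can be sketched as a one-group depletion calculation; the flux and cross-section values below are rough textbook magnitudes (assumptions), not the MCNPX/CINDER90 data used in the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-group constants (illustrative values, not evaluated nuclear data)
phi = 1e14             # neutron flux, n/cm^2/s
sig_c_th = 7.4e-24     # Th-232 capture cross-section, cm^2 (~7.4 b)
sig_a_u3 = 5.8e-22     # U-233 absorption cross-section, cm^2 (~580 b)
lam_pa = np.log(2) / (26.97 * 86400)   # Pa-233 decay constant, 1/s

def chain(t, y):
    n_th, n_pa, n_u3 = y
    # Short-lived Th-233 (22 min) is lumped into the capture step.
    return [-sig_c_th * phi * n_th,
            sig_c_th * phi * n_th - lam_pa * n_pa,
            lam_pa * n_pa - sig_a_u3 * phi * n_u3]

y0 = [2.3e22, 0.0, 0.0]              # initial Th-232 atoms/cm^3
t_end = 2 * 365 * 86400              # two years of irradiation
sol = solve_ivp(chain, (0, t_end), y0, rtol=1e-8)
print(f"U-233 built up: {sol.y[2, -1]:.3e} atoms/cm^3")
```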

Keywords: heavy water reactor, burnup, minor actinides, neutronic calculation

Procedia PDF Downloads 232
453 A Framework for Auditing Multilevel Models Using Explainability Methods

Authors: Debarati Bhaumik, Diptish Dey

Abstract:

Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment, within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, for each of these three aspects, and a traffic-light risk assessment method is coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems using multilevel binomial classification models at businesses. It will also benefit businesses deploying multilevel models to be future-proof and aligned with the European Commission's proposed Regulation on Artificial Intelligence.
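
The abstract does not spell out the PoCE formula; purely as an illustration of the kind of KPI involved, the sketch below scores an explainer's attributions against known coefficients by sign and rank agreement (the definition, names, and values are assumptions, and the paper's exact KPI may differ):

```python
import numpy as np

def poce(true_coefs, explained_importances):
    """Illustrative 'Percentage of Correct Explanations': the share of
    features whose sign AND rank position agree between the model's
    known coefficients and an explainer's attributions."""
    true_rank = np.argsort(-np.abs(true_coefs))
    expl_rank = np.argsort(-np.abs(explained_importances))
    rank_ok = true_rank == expl_rank
    sign_ok = np.sign(true_coefs) == np.sign(explained_importances)
    return 100.0 * np.mean(rank_ok & sign_ok)

beta = np.array([2.0, -1.2, 0.4, 0.1])          # ground-truth coefficients
attrib = np.array([1.7, -0.9, 0.5, -0.05])      # hypothetical attributions
print(f"PoCE = {poce(beta, attrib):.0f} %")     # 75 %: last sign flips
```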

Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics

Procedia PDF Downloads 71
452 Search for APN Permutations in Rings ℤ_2×ℤ_2^k

Authors: Daniel Panario, Daniel Santana de Freitas, Brett Stevens

Abstract:

Almost perfect nonlinear (APN) permutations, with optimal resistance against differential cryptanalysis, can be sought in several domains. The permutation used in the standard for symmetric cryptography (the AES), for example, is based on a special kind of inversion in GF(2⁸). Although very close to APN (2-uniform), this permutation still contains the value 4 in its differential spectrum, which means that, rigorously, it must be classified as 4-uniform. This fact motivates the search for fully APN permutations in other domains of definition. The extremely high complexity associated with this kind of problem precludes an exhaustive search for an APN permutation over 256 elements without the support of a suitable mathematical structure. On the other hand, in principle, there is nothing to indicate which mathematically structured domains can effectively help the search, and it is necessary to test several domains. In this work, the search for APN permutations in rings ℤ₂×ℤ₂ᵏ is investigated. After a full, exhaustive search with k=2 and k=3, all possible APN permutations in those rings were recorded, together with their differential profiles. Some very promising heuristics in these cases were collected so that, when used as a basis to prune backtracking for the same search in ℤ₂×ℤ₈ (search space of size 16! ≈ 2⁴⁴), just a few tenths of a second were enough to produce an APN permutation on a single CPU. Those heuristics were empirically extrapolated so that they could be applied to a backtracking search for APNs over ℤ₂×ℤ₁₆ (search space of size 32! ≈ 2¹¹⁷). The best permutations found in this search were further refined through simulated annealing, with a definition of neighbors suitable to this domain. The best result produced with this scheme was a 3-uniform permutation over ℤ₂×ℤ₁₆ with only 24 values equal to 3 in the differential spectrum (all the other 968 values were less than or equal to 2, as would be the case for an APN permutation). Although far from fully APN, this result is technically better than a 4-uniform permutation and demanded only a few seconds on a single CPU. This is a strong indication that the use of mathematically structured domains, like the rings described in this work, together with heuristics based on smaller cases, can lead to dramatic cuts in the computational resources involved in the search for APN permutations in extremely large domains.
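
Checking a candidate permutation is straightforward; a minimal sketch of the differential-spectrum computation on ℤ₂×ℤ₂ᵏ, where the integer encoding and the random test permutation are illustrative choices:

```python
from itertools import product
import random

def differential_spectrum(perm, k):
    """Differential spectrum of a permutation on the ring Z_2 x Z_2^k.
    Elements are encoded as integers 0..2^(k+1)-1: (a, b) <-> a*2^k + b,
    with componentwise addition as the group operation."""
    n = 1 << (k + 1)
    mod = 1 << k

    def add(u, v):   # (u1+v1 mod 2, u2+v2 mod 2^k)
        return (((u >> k) ^ (v >> k)) << k) | ((u + v) & (mod - 1))

    def neg(u):      # additive inverse
        return ((u >> k) << k) | ((-u) & (mod - 1))

    counts = {}
    for a in range(1, n):
        row = [0] * n
        for x in range(n):
            b = add(perm[add(x, a)], neg(perm[x]))
            row[b] += 1
        for c in row:
            if c:
                counts[c] = counts.get(c, 0) + 1
    return counts  # {value: multiplicity}; APN <=> max value == 2

k = 2
perm = list(range(1 << (k + 1)))
random.seed(0)
random.shuffle(perm)
spec = differential_spectrum(perm, k)
print(spec, "-> uniformity:", max(spec))
```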

Keywords: APN permutations, heuristic searches, symmetric cryptography, S-box design

Procedia PDF Downloads 136
451 Correction Factors for Soil-Structure Interaction Predicted by Simplified Models: Axisymmetric 3D Model versus Fully 3D Model

Authors: Fu Jia

Abstract:

The effects of soil-structure interaction (SSI) are often studied using axisymmetric three-dimensional (3D) models to avoid the high computational cost of the more realistic, fully 3D models, which require 2-3 orders of magnitude more computer time and storage. This paper analyzes the resulting error and presents correction factors for the system frequency, system damping, and peak amplitude of structural response computed by axisymmetric models embedded in a uniform or layered half-space. The results are compared with those for fully 3D rectangular foundations of different aspect ratios. Correction factors are presented for a range of model parameters, such as fixed-base frequency, structure mass, height and length-to-width ratio, foundation embedment, and soil-layer stiffness and thickness. It is shown that the errors are larger for stiffer, taller, and heavier structures, deeper foundations, and deeper soil layers. For example, for a stiff structure like the Millikan Library (NS response; length-to-width ratio 1), the error is 6.5% in system frequency, 49% in system damping, and 180% in peak amplitude. Analysis of a case study shows that the NEHRP-2015 provisions for reduction of base shear force due to SSI effects may be unsafe for some structures and need revision. The presented correction factor diagrams can be used in practical design and other applications.

Keywords: 3D soil-structure interaction, correction factors for axisymmetric models, length-to-width ratio, NEHRP-2015 provisions for reduction of base shear force, rectangular embedded foundations, SSI system frequency, SSI system damping

Procedia PDF Downloads 240
450 An Integrated Framework for Wind-Wave Study in Lakes

Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung

Abstract:

Wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., a WSA Section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology is the integration of wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to this core. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g., bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully applied this approach in the Okanagan area for many years.
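
For contrast, the simplified fetch approach that the TT framework improves upon can be written in a few lines; the dimensionless growth coefficients below are typical handbook (SPM/SMB-type) values and are assumptions here:

```python
import math

def fetch_limited_wave(u10, fetch_m, g=9.81):
    """Deep-water, fetch-limited wave growth (coefficients vary between
    sources). Returns significant wave height (m) and peak period (s)."""
    f_hat = g * fetch_m / u10**2                   # dimensionless fetch
    hs = 0.0016 * math.sqrt(f_hat) * u10**2 / g    # gH/U^2 = 0.0016 F^0.5
    tp = 0.286 * f_hat**(1 / 3) * u10 / g          # gT/U  = 0.286 F^(1/3)
    return hs, tp

hs, tp = fetch_limited_wave(u10=15.0, fetch_m=20_000)  # 15 m/s over 20 km
print(f"Hs ~ {hs:.2f} m, Tp ~ {tp:.1f} s")
```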

Keywords: wave modelling, wind-wave, extreme value analysis, marina

Procedia PDF Downloads 62
449 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography

Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld

Abstract:

With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, based on measuring the integral of a single-slice dose profile with a 100 mm long cylindrical ionization chamber (C_a,100 and C_PMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach is to translate smaller detectors through the beam plane and assess the accumulated dose through the integral of the dose profile, which can be done over any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions, known as the MOSkin and developed by the Centre for Medical Radiation Physics at the University of Wollongong, measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or of internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9, and RQT 10. Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the C_PMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter 16 cm) and exposed in axial mode with a collimation of 9 mm, 250 mAs, and 120 kV. The results show that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin, in mV/cGy, was 9.208, 7.691, and 6.723 for the RQT 8, RQT 9, and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of 1.19 among those energies, and the angular dependence was no greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the C_PMMA,100 value were 3.97 and 3.79 cGy, respectively, which are statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
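
The accumulated-dose idea, integrating a translated point-detector profile over a chosen length, reduces to a simple numerical integral; the Gaussian-plus-tails profile below is a hypothetical stand-in for measured data:

```python
import numpy as np

def accumulated_dose(z_mm, d_cgy, length_mm=100.0):
    """Integrate a translated point-detector dose profile over a chosen
    length (trapezoidal rule), analogous to the 100 mm integral that a
    pencil chamber reports."""
    z, d = np.asarray(z_mm), np.asarray(d_cgy)
    m = np.abs(z) <= length_mm / 2.0
    zm, dm = z[m], d[m]
    return np.sum(0.5 * (dm[1:] + dm[:-1]) * np.diff(zm))  # cGy*mm

# Hypothetical single-rotation profile: Gaussian core plus scatter tails
z = np.linspace(-80.0, 80.0, 641)
profile = (0.45 * np.exp(-z**2 / (2 * 6.0**2))
           + 0.02 * np.exp(-np.abs(z) / 30.0))
print(f"integral over 100 mm = {accumulated_dose(z, profile):.2f} cGy*mm")
```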

Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry

Procedia PDF Downloads 293
448 Numerical and Experimental Investigation of Distance Between Fan and Coil Block in a Fin and Tube Air Cooler Heat Exchanger

Authors: Feyza Şahin, Harun Denizli, Mustafa Zabun, Hüseyin Onbaşıoğlu

Abstract:

Heat exchangers are devices widely used to transfer heat between fluids with different temperatures. As a type of heat exchanger, air coolers cool the air passing through the fins of the heat exchanger by transferring heat to the refrigerant in the coil tubes. An assembled fin-and-tube heat exchanger consists of a coil block and a casing with a fan mounted on it. The term "fan hood" is used to define the distance between the fan and the coil block. Air coolers play a crucial role in cooling systems, and their heat transfer performance can vary depending on design parameters related either to the air side or to the internal fluid side. For air-side efficiency, the distance between the fan and the coil block affects performance by creating dead zones at the corners of the casing and a maldistribution of airflow. Therefore, a detailed study of the effect of the fan hood on the evaporator, and of the optimum fan hood distance, is necessary for an efficient air cooler design. This study aims to investigate the fan hood distance in a fin-and-tube air cooler heat exchanger through computational fluid dynamics (CFD) simulations and experimental investigations. CFD simulations will be used to study the airflow within the fan hood; these simulations will provide valuable insights for optimizing the fan hood design. In addition, experimental tests will be carried out to validate the CFD results and to measure the performance of the fan hood under real conditions. The results will help us to understand the effect of fan hood design on evaporator efficiency and contribute to the development of more efficient cooling systems. This study will provide essential information for evaporator design and for improving the energy efficiency of cooling systems.

Keywords: heat exchanger, fan hood, heat exchanger performance, air flow performance

Procedia PDF Downloads 51
447 Assumption of Cognitive Goals in Science Learning

Authors: Mihail Calalb

Abstract:

The aim of this research is to identify ways of achieving sustainable conceptual understanding within science lessons. For this purpose, a set of teaching and learning strategies, part of the theory of visible teaching and learning (VTL), is studied. As a result, a new didactic approach named "learning by being" is proposed, and its correlation with the educational paradigms existing nowadays in science teaching is analysed. In the context of VTL, the author describes the main strategies of learning by being, such as guided self-scaffolding, structuring of information, and recurrent use of previous knowledge or help seeking. Due to the synergy of these learning strategies applied simultaneously in class, the impact factor of learning by being on the cognitive achievement of students is up to 93% (the benchmark level is 40%, for an experienced teacher applying the same conventional strategy continuously over two academic years). The key idea in learning by being is the assumption by the student of cognitive goals. From this perspective, the article discusses the role of the student's personal learning effort within several teaching strategies employed in VTL. The research results emphasize that three mandatory student-related moments are present in each constructivist teaching approach: a) students' personal learning effort, b) student-teacher mutual feedback, and c) metacognition. Thus, a successful educational strategy will aim to involve students in the class process as deeply as possible, in order to make them not only know the learning objectives but also assume them. In this way, we come to the ownership of cognitive goals, or students' deep intrinsic motivation. A series of approaches are inherent to students' ownership of cognitive goals: independent research (with an impact factor on cognitive achievement of 83%, according to the results of VTL); knowledge of success criteria (impact factor 113%); and the ability to reveal similarities and patterns (impact factor 132%). Although it is generally accepted that the school is a public service, it does not belong to the entertainment industry, and in most cases education declared as student-centered actually hides the central role of the teacher. Even with the proliferation of constructivist concepts, mainly at the level of science education research, we have to underline that conventional, or frontal, teaching will never disappear. Research results show that no modern method can replace an experienced teacher with strong pedagogical content knowledge. Such a teacher will inspire and motivate his or her students to love and learn physics. The teacher is precisely the condensation point for an efficient didactic strategy, be it constructivist or conventional. In this way, we could speak about "hybridized teaching", where both the student and the teacher have their share of responsibility. In conclusion, the core of the learning-by-being approach is guided learning effort, which corresponds to the notion of a teacher-student harmonic oscillator, where both things, guidance from the teacher and the student's effort, are equally important.

Keywords: conceptual understanding, learning by being, ownership of cognitive goals, science learning

Procedia PDF Downloads 154
446 In silico Subtractive Genomics Approach for Identification of Strain-Specific Putative Drug Targets among Hypothetical Proteins of Drug-Resistant Klebsiella pneumoniae Strain 825795-1

Authors: Umairah Natasya Binti Mohd Omeershffudin, Suresh Kumar

Abstract:

Klebsiella pneumoniae is a Gram-negative enteric bacterium that causes nosocomial and urinary tract infections. Of particular concern is the global emergence of multidrug-resistant (MDR) strains of Klebsiella pneumoniae. Characterization of antibiotic resistance determinants at the genomic level plays a critical role in understanding, and potentially controlling, the spread of MDR pathogens. In this study, the drug-resistant Klebsiella pneumoniae strain 825795-1 was investigated with extensive computational approaches aimed at identifying novel drug targets among hypothetical proteins. We analyzed the 1099 hypothetical proteins available in the genome. We used an in silico genome subtraction methodology to design potential and pathogen-specific drug targets against Klebsiella pneumoniae, employing bioinformatics tools to subtract the strain-specific paralogous and host-specific homologous sequences from the bacterial proteome. The sorted 645 proteins were further refined to identify essential genes in the pathogenic bacterium using the Database of Essential Genes (DEG). We found 135 unique essential proteins in the target proteome that could be utilized as novel targets for designing new drugs. Further, through subcellular localization prediction, we identified 49 cytoplasmic proteins as potential drug targets. These proteins were then investigated in the DrugBank database, and 11 of the unique essential proteins showed druggability according to the FDA-approved drug entries, with diverse broad-spectrum properties. The results of this study will facilitate the discovery of new drugs against Klebsiella pneumoniae.
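
The subtraction logic itself is a chain of set filters; a schematic sketch with a hypothetical annotation table (the column names and values are invented for illustration, assembled in practice from BLAST, DEG, and localization predictions):

```python
import pandas as pd

# Hypothetical per-protein annotation table from upstream steps
proteins = pd.DataFrame({
    "protein_id":    ["P001", "P002", "P003", "P004"],
    "human_homolog": [False,  True,   False,  False],
    "paralog":       [False,  False,  True,   False],
    "essential_deg": [True,   True,   True,   True],
    "localization":  ["Cytoplasmic", "Cytoplasmic",
                      "Membrane",    "Cytoplasmic"],
    "druggable":     [True,   False,  False,  True],
})

# Subtractive filtering: non-homologous, non-paralogous, essential,
# cytoplasmic proteins remain as candidate targets.
targets = proteins[
    ~proteins.human_homolog & ~proteins.paralog &
    proteins.essential_deg & (proteins.localization == "Cytoplasmic")
]
print(targets[["protein_id", "druggable"]])
```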

Keywords: pneumonia, drug target, hypothetical protein, subtractive genomics

Procedia PDF Downloads 160
445 Numerical Simulation of Different Configurations for a Combined Gasification/Carbonization Reactors

Authors: Mahmoud Amer, Ibrahim El-Sharkawy, Shinichi Ookawara, Ahmed Elwardany

Abstract:

Gasification and carbonization are two of the most common routes for biomass utilization. Both processes consume part of the waste to sustain themselves: through incomplete combustion in the case of gasification, and through heating in the case of carbonization. The focus of this paper is to minimize the part of the waste that is used to heat the biomass for gasification and carbonization. This is achieved by combining gasifiers and carbonization reactors in a single unit, so that the heat in the product biogas is utilized to heat the waste in the carbonization reactor. Three different designs are proposed for the combined gasification/carbonization (CGC) reactor: a parallel combination of two gasifiers and a syngas-heated carbonizer; a carbonizer with a combustion chamber; and a single gasifier, carbonizer, and combustion chamber. They are tested numerically using ANSYS Fluent computational fluid dynamics to ensure the homogeneity of the temperature distribution inside the carbonization part of the CGC reactor. 2D simulations are performed for the three cases after obtaining mesh-size- and time-step-independent solutions. The carbonization part is common among the three cases; the difference among them is how this carbonization reactor is heated. The simulation results showed that the first design could provide a homogeneous temperature distribution over only part of the reactor, not across the whole reactor. This means that the production of carbonized biomass would be reduced, as it would only fill a specified height of the reactor. To keep the carbonized product output high, a series combination is proposed. This series configuration resulted in a uniform temperature distribution across the whole reactor, as it has only one heat source and no temperature maldistribution on any surface of the carbonization section. The simulations showed that either the parallel combination of gasifier and carbonization reactor can be used, with a reduced carbonized amount, or the series configuration can be used to keep the production rate high.

Keywords: numerical simulation, carbonization, gasification, biomass, reactor

Procedia PDF Downloads 85
444 Hardware-In-The-Loop Relative Motion Control: Theory, Simulation and Experimentation

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper presents a guidance and control (G&C) strategy to address the spacecraft maneuvering problem for future rendezvous and docking (RVD) missions. The proposed strategy allows safe and propellant-efficient trajectories for space servicing missions, including tasks such as approaching, inspecting, and capturing. This work provides validation test results for the G&C laws using a hardware-in-the-loop (HIL) setup with two robotic mockups representing the chaser and the target spacecraft. The paper first summarizes the challenges of relative motion control in space, in particular the constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Second, the proposed algorithm is introduced by presenting the formulation of a constrained model predictive control (MPC) scheme that optimizes fuel consumption and explicitly handles the physical and geometric constraints in the system, e.g., thruster or line-of-sight (LOS) constraints. Additionally, the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. The resulting convex optimization problem allows real-time implementation, supported by a detailed discussion of the computational time requirements with respect to current onboard computers and future trends in space processor capabilities. Finally, the performance of the algorithm is presented in the scope of a potential future mission and of the available equipment. The results also cover a comparison of the proposed algorithm with a linear-quadratic regulator (LQR) based control law, highlighting the clear advantages of the MPC formulation.
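
A minimal flavor of such a constrained MPC, here only for the translational (Clohessy-Wiltshire) relative dynamics with an L1 fuel proxy and a thrust bound; the orbital rate, horizon, simple Euler discretization, and limits are illustrative assumptions, and the dual-quaternion attitude coupling is omitted:

```python
import numpy as np
import cvxpy as cp

# Discrete in-plane Clohessy-Wiltshire dynamics, state x = [rx, ry, vx, vy]
n = 0.0011          # orbital rate (rad/s), roughly LEO (assumed)
dt, N = 10.0, 30    # step and horizon (assumed)
A = np.eye(4) + dt * np.array([[0, 0, 1, 0],
                               [0, 0, 0, 1],
                               [3 * n**2, 0, 0, 2 * n],
                               [0, 0, -2 * n, 0]])
B = dt * np.vstack([np.zeros((2, 2)), np.eye(2)])  # thrust accelerations

x0 = np.array([200.0, 100.0, 0.0, 0.0])            # start 200 m ahead
x = cp.Variable((4, N + 1))
u = cp.Variable((2, N))
cost = cp.sum(cp.abs(u))                           # L1 ~ propellant proxy
cons = [x[:, 0] == x0]
for k in range(N):
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.norm(u[:, k], "inf") <= 0.05]      # thruster limit (m/s^2)
cons += [x[:, N] == 0]                             # dock at the origin
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print(prob.status, f"fuel proxy = {cost.value:.3f}")
```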

Keywords: autonomous vehicles, embedded optimization, real-time experiment, rendezvous and docking, space robotics

Procedia PDF Downloads 105
443 CFD Modeling of Air Stream Pressure Drop inside Combustion Air Duct of Coal-Fired Power Plant with and without Airfoil

Authors: Pakawhat Khumkhreung, Yottana Khunatorn

Abstract:

The flow pattern inside the rectangular intake air duct of a 300 MW lignite coal-fired power plant is investigated in order to analyze and reduce the overall inlet system pressure drop. The system consists of a 45-degree inlet elbow, the flow instrument, a 90-degree mitered elbow, and the fans, respectively. The energy loss in each section can be determined by Bernoulli's equation and the ASHRAE standard tables. In addition, computational fluid dynamics (CFD) is used in this study, based on the Navier-Stokes equations and the standard k-epsilon turbulence model. The inlet boundary condition is a mass flow rate of 175 kg/s through the duct of 11 m² cross-section. At this flow rate, the Reynolds number of the airstream is 2.7×10⁶ (based on the hydraulic duct diameter); thus the flow is turbulent. The numerical results are validated against actual operating data. It is found that the numerical results agree well with the operating data, and that the dominant loss occurs at the flow rate measurement device. Normally, the air flow rate is measured by an airfoil, which introduces a high pressure drop inside the duct. To overcome this problem, the airfoil is planned to be replaced with another type of measuring instrument, such as an averaging pitot tube, which generates a low pressure drop in the airstream. The numerical results for the averaging pitot tube case show that the pressure drop in the inlet air duct is decreased significantly. It should be noted that the energy consumption of the inlet air system is reduced as well.
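
The hand-calculation side of this comparison is a sum of minor losses, Δp = K·ρV²/2 per component; the loss coefficients below are typical handbook magnitudes (assumptions), not the plant's measured values:

```python
# Minor-loss estimate per duct component: dp = K * rho * V^2 / 2
rho = 1.12          # kg/m^3, warm site air (assumed)
m_dot = 175.0       # kg/s, from the boundary condition
area = 11.0         # m^2, duct cross-section
V = m_dot / (rho * area)          # mean duct velocity, m/s

components = {                    # K values: illustrative handbook magnitudes
    "45-deg inlet elbow": 0.3,
    "airfoil flow meter": 1.5,    # dominant loss per the CFD result
    "90-deg mitered elbow": 1.2,
}
q = 0.5 * rho * V**2              # dynamic pressure, Pa
for name, K in components.items():
    print(f"{name:22s} dp = {K * q:7.1f} Pa")
print(f"total ~ {sum(components.values()) * q:.1f} Pa")
```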

Keywords: airfoil, average pitot tube, combustion air, CFD, pressure drop, rectangular duct

Procedia PDF Downloads 144
442 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels

Authors: Foad Hassaninejadafarahani, Scott Ormiston

Abstract:

Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapor (or gas-vapor mixture) and a downward flow of the liquid film. Understanding this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapor-gas mixture (or pure vapor), which retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapor core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modeling is a significant step beyond current capabilities because it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretization is based on a finite volume method with a co-located variable storage scheme, and an in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel-plate channels. The results include velocity and pressure profiles, as well as axial variations of film thickness, Nusselt number, and interface gas mass fraction.
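
A classical point of comparison for such film results is the Nusselt laminar-film solution, which neglects precisely the interfacial shear this model resolves; a sketch with water properties near 100 °C as assumed inputs:

```python
import math

def nusselt_film(x, dT, k_l=0.68, mu_l=2.8e-4, rho_l=958.0, rho_v=0.6,
                 h_fg=2.257e6, g=9.81):
    """Classical Nusselt laminar film condensation on a vertical wall:
    local film thickness (m) and heat transfer coefficient (W/m^2K) at
    distance x from the top. Property values are assumptions for water
    near 100 C; interfacial shear from the gas-vapor core is neglected."""
    delta = (4.0 * k_l * mu_l * dT * x /
             (g * rho_l * (rho_l - rho_v) * h_fg)) ** 0.25
    return delta, k_l / delta

for x in (0.05, 0.2, 0.5):                  # m from the channel top
    d, h = nusselt_film(x, dT=10.0)
    print(f"x={x:4.2f} m: delta={d*1e6:6.1f} um, h={h/1000:5.1f} kW/m^2K")
```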

Keywords: reflux condensation, two-phase CFD, Nusselt number

Procedia PDF Downloads 349
441 Opposed Piston Engine Crankshaft Strength Calculation Using Finite Element Method

Authors: Konrad Pietrykowski, Michał Gęca, Michał Bialy

Abstract:

The paper presents the results of a crankshaft strength simulation. The crankshaft was taken from an opposed-piston engine. Calculations were made using the finite element method (FEM) in the Abaqus software, which allows strength tests of individual machine parts as well as of their assemblies. The crankshaft used in the calculations will be installed in a two-stroke aviation research engine. The assumptions for the calculations were obtained from a one-dimensional engine cycle model in the AVL Boost software and from a multibody model developed in the MSC Adams software. The research engine will be equipped with 3 combustion chambers and two crankshafts; in order to shorten the calculation time, only one crankshaft was analyzed. The shaft section subject to the greatest forces during engine operation was selected. Calculations were made for two cases: the maximum piston force, at which the maximum bending load occurs, and the maximum torque. A cast iron material was adopted, for which the Poisson's ratio, density, and Young's modulus were determined. The computational grid contained 1,977,473 tetrahedral elements; this element type was chosen because of the complex design of the crankshaft. Results are presented in the form of maps of the stress distribution and displacements on the surface and inside the geometry of the shaft. The results show the locations of tensile stresses; however, allowable stresses are not exceeded at any point, and the shaft can thus be used in the engine in its present form. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK 'PZL-KALISZ' S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: aircraft diesel engine, crankshaft, finite element method, two-stroke engine

Procedia PDF Downloads 168
440 An Unbiased Profiling of Immune Repertoire via Sequencing and Analyzing T-Cell Receptor Genes

Authors: Yi-Lin Chen, Sheng-Jou Hung, Tsunglin Liu

Abstract:

The adaptive immune system recognizes a wide range of antigens by expressing a large number of structurally distinct T-cell and B-cell receptor genes. These distinct receptor genes arise from complex rearrangements called V(D)J recombination and constitute the immune repertoire. A common method of profiling the immune repertoire is to amplify the recombined receptor genes using multiple primers followed by high-throughput sequencing. This multiplex-PCR approach is efficient; however, the resulting repertoire can be distorted by primer bias. To eliminate primer bias, 5' RACE is an alternative amplification approach. However, the application of the RACE approach has been limited by its low efficiency (i.e., the majority of the data are non-regular receptor sequences, e.g., sequences containing intronic segments) and by the lack of convenient analysis tools. We propose a computational tool that can correctly identify non-regular receptor sequences in RACE data by aligning receptor sequences against the whole gene, instead of only the exon regions as done in all other tools. Using our tool, the remaining regular data allow accurate profiling of the immune repertoire. In addition, an improved RACE protocol is presented that yields a higher fraction of regular T-cell receptor sequences. Finally, we quantify the degree of primer bias of a multiplex-PCR approach by comparing it to the RACE approach; the results reveal significant differences in the frequencies of V-J combinations between the two approaches. Together, we provide a new experimental and computational pipeline for unbiased profiling of the immune repertoire. As immune repertoire profiling has many applications, e.g., tracing bacterial and viral infections, detecting T-cell lymphoma and minimal residual disease, and monitoring cancer immunotherapy, our work should benefit scientists who are interested in these applications.
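
The whole-gene-versus-exon idea can be sketched with a pairwise aligner: a read scoring markedly better against the intron-containing reference is flagged as non-regular. The sequences, scoring scheme, and margin below are toy assumptions, not the tool's actual references or parameters:

```python
from Bio import Align

aligner = Align.PairwiseAligner()
aligner.mode = "local"
aligner.match_score, aligner.mismatch_score = 2, -1
aligner.open_gap_score, aligner.extend_gap_score = -5, -1

# Toy references (hypothetical sequences, not real TCR gene segments)
exon_ref = "ATGGCTTGTCCTGGAAGCAGT" * 3                # exon-only reference
whole_gene = exon_ref[:30] + "GTAAGTTTTCTTGA" + exon_ref[30:]  # + intron

def is_regular(read, margin=4):
    """A read is 'non-regular' if it aligns markedly better to the whole
    gene (introns included) than to the exon-only reference."""
    return (aligner.score(exon_ref, read) + margin
            >= aligner.score(whole_gene, read))

spliced_read = exon_ref[10:50]        # regular (fully exonic) read
intronic_read = whole_gene[20:60]     # read retaining intron sequence
print(is_regular(spliced_read), is_regular(intronic_read))  # True False
```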

Keywords: immune repertoire, T-cell receptor, 5' RACE, high-throughput sequencing, sequence alignment

Procedia PDF Downloads 174
439 TutorBot+: Automatic Programming Assistant with Positive Feedback based on LLMs

Authors: Claudia Martínez-Araneda, Mariella Gutiérrez, Pedro Gómez, Diego Maldonado, Alejandra Segura, Christian Vidal-Castro

Abstract:

The purpose of this document is to showcase preliminary work on developing an EduChatbot-type tool, and on measuring the effects of its use, aimed at providing effective feedback to students in programming courses. This bot, hereinafter referred to as tutorBot+, was built on ChatGPT and is tasked with assisting and delivering timely positive feedback to students in the field of computer science at the Universidad Católica de Concepción. The proposed working method consists of four stages: (1) immersion in the domain of large language models (LLMs), (2) development of the tutorBot+ prototype and its integration, (3) experiment design, and (4) intervention. The first stage involves a literature review on the use of artificial intelligence in education and the evaluation of intelligent tutors, as well as research on types of feedback for learning and on the ChatGPT domain. The second stage encompasses the development of tutorBot+, and the final stage involves a quasi-experimental study with students from the programming and database labs, where the learning outcome is the development of computational thinking skills, enabling the use and measurement of the tool's effects. The preliminary results of this work are promising: a functional chatbot prototype has been developed, in both conversational and non-conversational versions, integrated into an open-source online judge and programming contest platform. The possibility of generating a custom model, based on a pre-trained one and tailored to the programming domain, is also being explored. This includes the integration of the created tool and the design of the experiment to measure its utility.
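
A minimal sketch of the positive-feedback pattern such a bot can implement on top of the OpenAI chat API; the model name, prompt wording, and exercise are illustrative assumptions, not tutorBot+'s actual configuration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "You are tutorBot+, a programming tutor. Always give positive, "
    "encouraging feedback first, then point out at most two concrete "
    "improvements, and never reveal the full solution."
)

def feedback(exercise: str, student_code: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Exercise:\n{exercise}\n\n"
                                        f"My attempt:\n{student_code}"},
        ],
        temperature=0.3,
    )
    return resp.choices[0].message.content

# The sample attempt contains a bug (x % 2 keeps odd numbers):
print(feedback("Sum the even numbers in a list.",
               "def f(xs):\n    return sum(x for x in xs if x % 2)"))
```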

Keywords: assessment, chatGPT, learning strategies, LLMs, timely feedback

Procedia PDF Downloads 48
438 A Combined CFD Simulation of Plateau Borders including Films and Transitional Areas of Liquid Foams

Authors: Abdolhamid Anazadehsayed, Jamal Naser

Abstract:

An integrated computational fluid dynamics model is developed for the combined simulation of Plateau borders, films, and the transitional areas between the films and the Plateau borders, to reduce the simplifications and shortcomings of available models for foam drainage at the micro-scale. Additionally, the counter-flow related to the Marangoni effect in the transitional area is investigated. The results of this combined model capture the contributions of the films, the exterior Plateau borders, and the Marangoni flow in the drainage process more accurately, since the mutual influence of the foam's elements is included in this study. The flow rate in the exterior Plateau borders can be four times larger than in the interior ones. The exterior bubbles can be more prominent in the drainage process in cases where the number of exterior Plateau borders increases due to the geometry of the container. The ratio of the Marangoni counter-flow to the Plateau border flow increases drastically with an increase in the mobility of the air-liquid interface. However, the exterior bubbles follow the same trend with much less intensity, since the flow in the exterior bubbles typically depends less on the air-liquid interface. Moreover, the Marangoni counter-flow in a near-wall transitional area is less important than in an internal one. The influence of air-liquid interface mobility on the average velocity of interior foams is obtained with greater accuracy under the more realistic boundary conditions and is compared with other numerical and analytical results. The contribution of the films to drainage is significant for mobile foams, as the flow velocity in the film has the same order of magnitude as the velocity in the Plateau border. Nevertheless, for foams with rigid interfaces, the films' contribution to foam drainage is insignificant, particularly for films near the wall of the container.

Keywords: foam, plateau border, film, Marangoni, CFD, bubble

Procedia PDF Downloads 328
437 Advanced Analysis on Dissemination of Pollutant Caused by Flaring System Effect Using Computational Fluid Dynamics (CFD) Fluent Model with WRF Model Input in Transition Season

Authors: Benedictus Asriparusa

Abstract:

In the oil industry, oil production is accompanied by associated natural gas, and a large amount of this energy is wasted, mostly in developing countries, contributing to global warming. This research presents an overview of methods employed in the Minas area by researchers at PT Chevron Pacific Indonesia to measure and drastically reduce gas flaring and its emissions. It draws on analytical studies, numerical studies, modeling, and computer simulations. Flaring is the controlled burning of natural gas in the course of routine oil and gas production operations; this burning occurs at the end of a flare stack or boom. The combustion process releases emissions of greenhouse gases such as NO2, CO2, and SO2, which affect the air and environment around the industrial area. Therefore, a simulation is needed to determine the pattern of pollutant dispersion. This paper reviews gas flaring models and current developments to predict the dominant variables affecting pollutant dispersion. A Fluent model is used to simulate the distribution of pollutant gas coming out of the stack, while WRF model output is used to overcome the limitations of the meteorological data and atmospheric conditions available for the study area. The study focuses on the transition season of 2012 in the Minas area. The goal of the simulation is to identify the times of greatest influence on pollutant dispersion, divided into two main cases: the fastest wind and the slowest wind. According to the simulation results, the fastest winds transport pollutants mainly horizontally, while the slowest winds transport them mainly vertically.
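
As a simple analytic reference for such CFD dispersion results, a Gaussian plume model gives the steady-state downwind concentration; the emission rate, wind speed, stack height, and linear dispersion coefficients below are assumptions for illustration:

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h_stack, sy_coef=0.08, sz_coef=0.06):
    """Steady-state Gaussian plume concentration (g/m^3) with ground
    reflection. Dispersion coefficients grow linearly with downwind
    distance here for simplicity; real applications use stability-class
    curves (e.g., Pasquill-Gifford)."""
    sy, sz = sy_coef * x, sz_coef * x
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = (np.exp(-(z - h_stack)**2 / (2 * sz**2)) +
                np.exp(-(z + h_stack)**2 / (2 * sz**2)))
    return q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Ground-level centerline concentration versus downwind distance
for x in (200.0, 500.0, 1000.0):
    c = gaussian_plume(q=50.0, u=3.0, x=x, y=0.0, z=0.0, h_stack=30.0)
    print(f"x = {x:6.0f} m : C = {c*1e6:10.2f} ug/m^3")
```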

Keywords: flaring system, fluent model, dissemination of pollutant, transition season

Procedia PDF Downloads 359
436 Uniqueness of Fingerprint Biometrics to Human Dynasty: A Review

Authors: Siddharatha Sharma

Abstract:

With the advent of technology and machines, biometrics is taking an important place in securing society. Security issues are a major concern in today's world and continue to grow in intensity and complexity. Biometric recognition, which involves precise measurement of the characteristics of living beings, is not a new method; fingerprints have been used for many years by law enforcement and forensic agencies to identify and apprehend culprits. Biometrics rests on four basic principles: (i) uniqueness, (ii) accuracy, (iii) permanency and (iv) peculiarity. Today fingerprints are the most popular biometric method and are used for social benefit in government-sponsored programs, a remarkable example being UIDAI (Unique Identification Authority of India). The matching accuracy of fingerprint biometrics is very high; it has been observed empirically that even identical twins do not have similar prints. With the passage of time there has been immense progress in sensing techniques, computational speed, operating environments and storage capabilities, and the technology has become more convenient for users. Only a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, or environmental or occupational reasons, for example workers whose hands bear cuts and bruises that keep their fingerprints changing. Fingerprints are limited to human beings because of the presence of volar skin with corrugated ridges, which is unique to this species. Fingerprint biometrics has proved to be a high-level authentication system for identifying human beings, though it has limitations; for example, it may be inefficient and ineffective when the ridges of the fingers or palm are moist, making authentication difficult. This paper focuses on the uniqueness of fingerprints to human beings in comparison to other living beings and reviews the advancement of emerging technologies and their limitations.
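
As an illustration of how "matching accuracy" is quantified in fingerprint systems, the sketch below computes the false match rate (FMR) and false non-match rate (FNMR) at a decision threshold. The genuine and impostor score distributions are synthetic assumptions for the example, not data from any deployed system.

```python
import numpy as np

# Minimal sketch: quantifying fingerprint matching accuracy. Given matcher
# similarity scores for genuine pairs (same finger) and impostor pairs
# (different fingers), the false match rate (FMR) and false non-match
# rate (FNMR) follow from a decision threshold. Scores below are synthetic.
rng = np.random.default_rng(0)
genuine = rng.normal(0.80, 0.08, 10_000)   # same-finger comparison scores
impostor = rng.normal(0.30, 0.10, 10_000)  # different-finger scores

threshold = 0.55                            # accept if score >= threshold
fmr = np.mean(impostor >= threshold)        # impostors wrongly accepted
fnmr = np.mean(genuine < threshold)         # genuine users wrongly rejected
print(f"FMR = {fmr:.4%}, FNMR = {fnmr:.4%}")
```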

Keywords: fingerprinting, biometrics, human beings, authentication

Procedia PDF Downloads 304
435 Understanding Inhibitory Mechanism of the Selective Inhibitors of Cdk5/p25 Complex by Molecular Modeling Studies

Authors: Amir Zeb, Shailima Rampogu, Minky Son, Ayoung Baek, Sang H. Yoon, Keun W. Lee

Abstract:

Neurotoxic insults activate calpain, which in turn produces truncated p25 from p35. p25 forms the hyperactivated Cdk5/p25 complex and thereby induces severe neuropathological aberrations, including hyperphosphorylated tau, neuroinflammation, apoptosis and neuronal death. Inhibition of the Cdk5/p25 complex alleviates the aberrant phosphorylation of tau and mitigates AD pathology. PHA-793887 and Roscovitine have been investigated as selective inhibitors of Cdk5/p25, with IC50 values of 5 nM and 160 nM, respectively, but their inhibitory mechanisms have remained unexplored. Herein, computational simulations explore the binding mode and interaction mechanism of PHA-793887 and Roscovitine with Cdk5/p25. Docking results suggest that PHA-793887 and Roscovitine occupy the ATP-binding site of Cdk5, with the highest docking (GOLD) scores of 66.54 and 84.03, respectively. Molecular dynamics (MD) simulation further demonstrated that PHA-793887 and Roscovitine maintained stable RMSDs of 1.09 Å and 1.48 Å with Cdk5/p25, respectively. Profiling of polar interactions suggested that each inhibitor formed hydrogen bonds (H-bonds) with catalytic residues of Cdk5 and remained stable throughout the MD simulation. Additionally, binding free energy calculation by the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) method gave PHA-793887 and Roscovitine the lowest binding free energies with Cdk5/p25, -150.05 kJ/mol and -113.14 kJ/mol, respectively. Free energy decomposition demonstrated that the polar energy of the H-bond between Glu81 of Cdk5 and PHA-793887 is the essential factor making PHA-793887 highly selective towards Cdk5/p25. Overall, this study provides substantial evidence on the mechanistic interactions of the selective inhibitors of Cdk5/p25 and could serve as a fundamental consideration in the structure-based development of selective Cdk5/p25 inhibitors.
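
To clarify the MM/PBSA bookkeeping behind the reported binding free energies, the sketch below shows the standard decomposition dG_bind = G_complex - (G_receptor + G_ligand). The component energies are hypothetical placeholders chosen only so the total lands near the paper's -150.05 kJ/mol for PHA-793887; they are not values from the study.

```python
# Minimal sketch: the MM/PBSA bookkeeping used to rank inhibitors.
# The binding free energy is estimated as
#   dG_bind = G_complex - (G_receptor + G_ligand),
# where each term is averaged over MD snapshots as
#   G = E_MM (bonded + electrostatic + van der Waals)
#     + G_polar (Poisson-Boltzmann) + G_nonpolar (SASA term).
# All component values below are hypothetical placeholders.
def mmpbsa_total(e_mm, g_polar, g_nonpolar):
    return e_mm + g_polar + g_nonpolar

g_complex = mmpbsa_total(e_mm=-9500.0, g_polar=-2200.0, g_nonpolar=-150.0)
g_receptor = mmpbsa_total(e_mm=-9280.0, g_polar=-2135.0, g_nonpolar=-120.0)
g_ligand = mmpbsa_total(e_mm=-40.0, g_polar=-110.0, g_nonpolar=-15.0)

dg_bind = g_complex - (g_receptor + g_ligand)    # kJ/mol
print(f"dG_bind = {dg_bind:.2f} kJ/mol")         # -> -150.00 here
```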

Keywords: Cdk5/p25 inhibition, molecular modeling of Cdk5/p25, PHA-793887 and roscovitine, selective inhibition of Cdk5/p25

Procedia PDF Downloads 121
434 Comparative in silico and in vitro Study of N-(1-Methyl-2-Oxo-2-N-Methyl Anilino-Ethyl) Benzene Sulfonamide and Its Analogues as an Anticancer Agent

Authors: Pamita Awasthi, Kirna, Shilpa Dogra, Manu Vatsal, Ritu Barthwal

Abstract:

Doxorubicin, also known as adriamycin, is an anthracycline-class drug used in cancer chemotherapy for the treatment of non-Hodgkin's lymphoma, multiple myeloma, acute leukemias, and breast, lung, endometrial and ovarian cancers. It functions by intercalating DNA and ultimately killing cancer cells. Its major side effects are hair loss, myelosuppression, nausea and vomiting, oesophagitis, diarrhoea, heart damage and liver dysfunction. The observation that minor modifications in a compound's structure can produce large variations in biological activity prompted us to synthesize sulfonamide derivatives. The sulfonamide group is an important pharmacophore with a broad spectrum of biological activities, such as antiviral, antifungal, diuretic, anti-inflammatory, antibacterial and anticancer activity. The structure of the synthesized compound N-(1-methyl-2-oxo-2-N-methyl anilino-ethyl)benzene sulfonamide was confirmed by proton nuclear magnetic resonance (1H NMR), 13C NMR, mass and FTIR spectroscopy, assigning the positions of all protons and hence the stereochemistry of the molecule. We further report the binding potential of the synthesized sulfonamide analogues in comparison with doxorubicin, using AutoDock 4.2. The computational binding energy (B.E.) and inhibitory constant (Ki) were evaluated for the synthesized compound and for doxorubicin against the Poly(dA-dT).Poly(dA-dT) and Poly(dG-dC).Poly(dG-dC) sequences. An in vitro cytotoxicity study against human breast cancer cell lines confirms the better anticancer activity of the synthesized compound over doxorubicin, the anticancer drug currently in use: the IC50 value of the synthesized compound is 7.12 µM, whereas that of doxorubicin is 7.2 µM.
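
For context on how docking programs relate the two reported quantities, the sketch below applies the standard thermodynamic relation Ki = exp(B.E. / RT), which AutoDock uses to derive an inhibition constant from a predicted binding energy. The -8.5 kcal/mol input is a hypothetical placeholder, not a value reported in the paper.

```python
import math

# Minimal sketch: converting a docking binding energy (kcal/mol) into an
# inhibition constant via Ki = exp(dG / (R*T)), the standard relation
# behind AutoDock's reported Ki values.
R = 1.987e-3      # gas constant [kcal/(mol*K)]
T = 298.15        # temperature [K]

def ki_from_binding_energy(dg_kcal):
    return math.exp(dg_kcal / (R * T))   # Ki in mol/L

dg = -8.5                                 # hypothetical docking B.E.
print(f"B.E. = {dg} kcal/mol -> Ki = {ki_from_binding_energy(dg):.2e} M")
```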

Keywords: doxorubicin, AutoDock, in silico, in vitro

Procedia PDF Downloads 399
433 Lightweight and Seamless Distributed Scheme for the Smart Home

Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro

Abstract:

Security of the smart home, in terms of behavior activity pattern recognition, is an issue quite distinct from the security issues of other scenarios. Sensor devices of low and high capacity interact and negotiate with each other, detecting the daily behavior activities of individuals in order to execute common tasks. Once a device (e.g., a surveillance camera, smart phone or light detection sensor) is compromised, an adversary can gain access to it and damage the daily behavior activity by altering data and commands. In this scenario, a group of common instruction processes may become involved and generate deadlock. An effective security solution suited to the smart home architecture is therefore required. This paper proposes a seamless distributed scheme that fortifies computationally weak wireless devices for secure communication. The proposed scheme is based on a lightweight key-session process that upholds a cryptographic link for each trajectory by recognizing individuals' behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Experimental analysis reveals that the proposed scheme strengthens devices against device-seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of the LCS, and keeps the network free of deadlock. Additionally, comparison with other schemes indicates that the proposed scheme is efficient in terms of computation and communication.
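
To indicate what a lightweight key-session handshake between an LCS and a hub could look like, the sketch below implements a generic HMAC-based challenge-response using Python's standard library. It is an illustration in the spirit of the proposed scheme, not the authors' protocol; the device ID, nonce sizes, and key-derivation step are all invented for the example.

```python
import hmac, hashlib, os, time

# Minimal sketch: a generic lightweight challenge-response handshake.
# A low-capacity sensor (LCS) proves possession of a pre-shared key to
# the hub with a single HMAC, cheap enough for constrained devices; the
# hub then derives a short-lived session key for subsequent messages.
PRE_SHARED_KEY = os.urandom(32)      # provisioned on device and hub

def lcs_respond(key, challenge, device_id):
    # Sensor side: authenticate the hub's challenge with the shared key.
    return hmac.new(key, challenge + device_id, hashlib.sha256).digest()

def hub_verify_and_derive(key, challenge, device_id, token):
    # Hub side: check the token, then derive a per-session key.
    expected = hmac.new(key, challenge + device_id, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, token):
        return None                                   # reject the device
    session_salt = str(time.time()).encode()
    return hmac.new(key, challenge + session_salt, hashlib.sha256).digest()

challenge = os.urandom(16)           # fresh nonce issued by the hub
token = lcs_respond(PRE_SHARED_KEY, challenge, b"LCS-07")
session_key = hub_verify_and_derive(PRE_SHARED_KEY, challenge, b"LCS-07", token)
print("session established:", session_key is not None)
```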

Keywords: authentication, key-session, security, wireless sensors

Procedia PDF Downloads 303