Search results for: Whale Optimization Algorithm
745 YOLO-IR: Infrared Small Object Detection in High Noise Images
Authors: Yufeng Li, Yinan Ma, Jing Wu, Chengnian Long
Abstract:
Infrared object detection aims to separate small, dim targets from cluttered backgrounds. Its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications for improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to noise in the input infrared image, leading to a decrease in target detection accuracy and an increase in the false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve robustness to high infrared noise. Because high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model's ability to extract target features and its robustness to noise. Since noise may overwhelm the local details of the target, causing the loss of small-target features during deep down-sampling, we propose a deep and shallow feature fusion neck to improve detection accuracy. In addition, because generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance-based loss function to improve the training of the model. Experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in F1-score over existing state-of-the-art models.
Keywords: infrared small target detection, high noise, robustness, soft-threshold coordinate attention, feature fusion
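The abstract does not spell out the Wasserstein loss it uses; a minimal sketch, assuming the normalized Gaussian Wasserstein distance commonly used for tiny boxes (the constant c is an assumed normalization), could look like this:

```python
import math

def wasserstein_bbox_loss(box_p, box_t, c=12.0):
    """1 - exp(-W2/c), where W2 is the 2-Wasserstein distance between
    Gaussians fitted to the predicted and target boxes (cx, cy, w, h).
    c is a dataset-dependent normalization constant (assumed here)."""
    (cxp, cyp, wp, hp), (cxt, cyt, wt, ht) = box_p, box_t
    w2_sq = ((cxp - cxt) ** 2 + (cyp - cyt) ** 2
             + ((wp - wt) / 2) ** 2 + ((hp - ht) / 2) ** 2)
    return 1.0 - math.exp(-math.sqrt(w2_sq) / c)

# Two nearly coincident small boxes give a small, smoothly varying loss,
# unlike IoU-based losses, which change abruptly for tiny targets.
print(wasserstein_bbox_loss((10, 10, 4, 4), (11, 10, 4, 5)))
```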
Procedia PDF Downloads 73
744 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for analyzing these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture built on residual neural networks with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, therefore facilitating earlier treatment, which results in improved post-treatment outcomes.
Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
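The exact architecture (residual networks with cyclic pooling) is not detailed above; as a minimal sketch, a plain residual backbone repurposed for OCT classification might look like the following, where the four-class setup, class names, and input shape are assumptions:

```python
import torch
import torchvision.models as models

# Minimal sketch (not the authors' exact architecture): a ResNet-18
# backbone adapted to 4 OCT classes, e.g. CNV / DME / drusen / normal.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 4)

x = torch.randn(1, 3, 224, 224)      # one preprocessed OCT B-scan
logits = model(x)
print(logits.softmax(dim=1))         # per-class diagnostic probabilities
```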
Procedia PDF Downloads 181
743 Development of Methotrexate Nanostructured Lipid Carriers for Topical Treatment of Psoriasis: Optimization, Evaluation, and in vitro Studies
Authors: Yogeeta O. Agrawal, Hitendra S. Mahajan, Sanjay J. Surana
Abstract:
Methotrexate is effective in controlling recalcitrant psoriasis when administered by the oral or parenteral route long-term. However, systemic use of this drug may provoke any of a number of side effects, notably hepatotoxic effects. To reduce these effects, clinical studies have been done with topical MTX. It is useful in treating a number of cutaneous conditions, including psoriasis. A major problem with the topical administration of MTX currently available on the market is that the drug is water-soluble and is mostly in the dissociated form at physiological pH. Its capacity for passive diffusion is thus limited. Localization of MTX in the affected layers of skin is likely to improve the role of a topical dosage form of the drug as a supplement to oral therapy for the treatment of psoriasis. One possibility for increasing the penetration of drugs through the skin is the use of nanostructured lipid carriers (NLCs). The objective of the present study was to formulate and characterize methotrexate-loaded nanostructured lipid carriers (MTX-NLCs), to understand in vitro drug release, and to evaluate the role of the developed gel in the topical treatment of psoriasis. MTX-NLCs were prepared by the solvent diffusion technique using a 3² full factorial design. The mean diameter and surface morphology of the MTX-NLCs were evaluated. The MTX-NLCs were lyophilized, and the crystallinity of the NLCs was characterized by differential scanning calorimetry (DSC) and powder X-ray diffraction (XRD). The NLCs were incorporated in a 1% w/w Carbopol 934P gel base, and in vitro skin deposition studies in human cadaver skin were conducted. The optimized MTX-NLCs were spherical in shape, with an average particle size of 253 (±9.92) nm, a zeta potential of -30.4 (±0.86) mV, and an entrapment efficiency (EE) of 53.12 (±1.54)%. DSC and XRD data confirmed the formation of NLCs. Significantly higher deposition of methotrexate was found in human cadaver skin from the MTX-NLC gel (71.52 ±1.23%) as compared to the plain MTX gel (54.28 ±1.02%). The findings suggest a significant improvement in the therapeutic index for the treatment of psoriasis with the MTX-NLC gel developed in this investigation over the plain drug gel currently available on the market.
Keywords: methotrexate, psoriasis, NLCs, hepatotoxic effects
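As an illustration of the 3² full factorial design mentioned above, the sketch below enumerates the nine runs; the factor names and levels are assumed, not the authors' values:

```python
from itertools import product

# Sketch of a 3^2 full factorial layout (two factors, three levels each);
# factor names and levels are illustrative placeholders.
lipid_ratio = [5, 10, 15]        # % w/w, assumed levels
surfactant  = [1.0, 1.5, 2.0]    # % w/v, assumed levels

runs = list(product(lipid_ratio, surfactant))
for i, (lip, surf) in enumerate(runs, 1):
    print(f"run {i}: lipid={lip}%, surfactant={surf}%")
# 9 runs in total; responses (size, EE%) are then fitted to a response model
```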
Procedia PDF Downloads 430
742 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images
Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek
Abstract:
Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging because of the characteristics it shares with other ultrasound modalities, in addition to the three orthogonal planes (i.e., axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in some applications, for example, evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, as well as an automatic approach to detect the ribs. Rib detection is considered one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or a set of images. Second, the normalized images are smoothed using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). Qualitative and quantitative evaluation of a total of 22 cases is performed. Across all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on rib borders are 15.1188 mm and 14.7184 mm, respectively.
Keywords: automated 3D breast ultrasound, eigenvalues of Hessian matrix, nipple detection, rib detection
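A minimal sketch of the Hessian-eigenvalue step described above, assuming a Gaussian-smoothed finite-difference Hessian; the smoothing scale and the tubularity test are illustrative choices:

```python
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(vol, sigma=2.0):
    """Eigenvalues of the Gaussian-smoothed 3D Hessian at every voxel.
    Tube-like bright structures (ribs) show two large negative eigenvalues."""
    sm = ndimage.gaussian_filter(vol.astype(float), sigma)
    grads = np.gradient(sm)
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = gi[j]
    return np.linalg.eigvalsh(H)            # sorted ascending per voxel

vol = np.random.rand(32, 32, 32)            # stand-in for a normalized ABUS volume
eig = hessian_eigenvalues(vol)
ribs = (eig[..., 0] < 0) & (eig[..., 1] < 0)  # crude tubularity test
```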
Procedia PDF Downloads 330
741 An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring
Authors: Flavio Cannavo
Abstract:
Continuous evaluation of the status of potentially hazardous volcanoes plays a key role for civil protection purposes. The importance of monitoring volcanic activity, especially for energetic paroxysms that usually come with tephra emissions, is crucial not only for the exposure of the local population but also for airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of different volcanic behaviors. Moreover, continuously measured parameters (e.g., seismic, deformation, infrasonic, and geochemical signals) are often not able to fully explain the ongoing phenomenon, thus making fast volcano state assessment a very puzzling task for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers to automatically infer the ongoing volcano status from all the available kinds of measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm. Each classifier gives an output about the volcanic status. The ensemble technique allows weighting each classifier's output so as to combine all the classifications into a single status that maximizes the performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great power for decision-making purposes.
Keywords: Bayesian networks, expert system, Mount Etna, volcano monitoring
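A minimal sketch of the weighted-ensemble step described above; the classifier probabilities, weights, and three-state labelling are assumed for illustration:

```python
import numpy as np

def ensemble_status(probas, weights):
    """Combine per-classifier class probabilities with performance-based
    weights and return the most likely volcano status."""
    probas = np.asarray(probas)               # (n_classifiers, n_states)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                              # normalize the weights
    combined = w @ probas                     # weighted average of outputs
    return combined.argmax(), combined

# Three hypothetical classifiers (seismic, deformation, geochemical)
# voting over states 0=quiet, 1=unrest, 2=paroxysm; weights are assumed.
status, p = ensemble_status(
    [[0.7, 0.2, 0.1], [0.5, 0.4, 0.1], [0.2, 0.6, 0.2]],
    weights=[0.5, 0.3, 0.2])
print(status, p)
```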
Procedia PDF Downloads 246
740 System Dietadhoc® - A Fusion of Human-Centred Design and Agile Development for the Explainability of AI Techniques Based on Nutritional and Clinical Data
Authors: Michelangelo Sofo, Giuseppe Labianca
Abstract:
In recent years, the scientific community's interest in the exploratory analysis of biomedical data has increased exponentially. In the field of nutritional biology, the curative process based on the analysis of clinical data is a very delicate operation, because there are multiple solutions for the management of pathologies in the food sector (for example, intolerances and allergies, management of cholesterol metabolism, diabetic pathologies, arterial hypertension, up to obesity and breathing and sleep problems). In this research work, a system was created that is capable of evaluating various dietary regimes for specific patient pathologies. The system is founded on a mathematical-numerical model and has been tailored to the real working needs of an expert in human nutrition using human-centered design (ISO 9241-210); it is therefore in step with continuous scientific progress in the field and evolves through the experience of managed clinical cases (a machine learning process). DietAdhoc® is a decision support system for nutrition specialists treating patients of both sexes (from 18 years of age), developed with an agile methodology. Its task is to draw up the biomedical and clinical profile of the specific patient by applying two algorithmic optimization approaches to nutritional data, plus a symbolic solution obtained by transforming the relational database underlying the system into a deductive database. For all three solution approaches, particular emphasis has been given to the explainability of the suggested clinical decisions through flexible and customizable user interfaces. Furthermore, the system has multiple software modules based on time series and visual analytics techniques that allow evaluating the complete picture of the situation and the evolution of the diet assigned for specific pathologies.
Keywords: medical decision support, physiological data extraction, data driven diagnosis, human centered AI, symbiotic AI paradigm
Procedia PDF Downloads 23
739 Low-Cost Image Processing System for Evaluating Pavement Surface Distress
Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa
Abstract:
Most asphalt pavement condition evaluations use rating frameworks in which asphalt pavement distress is estimated by type, extent, and severity. Rating is carried out via the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost technique for pavement distress image analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for the detection of potholes and cracks. Longitudinal cracking and potholes are detected using fuzzy c-means (FCM) clustering followed by a spectral clustering algorithm. The framework comprises three phases: image acquisition, processing, and extraction of features. A digital camera (GoPro) with a holder is used to capture pavement distress images from a moving vehicle. The FCM classifier and spectral clustering algorithms are used to compute features and classify the longitudinal cracking and potholes. The MATLAB R2016a image processing toolkit is utilized for performance analysis to identify pavement distress on selected urban stretches of Bengaluru city, India. The outcomes of image evaluation with the semi-automated image processing framework captured the features of longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images are validated against the actual dimensions, and the dimension variability is about 0.46. The linear regression model y = 1.171x - 0.155 is obtained from the existing and experimental (image-processing) areas. The R² obtained from the best-fit line is 0.807, which in the linear regression model is considered a 'large positive linear association'.
Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means
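A minimal sketch of the FCM stage described above, implemented from scratch on toy intensity features (the one-dimensional feature choice is an assumption):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the membership
    matrix U (n_samples x c). A sketch of the classifier stage only."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy intensity features: dark distress pixels vs. bright pavement
X = np.array([[0.1], [0.15], [0.12], [0.8], [0.85], [0.9]])
centers, U = fuzzy_c_means(X, c=2)
print(centers.ravel(), U.argmax(axis=1))
```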
Procedia PDF Downloads 181
738 Evaluation of the Effect of Lactose Derived Monosaccharide on Galactooligosaccharides Production by β-Galactosidase
Authors: Yenny Paola Morales Cortés, Fabián Rico Rodríguez, Juan Carlos Serrato Bermúdez, Carlos Arturo Martínez Riascos
Abstract:
Numerous benefits of galactooligosaccharides (GOS) as prebiotics have motivated the study of enzymatic processes for their production. These processes have special complexities due to several factors that hinder high productivity, such as enzyme type, reaction-medium pH, substrate concentrations, and the presence of inhibitors, among others. In the present work, the production of galactooligosaccharides (with different degrees of polymerization: two, three, and four) from lactose was studied. The study considers the formulation of a mathematical model that predicts the production of GOS from lactose using the enzyme β-galactosidase. The effect of pH on the reaction was studied using phosphate buffer at three pH values (6.0, 6.5, and 7.0). At pH 6.0 the enzymatic activity was insignificant, while at pH 7.0 the enzymatic activity was approximately 27 times greater than at 6.5; the latter result differs from previously reported results. Therefore, pH 7.0 was chosen as the working pH. Additionally, the enzyme concentration was analyzed, which showed that the effect of concentration depends on the pH; the concentration was set at 0.272 mM for the following studies. Afterwards, experiments were performed varying the lactose concentration to evaluate its effects on the process and to generate the data for fitting the mathematical model parameters. The mathematical model considers the reactions of lactose hydrolysis and transgalactosylation for the production of disaccharides and trisaccharides, together with their reverse reactions. The production of tetrasaccharides was negligible and was therefore not included in the model. The reaction was monitored by HPLC, and for the quantitative analysis of the experimental data the Matlab programming language was used, including solvers for the integration of differential equation systems (ode15s) and nonlinear optimization (fminunc). The results confirm that the transgalactosylation and hydrolysis reactions are reversible; additionally, inhibition of GOS production by glucose and galactose is observed. Regarding the production process of galactooligosaccharides, the results show that high initial lactose concentrations are necessary, since they favor the transgalactosylation reaction, while low concentrations favor hydrolysis.
Keywords: β-galactosidase, galactooligosaccharides, inhibition, lactose, Matlab, modeling
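The fitted kinetic model itself is not reproduced above; a hedged sketch of a reversible hydrolysis/transgalactosylation scheme of the kind described, with assumed rate constants and stoichiometry, could be integrated as follows (the original work used Matlab's ode15s; scipy's solve_ivp plays the same role here):

```python
from scipy.integrate import solve_ivp

# Illustrative kinetic sketch (not the authors' fitted model):
# reversible lactose hydrolysis and transgalactosylation to a
# trisaccharide GOS. State = [lactose, glucose, galactose, GOS3];
# the rate constants k are assumed values.
def rhs(t, y, k1, k1r, k2, k2r):
    lac, glc, gal, gos = y
    r_hyd = k1 * lac - k1r * glc * gal        # hydrolysis (reversible)
    r_tg  = k2 * lac * gal - k2r * gos        # transgalactosylation
    return [-r_hyd - r_tg, r_hyd, r_hyd - r_tg, r_tg]

sol = solve_ivp(rhs, (0, 10), [0.5, 0, 0, 0], args=(1.0, 0.2, 0.8, 0.1))
print(sol.y[:, -1])   # final concentrations; high initial lactose favors GOS
```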
Procedia PDF Downloads 358
737 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network
Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka
Abstract:
Wireless sensor networks are used in many applications to collect sensed data from different sources. Sensed data have to be delivered through the sensors' wireless interface using multi-hop communication towards the sink. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in WSNs; reducing energy consumption while increasing the amount of generated data is a great challenge. In this paper, we have implemented two data gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing Structure for Mobile Sinks in WSNs (IAR), in which mobile sinks use Prim's algorithm to collect data. The authors have implemented concepts common to both protocols, such as the deployment of mobile sinks, generation of the visiting schedule, and collection of data from cluster members. The performance of both protocols is compared using statistics on performance parameters such as delay, packet drop, packet delivery ratio, energy available, and control overhead. The paper concludes that EEDG is more efficient than the IAR protocol, but with a few limitations, including unaddressed issues such as redundancy removal, idle listening, and the mobile sink's pause/wait state at a node. In future work, we plan to concentrate on these limitations to arrive at a new energy-efficient protocol that will help improve the lifetime of the WSN.
Keywords: aggregation, consumption, data gathering, efficiency
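A minimal sketch of the Prim's-algorithm step that IAR uses to build the mobile sink's routing structure; the cluster-head coordinates are assumed:

```python
import heapq

def prim_tree_edges(nodes, dist):
    """Prim's algorithm: minimum spanning tree over cluster heads, a
    sketch of how a mobile sink's visiting structure can be built."""
    visited = {nodes[0]}
    edges = []
    heap = [(dist(nodes[0], v), nodes[0], v) for v in nodes[1:]]
    heapq.heapify(heap)
    while heap and len(visited) < len(nodes):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue                          # stale edge, skip it
        visited.add(v)
        edges.append((u, v, w))
        for x in nodes:                       # grow the frontier from v
            if x not in visited:
                heapq.heappush(heap, (dist(v, x), v, x))
    return edges

pts = [(0, 0), (2, 1), (5, 0), (1, 4)]        # assumed cluster heads
d = lambda a, b: ((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5
print(prim_tree_edges(pts, d))
```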
Procedia PDF Downloads 497
736 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies
Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong
Abstract:
To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes, and travel characteristics on the choice of travel mode, and identify the significant factors. Suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. This paper selects Futian Street in Futian District, Shenzhen City, for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance, and trip frequency all have a significant influence on residents' choice of travel mode. Based on the above results, two policy improvement suggestions, aimed at reducing public transport and non-motorized travel times, are put forward, and the policy effect is evaluated. Before this evaluation, the prediction performance of the MNL, SVM, and MLP models was assessed. After parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, with the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results showed that after the implementation of the policy, the proportion of public transportation in plan 1 and plan 2 increased by 14.04% and 9.86%, respectively, while the proportion of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased markedly, while the proportion of public transport trips increased. The measures can therefore be considered to have a positive effect on promoting green trips and improving the satisfaction of urban residents, and can provide a reference for relevant departments formulating transportation policies.
Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation
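A minimal sketch of the MNL stage on synthetic stand-in data (the real study used survey attributes such as gender, age, and income):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the MNL stage: a multinomial logit over traveller attributes.
# Synthetic stand-in data; columns = [age, income, cars_owned, distance].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 3, size=200)   # 0=public transit, 1=car, 2=non-motorized

# With the default lbfgs solver this fits a multinomial (softmax) model
mnl = LogisticRegression(max_iter=1000).fit(X, y)
print(mnl.predict_proba(X[:1]))    # mode-choice probabilities for one traveller
```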
Procedia PDF Downloads 138
734 Optimization of Sucrose Concentration, pH Level and Inoculum Size for Callus Proliferation and Anti-Bacterial Potential of Stevia rebaudiana Bertoni
Authors: Inayat Ur Rahman Arshad
Abstract:
Background: Stevia rebaudiana B. is a shrubby perennial herb of the Asteraceae family that possesses the unique ability to accumulate non-caloric sweet steviol glycosides (SGs). Purpose: The purpose of the study is to optimize sugar concentration, pH level, and inoculum size for inducing callus with optimum growth and efficient antibacterial potential. Method: Three different experiments were conducted in which callus explants of four different sizes, taken from three-month-old already established callus of Stevia rebaudiana, were inoculated on Murashige and Skoog (MS) basal medium supplemented with five different sucrose concentrations and pH adjusted to four different levels. Results: Maximum callus induction of 100, 87.5, and 85.33% resulted in the medium supplemented with 30 g/l sucrose, pH maintained at 5.5, and inoculated with 1.25 g inoculum, respectively. Similarly, the highest fresh weights of 65.00, 75.50, and 50.53 g/l were noted in a medium fortified with 40 g/l sucrose, inoculated with 1.25 g inoculum, and at the 6.0 pH level, respectively. However, the callus developed in a medium containing 50 g/l sucrose was found to be highly antibacterially potent, with 27.3 and 26.5 mm inhibition zones against P. vulgaris and B. subtilis, respectively. Similarly, the callus grown on a medium inoculated with 1.00 g inoculum resulted in maximum antibacterial potential against S. aureus and P. vulgaris, with 25 and 23.72 mm inhibition zones, respectively. In the case of pH levels, the medium maintained at pH 6.5 showed maximum antibacterial activity against P. vulgaris, B. subtilis, and E. coli, with 27.9, 25, and 23.72 mm, respectively. The ethyl acetate extract of Stevia callus and leaves did not show antibacterial potential against Xanthomonas campestris and Clavibacter michiganensis. In the entire experiment, the standard antibacterial agent streptomycin showed higher inhibition zones than any of the callus extracts; the pure dimethyl sulfoxide (DMSO), however, caused no inhibitory zone against any bacteria. Conclusion: From these findings, it is concluded that among the various levels, sucrose at 40 g L⁻¹, pH 6.0, and an inoculum of 0.75 g were found best for most of the growth and quality attributes, including fresh weight, dry weight, and antibacterial activities, and can therefore be recommended for callus proliferation and antibacterial potential of Stevia rebaudiana.
Keywords: Stevia rebaudiana, steviol glycosides, callus, Xanthomonas campestris
Procedia PDF Downloads 82
733 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, and mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models still mainly target specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane (the normalized momentum space). A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and [-t, t] shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which manifest the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.
Keywords: Blaschke functions, Lorentz transformation, complex variables, continuous, discrete, canonical, classification
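The extended Blaschke functions are the authors' own construction and are not reproduced here; as a generic illustration only, iterating a single classical Blaschke factor on the unit disc (which the extended construction generalizes) looks like this, with the parameter a standing in for the normalized momentum:

```python
import numpy as np

def blaschke(z, a):
    """Classical Blaschke factor B(z) = (z - a)/(1 - conj(a) z),
    a Mobius map of the unit disc onto itself (|a| < 1)."""
    return (z - a) / (1 - np.conj(a) * z)

a = 0.6 + 0.2j          # assumed parameter inside the unit disc
z = 0.1 + 0.1j
orbit = []
for _ in range(500):    # simple fixed-point iteration of the map
    z = blaschke(z, a)
    orbit.append(z)
orbit = np.array(orbit)
# The real and imaginary spreads of the orbit hint at the sigma + i[-t, t]
# form of the solution sets the abstract describes.
print(orbit.real.mean(), orbit.imag.min(), orbit.imag.max())
```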
Procedia PDF Downloads 309
732 Predictions for the Anisotropy in Thermal Conductivity in Polymers Subjected to Model Flows by Combination of the eXtended Pom-Pom Model and the Stress-Thermal Rule
Authors: David Nieto Simavilla, Wilco M. H. Verbeeten
Abstract:
The viscoelastic behavior of polymeric flows under isothermal conditions has been extensively researched. However, most processing of polymeric materials occurs under non-isothermal conditions, and understanding the linkage between the thermo-physical properties and the process state variables remains a challenge. Furthermore, the cost and energy required to manufacture, recycle, and dispose of polymers are strongly affected by the thermo-physical properties and their dependence on state variables such as temperature and stress. Experiments show that thermal conductivity in flowing polymers is anisotropic (i.e., direction dependent). This phenomenon has previously been omitted in the study and simulation of industrially relevant flows. Our work combines experimental evidence of a universal relationship between the thermal conductivity and stress tensors (i.e., the stress-thermal rule) with differential constitutive equations for the viscoelastic behavior of polymers to provide predictions for the anisotropy in thermal conductivity in uniaxial, planar, equibiaxial, and shear flow in commercial polymers. A particular focus is placed on the eXtended Pom-Pom model, which is able to capture the non-linear behavior in both shear and elongation flows. The predictions provided by this approach are amenable to implementation in finite element packages, since viscoelastic and thermal behavior can be described by a single equation. Our results include predictions for flow-induced anisotropy in thermal conductivity for low- and high-density polyethylene, as well as confirmation of our method through comparison with a number of thermoplastic systems for which measurements of anisotropy in thermal conductivity are available. Remarkably, this approach allows for universal predictions of anisotropy in thermal conductivity that can be used in simulations of complex flows in which only the most fundamental rheological behavior of the material has been previously characterized (i.e., there is no need for additional adjusting parameters other than those in the constitutive model). Accounting for polymers' anisotropy in thermal conductivity in industrially relevant flows benefits the optimization of manufacturing processes as well as the mechanical and thermal performance of finished plastic products during use.
Keywords: anisotropy, differential constitutive models, flow simulations in polymers, thermal conductivity
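A hedged sketch of the stress-thermal rule the abstract builds on: the conductivity tensor deviates from isotropy in proportion to the deviatoric extra-stress tensor. The coefficient and stress values below are assumed, and the true proportionality constant is material-specific:

```python
import numpy as np

# Stress-thermal rule sketch: k = k_iso*I + Ct*dev(tau), with Ct the
# stress-thermal coefficient. All numerical values are illustrative.
k_iso, Ct = 0.35, 3.0e-8          # W/(m K), Pa^-1 (assumed)
tau = np.array([[2.0e5, 5.0e4, 0.0],
                [5.0e4, 1.0e5, 0.0],
                [0.0,   0.0,   5.0e4]])   # extra stress in shear flow, Pa
dev = tau - np.trace(tau) / 3 * np.eye(3)  # deviatoric part of the stress
k = k_iso * np.eye(3) + Ct * dev
print(np.linalg.eigvalsh(k))       # principal conductivities show anisotropy
```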
Procedia PDF Downloads 182
731 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing the temporal impairment known as frame freezing. Frame freezing can arise from transmission errors as well as hardware errors and results in the loss of video frames on the receiving side of a transmission system. Our subjective tests cover videos that contain a single freezing event as well as videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to provide a comparison of the available no-reference (NR) objective algorithms. Finally, we have shown the performance of the no-reference algorithms used for the objective evaluation of the videos and suggested the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve to validate objective algorithms.
Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)
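A minimal sketch of how a frame-freezing event of the kind studied above can be flagged in a decoded clip; the difference threshold is an assumption:

```python
import numpy as np

def freeze_events(frames, tol=1e-3):
    """Flag frame-freeze events: consecutive frames whose mean absolute
    difference falls below tol are treated as one repeated-frame run."""
    events, frozen = [], False
    for i in range(1, len(frames)):
        mad = np.abs(frames[i] - frames[i - 1]).mean()
        if mad < tol and not frozen:
            events.append(i)        # start of a new freezing event
            frozen = True
        elif mad >= tol:
            frozen = False
    return events

# Synthetic clip: frames 5-8 repeat frame 4 (one freezing event)
rng = np.random.default_rng(1)
clip = [rng.random((48, 64)) for _ in range(12)]
for i in range(5, 9):
    clip[i] = clip[4]
print(freeze_events(clip))          # -> [5]
```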
Procedia PDF Downloads 601
730 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima
Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez
Abstract:
Persistency, long-term memory, and randomness are intrinsic properties of time series of earthquakes. The Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This method is a simple and elegant analysis that determines the range of variation of one natural property (in this case, the seismic energy released) in a time interval. Despite the simplicity, there is complexity inherent in the property measured: the cumulative curve of the energy released in time has the well-known fractal geometry of a devil's staircase. This geometry is used for determining the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range so obtained obeys a power law in time, and the exponent is the Hurst exponent. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for compiling the RS-Analysis for daily time series of earthquakes. A completeness time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to complex seismic crises in which different earthquakes occur in clusters within a short period. The Hurst exponent has therefore been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, in which at least five medium-sized earthquakes were triggered. According to the Hurst exponent values obtained for each cluster, different mechanical origins can be detected, corroborated by the focal mechanisms calculated by the official institutions. Therefore, this type of analysis not only allows a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis
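A minimal sketch of the rescaled-range procedure described above, estimating the Hurst exponent from the slope of log(R/S) versus log(n); the dyadic window sizes are an implementation choice:

```python
import numpy as np

def hurst_rs(series):
    """Rescaled-range estimate of the Hurst exponent: fit log(R/S)
    against log(n) over dyadic window sizes n."""
    series = np.asarray(series, dtype=float)
    sizes = [2 ** k for k in range(3, int(np.log2(len(series))))]
    logs_n, logs_rs = [], []
    for n in sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())     # cumulative departure curve
            r, s = dev.max() - dev.min(), w.std()
            if s > 0:
                rs_vals.append(r / s)         # rescaled range of the window
        if rs_vals:
            logs_n.append(np.log(n))
            logs_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(logs_n, logs_rs, 1)[0]  # slope = Hurst exponent

# Daily released-energy proxy; white noise should give H close to 0.5
energy = np.random.default_rng(2).random(1024)
print(hurst_rs(energy))
```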
Procedia PDF Downloads 321
729 Predicting and Optimizing the Mechanical Behavior of a Flax Reinforced Composite
Authors: Georgios Koronis, Arlindo Silva
Abstract:
This study seeks to understand the mechanical behavior of a natural fiber reinforced composite (epoxy/flax) in more depth, utilizing both experimental and numerical methods. We attempt to identify relationships between the design parameters and the product performance, understand the effect of noise factors, and reduce process variations. Optimization of the mechanical performance of manufactured goods has recently been implemented in numerous studies of green composites. However, these studies are limited and have principally explored mass-production processes. We expect here to discover knowledge about composite manufacturing that can be used to design artifacts that are produced in low batches and tailored to niche markets. The goal is to reach greater consistency in the performance and to further understand which factors play significant roles in obtaining the best mechanical performance. A prediction of the response function of the process (under various operating conditions) is modeled by design of experiments (DoE). Normally, a full factorial designed experiment is required, consisting of all possible combinations of levels for all factors; an analytical assessment is possible, though, with just a fraction of the full factorial experiment. The research approach comprises evaluating the influence these variables have and how they affect the composite's mechanical behavior. The coupons will be fabricated by the vacuum infusion process, defined by three process parameters: flow rate, injection point position, and fiber treatment. Each process parameter is studied at two levels, along with their interactions. Moreover, the tensile and flexural properties will be obtained through mechanical testing to discover the key process parameters. In this setting, an experimental phase will follow in which a number of fabricated coupons will be tested to allow validation of the design of experiments setup. Finally, the results are validated by performing the optimum set in a final set of experiments, as indicated by the DoE. It is expected that, given good agreement between the predicted and verification experimental values, the optimal processing parameters of the biocomposite lamina will be effectively determined.
Keywords: design of experiments, flax fabrics, mechanical performance, natural fiber reinforced composites
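A worked sketch of effect estimation for the 2-level, 3-factor design described above; the coded design matrix is standard, but the eight response values are placeholders:

```python
import numpy as np

# 2^3 factorial sketch: flow rate (A), injection point (B), fibre
# treatment (C), coded -1/+1. The eight tensile-strength responses
# are made-up placeholders, not measured data.
A = np.array([-1, 1, -1, 1, -1, 1, -1, 1])
B = np.array([-1, -1, 1, 1, -1, -1, 1, 1])
C = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
y = np.array([52, 58, 55, 63, 50, 57, 54, 66], dtype=float)  # MPa, assumed

for name, col in [("A", A), ("B", B), ("C", C),
                  ("AB", A*B), ("AC", A*C), ("BC", B*C), ("ABC", A*B*C)]:
    # effect = mean response at +1 minus mean response at -1
    print(name, (col @ y) / 4)
```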
Procedia PDF Downloads 204
728 Context-Aware Point-Of-Interests Recommender Systems Using Integrated Sentiment and Network Analysis
Authors: Ho Yeon Park, Kyoung-Jae Kim
Abstract:
Recently, users' interest in location-based social network services has been increasing with the advances of the social web and location-based technologies. It becomes easier to recommend preferred items if we can use users' preferences, context, and social network information simultaneously. In this study, we propose context-aware POI (point-of-interest) recommender systems using location-based network analysis and sentiment analysis, which consider context, social network information, and implicit user preference scores. We propose a context-aware POI recommendation system consisting of three sub-modules and an integrated recommendation system combining them. First, we develop a recommendation module based on network analysis that combines social network analysis and cluster-indexing collaborative filtering. Next, we develop a recommendation module using social singular value decomposition (SVD) and implicit SVD, which can estimate preference scores based on the frequency of a user's POI visits while reflecting implicit feedback in collaborative filtering. Finally, we propose a recommendation module using opinion mining and emotional analysis on data such as reviews of POIs extracted from location-based social networks. We then develop an integration algorithm that combines the results of the three recommendation modules proposed in this research. Experimental results show the usefulness of the proposed model in terms of recommendation performance.
Keywords: sentiment analysis, network analysis, recommender systems, point-of-interests, business analytics
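A minimal sketch of the SVD-based preference-estimation idea described above, on an assumed user-by-POI visit-frequency matrix:

```python
import numpy as np

# Sketch of the SVD stage: low-rank reconstruction of a user x POI
# visit-frequency matrix to score unvisited POIs (implicit feedback).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)    # assumed visit counts

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                        # number of latent factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # rank-k preference estimates
print(np.round(R_hat, 2))                    # scores for the zero entries
```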
Procedia PDF Downloads 250
727 Optimizing Foaming Agents by Air Compression to Unload a Liquid Loaded Gas Well
Authors: Mhenga Agneta, Li Zhaomin, Zhang Chao
Abstract:
When the gas velocity is high enough, gas can entrain fluid and carry it to the surface; but as time passes, the velocity drops to a critical point where fluids start to hold up in the tubing and cause liquid loading, which prevents gas production and may lead to the death of the well. Foam injection is widely used as one of the methods to unload liquid. Since wells have different characteristics, it is not guaranteed that foam can be applied in all of them with successful results. This research presents a technology to optimize the efficiency of foam in unloading liquid by air compression. Two methods are used to explain the optimization: (i) mathematical formulas are used to show how density and critical velocity can be minimized when air is compressed into the foaming agents, and then how the relationship between flow rate and pressure increase boosts the bottom-hole pressure and increases the velocity to lift liquid to the surface; (ii) experiments test foam carryover capacity and stability as a function of time and surfactant concentration, whereby three surfactants, anionic sodium dodecyl sulfate (SDS), nonionic Triton X-100, and cationic hexadecyltrimethylammonium bromide (HDTAB), were probed. The best foaming agents were injected to lift the liquid loaded in a purpose-built vertical well model of 2.5 cm diameter, 390 cm high steel tubing covered by a transparent glass casing of 5 cm diameter and 450 cm height. The results show that, after injecting foaming agents, liquid unloading was 75% successful; moreover, the efficiency of the foaming agents in unloading liquid increased by 10% with the addition of compressed air at a ratio of 1:1. Measured and calculated values were compared and differed by about ±3%, which is a good figure. The successful application of the technology indicates that engineers and stakeholders could bring water-flooded gas wells back to production with optimized results by first paying attention to the type of surfactants (foaming agents) used, the concentration of surfactants, and the flow rates of the injected surfactants, and then compressing air into the foaming agents at a proper ratio.
Keywords: air compression, foaming agents, gas well, liquid loading
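The paper's formulas are not reproduced above; one common form of the droplet-model critical velocity (a Turner-type correlation; coefficients vary between versions) illustrates why lowering density and surface tension with foam lowers the unloading requirement:

```python
def turner_velocity(sigma, rho_l, rho_g):
    """Turner-type critical unloading velocity (field units: sigma in
    dyn/cm, densities in lbm/ft^3, result in ft/s). One common form of
    the droplet model; the 1.593 coefficient is correlation-specific."""
    return 1.593 * sigma**0.25 * (rho_l - rho_g)**0.25 / rho_g**0.5

# Foaming lowers the effective liquid density and surface tension,
# which lowers the gas velocity needed to keep lifting liquid.
print(turner_velocity(60, 62.4, 0.2))   # plain water column (illustrative)
print(turner_velocity(30, 40.0, 0.2))   # aerated foam column (assumed values)
```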
Procedia PDF Downloads 135
726 Phase Optimized Ternary Alloy Material for Gas Turbines
Authors: Mayandi Ramanathan
Abstract:
Gas turbine blades see the most aggressive thermal stress conditions within the engine, due to turbine entry temperatures in the range of 1500 to 1600°C, yet in synchronization with other functional components they must readily deliver efficient performance whilst incurring minimal overhaul and repair costs during a service life of up to 5 million flying miles. The blades rotate at very high rates and remove a significant amount of thermal power from the gas stream. At high temperatures, the major component failure mechanism is creep: over its service life under high temperatures and loads, the blade will deform, lengthen, and rupture. High strength and stiffness in the longitudinal direction up to elevated service temperatures are certainly the most needed properties of turbine blades. The proposed advanced Ti alloy material needs a process that provides strategic orientation of metallic ordering, uniformity in composition, and high metallic strength. A 25% Ta/(Al+Ta) ratio ensures TaAl3 phase formation, whereas a 51% Al/(Al+Ti) ratio ensures the formation of mixed α-Ti3Al and γ-TiAl phases, and the three-phase combination ensures minimal Al excess (~1.4% Al excess), unlike Ti-47Al-2Cr-2Nb, which has a significant Al excess (~5%) that could affect the service life of turbine blades. This presentation summarizes the additive manufacturing and heat treatment process conditions used to fabricate turbine blades with a Ti-43Al matrix alloyed with an optimized amount of refractory Ta metal. A summary of thermo-mechanical test results, such as high-temperature tensile strength, creep strain rate, thermal expansion coefficient, and fracture toughness, will be presented. The improvement in service temperature of the turbine blades and the dependence of corrosion resistance on the coercivity of the alloy material will be reported. Phase compositions will be quantified, and a summary of their correlation with the creep strain rate will be presented.
Keywords: gas turbine, aerospace, specific strength, creep, high temperature materials, alloys, phase optimization
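The two stated phase-ratio constraints, together with normalization, pin down a nominal atomic composition; a small worked computation (our own arithmetic, not from the source) is:

```python
import numpy as np

# Solve the stated ratios with normalization for the atomic fractions
# (x_Ti, x_Al, x_Ta) of the ternary alloy:
#   Ta/(Al+Ta) = 0.25,  Al/(Al+Ti) = 0.51,  x_Ti + x_Al + x_Ta = 1
A = np.array([[0.0,  -0.25, 0.75],   # Ta - 0.25*(Al + Ta) = 0
              [-0.51, 0.49, 0.0],    # Al - 0.51*(Al + Ti) = 0
              [1.0,   1.0,  1.0]])   # normalization
b = np.array([0.0, 0.0, 1.0])
x_ti, x_al, x_ta = np.linalg.solve(A, b)
# Al comes out near 43.6 at.%, consistent with the stated Ti-43Al base
print(f"Ti {x_ti:.1%}, Al {x_al:.1%}, Ta {x_ta:.1%}")
```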
Procedia PDF Downloads 181
725 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models
Authors: Azadeh Jafari, Robert G. Owens
Abstract:
In this study, a geometrical multiscale approach, that is, coupling together the 2-D Navier-Stokes equations, constitutive equations, and 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study, we introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of our new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been done on the sensitivity of the numerical scheme to the initial conditions, elasticity, and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.
Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes 0-D lumped parameter modeling, computational fluid dynamics
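A hedged sketch of the 0-D side of such a coupling, using a two-element windkessel as the lumped parameter model; the parameter values and flow waveform are assumed:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-element windkessel, dP/dt = (Q(t) - P/Rp) / C: it receives the
# flow rate Q integrated over the 2-D outflow boundary and returns the
# pressure used as the outflow boundary condition. Values are assumed.
Rp, C = 1.0e8, 1.0e-8              # peripheral resistance, compliance (SI)
Q = lambda t: 1.0e-5 * (1 + np.sin(2 * np.pi * t))   # pulsatile outflow, m^3/s

sol = solve_ivp(lambda t, p: (Q(t) - p / Rp) / C, (0, 5), [0.0],
                max_step=1e-2)
print(sol.y[0, -1])   # boundary pressure fed back to the 2-D flow solver
```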
Procedia PDF Downloads 361
724 Assessment of Land Use Land Cover Change-Induced Climatic Effects
Authors: Mahesh K. Jat, Ankan Jana, Mahender Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to the modelling of Earth surface-atmosphere interaction through techniques like surface energy balance (SEB), have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, i.e., the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contribution of LULC change to changes in these climatic variables. Results indicate a noticeable impact of LULC changes on the climatic variables, aligned with the respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year; the maximum and minimum temperatures decrease and increase at 0.007 °C/year and 0.02 °C/year, respectively; and the average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux correlate positively with precipitation and temperature, respectively. Variation in the surface heat fluxes influences the climate parameters and is an adequate driver of climate change, so SEB modelling is helpful for understanding LULC change and its impact on climate.
Keywords: LULC, sensible heat flux, latent heat flux, SEBAL, Landsat, precipitation, temperature
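A minimal sketch of the SEBAL closure step, in which latent heat flux is obtained as the residual of the surface energy balance; the pixel values and the G/Rn fraction are assumed:

```python
import numpy as np

# SEBAL residual step: with net radiation Rn, soil heat flux G, and
# sensible heat flux H estimated per pixel from the Landsat bands,
# latent heat flux closes the balance, LE = Rn - G - H.
Rn = np.array([620.0, 540.0])   # W/m^2, assumed pixel values
G  = 0.15 * Rn                  # soil heat flux as a fraction of Rn (assumed)
H  = np.array([210.0, 260.0])   # W/m^2 from the aerodynamic routine

LE = Rn - G - H                 # latent heat flux residual
EF = LE / (Rn - G)              # evaporative fraction per pixel
print(LE, EF)
```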
Procedia PDF Downloads 116
723 Environmental Risk Assessment of Mechanization Waste Collection Scheme in Tehran
Authors: Amin Padash, Javad Kazem Zadeh Khoiy, Hossein Vahidi
Abstract:
Purpose: The mechanization system for urban services was implemented in Tehran City in 2004 to promote the collection of domestic waste. In 2010, the mechanized urban-services contractors and the recycling contractors were integrated in order to achieve the objectives of the urban services mechanization project: qualitative promotion and improvement of the urban living environment, sustainable development, optimization of the collection systems for recyclable solid waste and other dry and non-organic waste, conformity with modern urban management methods, and better and more correct fulfillment of waste separation, considering the success of dry-waste mechanization plans in most modern countries. The aim of this research is the environmental risk assessment of the mechanized waste collection scheme in Tehran. Case Study: Tehran, the capital of Iran, with a population of 8.2 million people, occupies a land expanse of 730 km², which is 4% of the total area of the country. Tehran generated 2,788,912 tons (7,641 tons/day) of waste in 2008. The hospital waste generation rate in Tehran reaches 83 tons/day. Almost 87% of the total waste was disposed of by placing it in a landfill located in the Kahrizak region. This large amount of waste poses a significant challenge for the city. Methodology: To conduct the study, the methodology proposed in the standard Mil-St-88213 is used. This is an efficient method for examining the situation with respect to the various processes and actions involved. The method is based on a military standard, developed to investigate and evaluate options, locate and identify strengths and weaknesses, and decide on the best strategy. Findings and Conclusion: In this study, the current status of the mechanized waste collection system and its possible effects on the environment were identified through a survey and the Mil-St-88213 assessment methodology, and the best plan for action and mitigation of environmental risk was then proposed as an Environmental Management Plan (EMP).
Keywords: environmental risk assessment, mechanization waste collection scheme, Mil-St-88213
Procedia PDF Downloads 439
722 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical products. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models would inherently not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
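A minimal sketch of the DBSCAN outlier-removal step described above, on synthetic stand-in data; eps and min_samples are assumed tuning values:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# DBSCAN labels sparse experimental points as noise (-1), which are
# dropped before training the networks. Synthetic stand-in rows for
# features such as [temperature, CH4/CO2 ratio, H2/CO ratio].
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 3)),     # dense "valid" cluster
               rng.uniform(-8, 8, (8, 3))])    # scattered outliers

scaled = StandardScaler().fit_transform(X)
labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(scaled)
clean = X[labels != -1]
print(len(X) - len(clean), "points removed as outliers")
```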
Procedia PDF Downloads 86
721 Lead-Free Inorganic Cesium Tin-Germanium Triiodide Perovskites for Photovoltaic Application
Authors: Seyedeh Mozhgan Seyed-Talebi, Javad Beheshtian
Abstract:
The toxicity of lead associated with the lifecycle of perovskite solar cells (PSCs) is a serious concern which may prove to be a major hurdle on the path toward their commercialization. The currently proposed lead-free PSCs, including those based on the low-toxicity cations Ag(I), Bi(III), Sb(III), Ti(IV), Ge(II), and Sn(II), are still plagued by the critical issues of poor stability and low efficiency, mainly because of their chemical stability. In the present research, the utilization of all-inorganic CsSnGeI3-based materials offers the advantages of enhancing the device's resistance to degradation, reducing the cost of the cells, and minimizing carrier recombination. The presence of the inorganic halide perovskite improves the photovoltaic parameters of the PSCs via improved surface coverage and stability. The inverted structure of the devices, simulated using a 1D simulator, the solar cell capacitance simulator (SCAPS) version 3308, involves a TCO/HTL/perovskite/ETL/Au contact stack. PEDOT:PSS, PCBM, and CsSnGeI3 are used as the hole transporting layer (HTL), electron transporting layer (ETL), and perovskite absorber layer, respectively, in the inverted structure for the first time. Holes are injected from the highly stable and air-tolerant Sn0.5Ge0.5I3 perovskite composition into the HTL, and electrons from the perovskite into the ETL. Simulation results revealed a strong dependence of the power conversion efficiency (PCE) on the thickness and defect density of the perovskite layer. Here, the effect of an increase in operating temperature from 300 K to 400 K on the performance of the CsSnGeI3-based perovskite devices is investigated. A comparison between the simulated CsSnGeI3-based PSCs and similar real, tested devices with spiro-OMeTAD as the HTL showed that the extraction of carriers at the interfaces of the perovskite absorber depends on the energy level mismatches between the perovskite and the HTL/ETL. We believe that the optimization results reported here represent a critical avenue for fabricating stable, low-cost, efficient, and eco-friendly all-inorganic Cs-Sn-Ge-based lead-free perovskite devices.
Keywords: hole transporting layer, lead-free, perovskite solar cell, SCAPS-1D, Sn-Ge based
Procedia PDF Downloads 155
720 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis
Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei
Abstract:
Single-cell RNA sequencing (scRNA-seq) is one of the most effective tools for studying the transcriptomics of biological processes. Recently, cell similarity has typically been measured by Euclidean distance or its derivatives. However, the process of scRNA-seq follows a multivariate Bernoulli event model, so we hypothesize that valuing the divergence between cells with relative entropy would be more efficient than Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance, and relative entropy using scRNA-seq data from the early, medial, and late stages of limb development generated in our lab. Relative entropy outperformed the other methods according to the cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by changing its definition of divergence between cells from Euclidean distance to Kullback-Leibler divergence. Results showed that KL-SNE was more effective than t-SNE at dissecting cell heterogeneity, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together with KL-SNE but not with t-SNE. Surprisingly, early-stage cells were surrounded by medial-stage cells in the KL-SNE embedding, while medial-stage cells neighbored late-stage cells with t-SNE. This result parallels the heatmap, which showed that medial-stage cells were more heterogeneous than cells in other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which was also verified with the analysis of scRNA-seq data from another study on human embryo development. Therefore, it is also an effective way to convert a non-Gaussian distribution to a Gaussian distribution and facilitate subsequent statistical processes. Thus, relative entropy is potentially a better way to determine the divergence of cells in scRNA-seq data analysis.
Keywords: single cell RNA sequence, similarity measurement, relative entropy, KL-SNE, t-SNE
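A minimal sketch of the relative-entropy (Kullback-Leibler) divergence between two cells' normalized expression profiles, the quantity KL-SNE substitutes for Euclidean distance; the gene counts are illustrative:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D(p||q) between two cells' normalized expression
    profiles; asymmetric, so D(p||q) != D(q||p) in general."""
    p = np.asarray(p, float) + eps     # pseudo-count avoids log(0)
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()    # normalize to probability vectors
    return float(np.sum(p * np.log(p / q)))

# Counts over four genes for three cells (illustrative): similar profiles
# give a small divergence, divergent profiles a large one
cell_a = [120, 30, 5, 45]
cell_b = [110, 35, 8, 40]
cell_c = [5, 10, 150, 20]
print(kl_divergence(cell_a, cell_b), kl_divergence(cell_a, cell_c))
```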
Procedia PDF Downloads 340719 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of this study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The parallel sorting programme outputs a sorted array of random numbers. When data are processed in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When requests are processed asynchronously, a completion message is displayed for each task after a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and a comparison of these methods by their characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load assessments were conducted, demonstrating how the system responds to an increase in data volume or in the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in the models, which improved the architecture and implementation of the parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming.
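The paper's own code is not reproduced in the abstract; a minimal sketch of two of the described demonstrations — parallel sorting of random numbers and asynchronous task processing with completion messages — using the standard java.util.concurrent toolkit (the class name, sizes, and delays below are assumptions) might look like this:

```java
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.CompletableFuture;

// Minimal sketch (not the paper's code): parallel sorting of random numbers
// and asynchronous task processing with java.util.concurrent.
public class ParallelDemo {
    public static void main(String[] args) {
        // Parallel sorting: Arrays.parallelSort splits the array into
        // sub-ranges and merges them on the common ForkJoinPool.
        double[] data = new Random(42).doubles(1_000_000).toArray();
        long start = System.nanoTime();
        Arrays.parallelSort(data);
        System.out.printf("sorted %d values in %.1f ms%n",
                data.length, (System.nanoTime() - start) / 1e6);

        // Asynchronous request processing: each task completes
        // independently and reports after a slight (simulated) delay.
        CompletableFuture<?>[] tasks = new CompletableFuture<?>[3];
        for (int i = 0; i < tasks.length; i++) {
            final int id = i;
            tasks[i] = CompletableFuture.runAsync(() -> {
                try { Thread.sleep(100L * (id + 1)); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                System.out.println("task " + id + " completed");
            });
        }
        CompletableFuture.allOf(tasks).join(); // wait for all tasks
    }
}
```

Arrays.parallelSort illustrates the division-into-subtasks strategy mentioned above, while CompletableFuture illustrates non-blocking, asynchronous completion.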
Procedia PDF Downloads 12718 In-silico DFT Study, Molecular Docking, ADMET Predictions, and DMS of Isoxazolidine and Isoxazoline Analogs with Anticancer Properties
Authors: Moulay Driss Mellaoui, Khadija Zaki, Khalid Abbiche, Abdallah Imjjad, Rachid Boutiddar, Abdelouahid Sbai, Aaziz Jmiai, Souad El Issami, Al Mokhtar Lamsabhi, Hanane Zejli
Abstract:
This study presents a comprehensive analysis of six isoxazolidine and isoxazoline derivatives, leveraging a multifaceted approach that combines Density Functional Theory (DFT), AdmetSAR analysis, and molecular docking simulations to explore their electronic, pharmacokinetic, and anticancer properties. Through DFT analysis, using the B3LYP-D3BJ functional and the 6-311++G(d,p) basis set, we optimized molecular geometries, analyzed vibrational frequencies, and mapped Molecular Electrostatic Potentials (MEP), identifying key sites for electrophilic attack and hydrogen bonding. Frontier Molecular Orbital (FMO) analysis and Density of States (DOS) plots revealed varying stability levels among the compounds, with 1b, 2b, and 3b showing slightly higher stability. Chemical potential assessments indicated differences in binding affinities, suggesting stronger potential interactions for compounds 1b and 2b. AdmetSAR analysis predicted favorable human intestinal absorption (HIA) rates for all compounds, highlighting compound 3b's superior oral effectiveness. Molecular docking and molecular dynamics simulations were conducted on the isoxazolidine and 4-isoxazoline derivatives targeting the EGFR receptor (PDB: 1JU6). The docking simulations confirmed the high affinity of these compounds for the target protein 1JU6; among the isoxazolidine derivatives, compound 3b exhibited the most favorable binding energy, with a G-score of -8.50 kcal/mol. Molecular dynamics simulations over 100 nanoseconds demonstrated the stability and potential of compound 3b as a superior candidate for anticancer applications, further supported by structural analyses including RMSD, RMSF, Rg, and SASA values. This study underscores the promising role of compound 3b in anticancer treatments, providing a solid foundation for future drug development and optimization efforts.Keywords: isoxazolines, DFT, molecular docking, molecular dynamics, ADMET, drugs.
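For reference, the chemical potential and stability assessments mentioned above are conventionally derived from the FMO energies. The abstract does not state its working equations, but the standard conceptual-DFT (Koopmans-type) approximations are:

```latex
\mu \approx \frac{E_{\mathrm{HOMO}} + E_{\mathrm{LUMO}}}{2}, \qquad
\eta \approx \frac{E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}}{2}, \qquad
\omega = \frac{\mu^{2}}{2\eta}
```

where μ is the chemical potential, η the chemical hardness (a larger HOMO-LUMO gap implying higher stability, as noted for 1b, 2b, and 3b), and ω the global electrophilicity index.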
Procedia PDF Downloads 46717 Use of Multivariate Statistical Techniques for Water Quality Monitoring Network Assessment, Case of Study: Jequetepeque River Basin
Authors: Jose Flores, Nadia Gamboa
Abstract:
Proper water quality management requires the establishment of a monitoring network. Evaluation of the efficiency of water quality monitoring networks is therefore needed to ensure high-quality data collection for critical chemical quality parameters. Unfortunately, in some Latin American countries, water quality monitoring programs are not sustainable in terms of recording historical data or covering environmentally representative sites, wasting time, money, and valuable information. In this study, multivariate statistical techniques, such as principal components analysis (PCA) and hierarchical cluster analysis (HCA), are applied to identify the most significant monitoring sites as well as the critical water quality parameters in the monitoring network of the Jequetepeque River basin in northern Peru. The Jequetepeque River basin, like others in Peru, shows socio-environmental conflicts due to the economic activities developed in the area. Water pollution by trace elements in the upper part of the basin is mainly related to mining activity, while agricultural land loss due to salinization in the lower part of the basin is caused by the extensive use of groundwater. Since the 1980s, water quality in the basin has been assessed discontinuously by public and private organizations, and recently the National Water Authority has established permanent water quality networks in 45 basins in Peru. Although many countries use multivariate statistical techniques for assessing water quality monitoring networks, these instruments have never been applied for that purpose in Peru. For this reason, the main contribution of this study is to demonstrate that the application of multivariate statistical techniques can serve as an instrument for optimizing monitoring networks, using the fewest monitoring sites and the most significant water quality parameters, which would reduce cost concerns and improve water quality management in Peru. The main socio-economic activities developed in the basin and the principal stakeholders related to water management are also identified. Finally, water quality management programs are discussed in terms of their efficiency and sustainability.Keywords: PCA, HCA, Jequetepeque, multivariate statistical
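A minimal sketch of the PCA step — eigendecomposition of the covariance matrix of a sites-by-parameters data table, whose component variances and loadings point to the most significant sites and parameters — is given below, using the Apache Commons Math library. The class name and data values are hypothetical placeholders, not Jequetepeque measurements.

```java
import java.util.Arrays;
import org.apache.commons.math3.linear.EigenDecomposition;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.stat.correlation.Covariance;

// Minimal PCA sketch (not the study's code): eigendecomposition of the
// covariance matrix of a sites-by-parameters table of measurements.
public class PcaSketch {
    public static void main(String[] args) {
        // Rows: monitoring sites; columns: standardized water quality
        // parameters (e.g., pH, conductivity, a trace element).
        double[][] data = {
            { 0.2, -1.1,  0.5},
            {-0.8,  0.4, -0.3},
            { 1.3,  0.7,  1.0},
            {-0.7,  0.0, -1.2},
        };
        RealMatrix cov = new Covariance(data).getCovarianceMatrix();
        EigenDecomposition eig = new EigenDecomposition(cov);

        // Each eigenvalue's share of the total variance tells how much
        // its component explains; the eigenvector loadings flag which
        // parameters dominate that component.
        double[] lambda = eig.getRealEigenvalues();
        double total = Arrays.stream(lambda).sum();
        for (int i = 0; i < lambda.length; i++) {
            System.out.printf("component %d: %.1f%% of variance, loadings = %s%n",
                    i + 1, 100.0 * lambda[i] / total, eig.getEigenvector(i));
        }
    }
}
```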
Procedia PDF Downloads 355716 Malate Dehydrogenase Enabled ZnO Nanowires as an Optical Tool for Malic Acid Detection in Horticultural Products
Authors: Rana Tabassum, Ravi Kant, Banshi D. Gupta
Abstract:
Malic acid is an organic acid distributed in minute amounts in numerous horticultural products, where it contributes significantly to taste by balancing the sugar and acid fractions. An elevated concentration of malic acid is used as an indicator of fruit maturity. In addition, malic acid is a crucial constituent of several cosmetic and pharmaceutical products. An efficient detection and quantification protocol for malic acid is thus in high demand. In this study, we report a novel detection scheme for malic acid that synergistically combines fiber optic surface plasmon resonance (FOSPR) with the distinctive features of nanomaterials favorable for sensing applications. The design blueprint involves depositing an assembly of malate dehydrogenase enzyme entrapped in ZnO nanowires, forming the sensing layer over the silver-coated central unclad core region of an optical fiber. The formation and subsequent decomposition of the enzyme-analyte complex when the sensing layer is exposed to malic acid solutions of diverse concentrations modifies the dielectric function of the sensing layer, which is manifested as a shift in the resonance wavelength. Experimental variables such as the concentration of enzyme entrapped in the ZnO nanowires, the dip time of the probe for deposition of the sensing layer, and the working pH range of the sensing probe have been optimized through SPR measurements. The optimized sensing probe displays high sensitivity, a broad working range, and a low limit of detection, and has been successfully tested for malic acid determination in real fruit juice samples. The current work presents a novel perspective on malic acid determination, as the unique and cooperative combination of FOSPR and nanomaterials provides myriad advantages such as enhanced sensitivity, specificity, and compactness, together with the possibility of online monitoring and remote sensing.Keywords: surface plasmon resonance, optical fiber, sensor, malic acid
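The abstract does not reproduce its working equations, but the standard FOSPR resonance condition behind the wavelength-shift readout matches the propagation constant of the evanescent wave in the fiber core to that of the surface plasmon at the silver/sensing-layer interface:

```latex
\frac{2\pi}{\lambda_{\mathrm{res}}}\, n_{\mathrm{co}} \sin\theta
= \operatorname{Re}\!\left[\frac{2\pi}{\lambda_{\mathrm{res}}}
\sqrt{\frac{\varepsilon_{m}(\lambda_{\mathrm{res}})\, n_{s}^{2}}
{\varepsilon_{m}(\lambda_{\mathrm{res}}) + n_{s}^{2}}}\right],
\qquad
S_{n} = \frac{\delta \lambda_{\mathrm{res}}}{\delta n_{s}}
```

Here n_co is the fiber core index, θ the ray angle at the core-cladding interface, ε_m the (wavelength-dependent) dielectric function of the silver layer, and n_s the effective refractive index of the enzyme/ZnO sensing layer. Formation and decomposition of the enzyme-analyte complex changes n_s, shifting λ_res, and S_n is the resulting sensitivity.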
Procedia PDF Downloads 380