Search results for: efficient score function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11272

8992 A Genetic Algorithm Approach for Multi Constraint Team Orienteering Problem with Time Windows

Authors: Uyanga Sukhbaatar, Ahmed Lbath, Mendamar Majig

Abstract:

The Orienteering Problem (OP) is the best-known starting point for modeling the tourist trip design problem. As more of a tourist's interests and constraints are taken into account, the OP becomes increasingly complicated to solve. The Multi Constraint Team Orienteering Problem with Time Windows (MCTOPTW) is the latest extension of the OP, distinguished from other extensions by the additional associated constraints it includes. The goal of the MCTOPTW is to maximize the tourist's satisfaction score while violating none of these constraints. This paper presents a genetic algorithm approach to tackle the MCTOPTW. Our algorithm is tested on benchmark data from the literature, and its performance is compared with existing results.
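The abstract does not detail the authors' operators, but the general shape of a genetic algorithm for a score-maximization problem under feasibility constraints can be sketched as follows. All names and the truncation-selection, one-point-crossover, and single-bit-mutation choices below are illustrative, not the paper's method:

```python
import random

def genetic_search(candidates, score, feasible, pop_size=30, generations=200, seed=1):
    """Generic GA sketch: evolve subsets of candidate points of interest,
    keeping only feasible selections and maximizing the total score."""
    rng = random.Random(seed)
    n = len(candidates)

    def random_individual():
        # An individual is a bitmask over the candidate points.
        return [rng.random() < 0.5 for _ in range(n)]

    def fitness(ind):
        chosen = [c for c, keep in zip(candidates, ind) if keep]
        return score(chosen) if feasible(chosen) else float("-inf")

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)              # single-bit mutation
            child[i] = not child[i]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [c for c, keep in zip(candidates, best) if keep]
```

A real MCTOPTW solver would encode ordered routes per team member and check time windows and the extra constraints inside `feasible`; the subset encoding here only conveys the evolutionary loop.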

Keywords: multi constraint team orienteering problem with time windows, genetic algorithm, tour planning system

Procedia PDF Downloads 622
8991 Rapid and Efficient Removal of Lead from Water Using Chitosan/Magnetite Nanoparticles

Authors: Othman M. Hakami, Abdul Jabbar Al-Rajab

Abstract:

The occurrence of heavy metals in water resources has increased in recent years, albeit at low concentrations. Lead (Pb(II)) is among the most important inorganic pollutants in ground and surface water, and its efficient removal from water is of public and scientific concern. In this study, we developed a rapid and efficient method for removing lead from water using chitosan/magnetite nanoparticles. A simple and effective process was used to prepare chitosan/magnetite nanoparticles (CS/Mag NPs) with a suitable saturation magnetization value: the particles were strongly responsive to an external magnetic field, making separation from solution possible in less than 2 minutes using a permanent magnet, while total Fe in solution remained below the detection limit of ICP-OES (<0.19 mg L-1). The hydrodynamic particle size distribution increased from an average diameter of ~60 nm for Fe3O4 NPs to ~75 nm after chitosan coating. The feasibility of the prepared NPs for the adsorption and desorption of Pb(II) from water was then evaluated. The CS/Mag NPs showed high removal efficiency for Pb(II) uptake, with 90% of Pb(II) removed during the first 5 minutes and equilibrium reached in less than 10 minutes. The maximum adsorption capacity for Pb(II), reached at pH 6.0 and room temperature, was as high as 85.5 mg g-1 according to the Langmuir isotherm model. Desorption of adsorbed Pb from the CS/Mag NPs was evaluated using deionized water at pH values ranging from 1 to 7; this proved an effective eluent that did not destroy the NPs, which could subsequently be reused without any loss of activity in further adsorption tests. Overall, our results show the high efficiency of chitosan/magnetite nanoparticles for lead removal from water under controlled conditions; further studies should be carried out under real field conditions.
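The reported maximum capacity comes from fitting the Langmuir isotherm, q = q_max · K_L · C_e / (1 + K_L · C_e). A minimal sketch of that model, using the 85.5 mg g-1 capacity from the abstract and a placeholder affinity constant K_L (not reported in the abstract):

```python
def langmuir_uptake(c_eq, q_max=85.5, k_l=0.5):
    """Langmuir isotherm: equilibrium uptake q (mg/g) at residual
    concentration c_eq (mg/L). q_max is the 85.5 mg/g reported in the
    abstract; k_l (L/mg) is an assumed placeholder affinity constant."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)
```

As the residual concentration grows, the predicted uptake saturates at q_max, which is exactly what the fitted 85.5 mg g-1 ceiling expresses.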

Keywords: chitosan, magnetite, water, treatment

Procedia PDF Downloads 397
8990 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more often in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data, so linking multiple sources is essential to improve clustering performance. In practice, however, multi-source data is often heterogeneous, uncertain, and large, which is a major challenge for multi-source analysis. Ensemble learning is a versatile machine learning approach in which learners can work in parallel on big data, and clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble (FOMOCE) method. First, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments were performed on standard sample data sets, and the results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble and multi-source clustering methods.
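The FOMOCE pipeline itself is not spelled out in the abstract, but the core mechanism behind most clustering ensembles, accumulating evidence from base partitions into a co-association matrix, can be sketched in a few lines. The threshold-merge consensus step below is a deliberate simplification, not the paper's fuzzy multi-objective method:

```python
from itertools import combinations

def co_association(partitions):
    """Evidence accumulation: fraction of base clusterings in which
    each pair of items lands in the same cluster."""
    n = len(partitions[0])
    m = len(partitions)
    co = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                co[i][j] += 1.0 / m
                co[j][i] += 1.0 / m
    return co

def consensus_clusters(partitions, threshold=0.5):
    """Merge items whose co-association exceeds the threshold
    (single-link union-find over the co-association graph)."""
    co = co_association(partitions)
    n = len(co)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in combinations(range(n), 2):
        if co[i][j] > threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

Three base partitions that mostly agree will, for example, merge items that co-occur in a majority of them while keeping the disputed items apart.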

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 181
8989 Using Dynamic Bayesian Networks to Characterize and Predict Job Placement

Authors: Xupin Zhang, Maria Caterina Bramati, Ernest Fokoue

Abstract:

Understanding the career placement of university graduates is crucial both for the quality of education and for the ultimate satisfaction of students. In this research, we adapt the capabilities of dynamic Bayesian networks to characterize and predict students' job placement using data from various universities. We also provide elements of the estimation of an indicator (score) of the strength of the network. The research focuses on overall findings as well as on specific student groups, including international and STEM students, and on what their career paths suggest about changes that need to be made. The derived Bayesian network has the potential to be used as a tool for simulating students' career paths, ultimately helping universities in both academic advising and career counseling.
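To give a flavor of the time-slice idea (though far simpler than a full dynamic Bayesian network), the propagation of placement probabilities from one period to the next can be illustrated with a first-order Markov chain. The states and transition probabilities below are invented purely for illustration:

```python
def propagate(dist, transition, steps=1):
    """Propagate a probability distribution over career states through a
    first-order transition matrix -- the simplest special case of the
    two-slice structure a dynamic Bayesian network generalizes."""
    states = list(dist)
    for _ in range(steps):
        nxt = {s: 0.0 for s in states}
        for s, p in dist.items():
            for t, q in transition[s].items():
                nxt[t] += p * q
        dist = nxt
    return dist
```

With a hypothetical cohort starting entirely in a "student" state, one step of propagation yields the expected share in each placement state after one period.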

Keywords: dynamic Bayesian networks, indicator estimation, job placement, social networks

Procedia PDF Downloads 368
8988 Sustainable Connectivity: Power-Line Communications for Home Automation in Ethiopia

Authors: Tsegahun Milkesa

Abstract:

This study investigates the implementation of Power-Line Communications (PLC) as a sustainable solution for home automation in Ethiopia. With the country's growing technological landscape and the quest for efficient energy use, this research explores the potential of PLC to facilitate smart home systems, aiming to enhance connectivity and energy management. The primary objective is to assess the feasibility and effectiveness of PLC in Ethiopian residences, considering factors such as infrastructure compatibility, reliability, and scalability. By analyzing existing PLC technologies and their adaptability to local contexts, this study aims to propose optimized solutions tailored to the Ethiopian environment. The research methodology involves a combination of literature review, field surveys, and experimental setups to evaluate PLC's performance in transmitting data and controlling various home appliances. Additionally, socioeconomic implications, including affordability and accessibility, are examined to ensure the technology's inclusivity in diverse Ethiopian households. The findings will contribute insights into the viability of PLC for sustainable connectivity in Ethiopian homes, shedding light on its potential to revolutionize energy-efficient and interconnected living spaces. Ultimately, this study seeks to pave the way for accessible and eco-friendly smart home solutions in Ethiopia, aligning with the nation's aspirations for technological advancement and sustainability.

Keywords: sustainable connectivity, power-line communications (PLC), home automation, Ethiopia, smart homes, energy efficiency, connectivity solutions, infrastructure development, sustainable living

Procedia PDF Downloads 72
8987 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers

Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin

Abstract:

Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance, and anxiety produces a memory bias for negative information. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier stage of learning, namely the activation of phonological representations, which is decisive for learning to read and write. The aim of this study is to compare access to phonological representations in beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High, and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. Stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. We anticipated (a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups, (b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, (c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups, and (d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups. Concerning response times, our results showed no difference between the four groups; furthermore, within each group, the average response times were very close regardless of emotional valence. Group differences did appear, however, in the error rates: the Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy was not found for negative-valence words in comparison with positive- and neutral-valence words in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students: although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, if anxiety affects access to phonological representations during these learnings, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.

Keywords: anxiety, emotional valence, childhood, lexical access

Procedia PDF Downloads 285
8986 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building

Authors: Aaditya U. Jhamb

Abstract:

Due to the current energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research, driven by growing worries about energy consumption and its environmental effects. The paper explores 8 factors that help determine the energy efficiency of a building: relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution, using a dataset provided by Tsanas and Xifara. The dataset comprises 768 different residential building models used to predict heating and cooling loads with a low mean squared error. By optimizing these characteristics, machine learning algorithms may assess and properly forecast a building's heating and cooling loads, lowering energy usage while improving quality of life. The paper first determined which input variable was most closely associated with the output loads and then studied the magnitude of the correlation between the input factors and the two output variables using various statistical methods of analysis. The most conclusive model was the Decision Tree Regressor, with a mean squared error of 0.258, whilst the least conclusive was the Isotonic Regressor, with a mean squared error of 21.68. The paper also investigated the KNN Regressor and Linear Regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, given the 8 input variables, the model was able to predict the heating and cooling loads of a residential building accurately and precisely.
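The models above are ranked by mean squared error; the metric itself is simple to compute, and the reported scores can then be compared directly (model names and MSE values taken from the abstract):

```python
def mean_squared_error(y_true, y_pred):
    """Average squared residual between targets and predictions."""
    assert len(y_true) == len(y_pred)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Reported MSEs from the abstract; lower is better.
reported = {
    "DecisionTree": 0.258,
    "KNN": 3.349,
    "LinearRegression": 18.141,
    "Isotonic": 21.68,
}
best_model = min(reported, key=reported.get)  # "DecisionTree"
```

Ranking by MSE this way is exactly how the abstract concludes the Decision Tree Regressor is the most conclusive of the four models.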

Keywords: energy efficient buildings, heating load, cooling load, machine learning models

Procedia PDF Downloads 90
8985 Grammatical Forms and Functions in Selected Political Interviews of Nigerian Presidential Aspirants in 2015 General Election

Authors: Temitope Abiodun Balogun

Abstract:

Political interviews are one of the ways by which political office-seekers in Nigeria sell themselves to the electorate. Extant studies have examined the discourse of political interviews from conversational, philosophical, rhetorical, stylistic, and pragmatic perspectives, with insufficient attention paid to the grammatical forms and communicative intentions of the interviews granted by the two leading presidential aspirants in the 2015 Nigerian general election. This study fills that scholarly gap by unmasking their grammatical forms, communicative styles, intentions, and credibility. The paper adopts Halliday's Systemic Functional Grammar, specifically its interpersonal function, coupled with Searle's speech act theory as a theoretical framework. A total of six interviews granted by the two presidential aspirants in the media serve as the source of data. It was discovered that, in most cases, the politicians' communicative intention is to pull down their political opponents. While their declaratives and interrogatives are simple, direct, and straightforward, the intention is to condemn, lambast, and castigate their opponents. This communicative style does not allow the general populace to decipher the political manifestoes of the aspirants and the parties they represent. The paper recommends that before Nigeria can boast of any sustainable growth and development, her political office-seekers need to adopt effective communication strategies and styles that unveil their intentions and manifestoes, so that the electorate can evaluate their performance after their tenure of office.

Keywords: general election, grammatical forms and function, political interviews, presidential aspirants

Procedia PDF Downloads 151
8984 Non-Convex Multi Objective Economic Dispatch Using Ramp Rate Biogeography Based Optimization

Authors: Susanta Kumar Gachhayat, S. K. Dash

Abstract:

Multi-objective non-convex economic dispatch problems for a thermal power plant are of grave concern when deciding the cost of generation and the reduction of emission levels needed to mitigate global warming. This paper deals with ramp rate constraints for achieving better inequality constraints, so as to incorporate valve-point loading in the generation cost of a thermal power plant, through ramp rate biogeography based optimization involving mutation and migration. Over 50 out of 100 trials, the cost function and emission objective function outperformed classical methods such as the lambda iteration method and quadratic programming, as well as many heuristic methods such as particle swarm optimization, weight-improved particle swarm optimization, constriction-factor-based particle swarm optimization, and moderate random particle swarm optimization. Ramp rate biogeography based optimization proves quite advantageous in solving non-convex multi-objective economic dispatch problems subjected to nonlinear loads that pollute the source, giving rise to third-harmonic distortions and other such disturbances.
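The non-convexity the authors handle comes from valve-point loading, which adds a rectified sinusoid to the usual quadratic fuel cost curve. A sketch of that standard cost model; the coefficients a, b, c, e, f are unit-specific data, not values from the paper:

```python
import math

def fuel_cost(p, a, b, c, e, f, p_min):
    """Quadratic fuel cost of a generating unit at output p (MW), plus
    the valve-point loading term |e * sin(f * (p_min - p))| that makes
    the dispatch problem non-convex and multimodal."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))
```

With e = 0 the model reduces to the smooth quadratic cost; any e > 0 only adds ripples on top of it, which is why gradient-free heuristics such as biogeography-based optimization are attractive here.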

Keywords: economic load dispatch, ELD, biogeography-based optimization, BBO, ramp rate biogeography-based optimization, RRBBO, valve-point loading, VPL

Procedia PDF Downloads 374
8983 Mastering Test Automation: Bridging Gaps for Seamless QA

Authors: Rohit Khankhoje

Abstract:

The rapid evolution of software development practices has given rise to an increasing demand for efficient and effective test automation. The paper titled "Mastering Test Automation: Bridging Gaps for Seamless QA" delves into the crucial aspects of test automation, addressing the obstacles organizations face in achieving flawless quality assurance. The paper highlights the importance of bridging knowledge gaps within organizations, emphasizing the necessity for management to acquire a deeper comprehension of test automation scenarios, coverage, report trends, and the importance of communication. To tackle these challenges, the paper introduces innovative solutions, including the development of an automation framework that seamlessly integrates with test cases and reporting tools like TestRail and Jira. This integration facilitates the automatic recording of bugs in Jira, improving bug reporting and communication between the manual QA and automation teams, while TestRail is automatically updated with each newly added automated test case as soon as it becomes part of the automation suite. The paper demonstrates how this framework empowers management by providing clear insights into ongoing automation activities, bug origins, trend analysis, and test case specifics. "Mastering Test Automation" serves as a comprehensive guide for organizations aiming to enhance their quality assurance processes through effective test automation. It not only identifies the common pitfalls and challenges but also offers practical solutions to bridge the gaps, resulting in a more streamlined and efficient QA process.
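The abstract does not show the integration code, but the Jira side of such a framework typically posts an issue to Jira's REST API (POST /rest/api/2/issue). A sketch of building that request body; the project key, summary, and labels below are placeholders, not the paper's configuration:

```python
def jira_bug_payload(project_key, summary, description, labels=None):
    """Build the JSON body for Jira's 'create issue' REST endpoint
    (POST /rest/api/2/issue). Field names follow Jira's documented
    schema; the concrete values here are illustrative."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
            "labels": labels or ["automation"],
        }
    }
```

A test runner would serialize this dict to JSON and send it with the team's credentials whenever an automated case fails, which is the "automatic recording of bugs in Jira" the paper describes.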

Keywords: automation framework, API integration, test automation, test management tools

Procedia PDF Downloads 67
8982 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation to the fitting of a PDRM for banks' audit fees. Estimation of the model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered was based on the variance, absolute bias (ABIAS), mean squared error (MSE), and root mean squared error (RMSE) of the parameter estimates. Eighteen different models were fitted under different specified conditions, and the best-fitted model was the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
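The assessment criteria above (absolute bias, MSE, and RMSE over 1000 iterations) follow the usual Monte Carlo recipe, which can be sketched generically. The Gaussian data-generating process below is purely illustrative, not the paper's two-way error component design:

```python
import math
import random

def monte_carlo_metrics(estimator, true_value, n_obs=50, n_iter=1000, seed=7):
    """Monte Carlo sketch: repeatedly simulate a sample, apply the
    estimator, and report absolute bias, MSE, and RMSE of the estimates
    relative to the known true parameter value."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_iter):
        sample = [true_value + rng.gauss(0, 1) for _ in range(n_obs)]
        estimates.append(estimator(sample))
    mean_est = sum(estimates) / n_iter
    abias = abs(mean_est - true_value)
    mse = sum((e - true_value) ** 2 for e in estimates) / n_iter
    return {"abias": abias, "mse": mse, "rmse": math.sqrt(mse)}
```

For the sample mean of 50 standard-normal draws, the RMSE should land near the theoretical 1/sqrt(50) ≈ 0.141, which is a quick sanity check on the simulation loop.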

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte-Carlo scheme, periodicity

Procedia PDF Downloads 138
8981 Biosorption of Lead (II) from Lead Acid Battery Industry Wastewater by Immobilized Dead Isolated Bacterial Biomass

Authors: Harikrishna Yadav Nanganuru, Narasimhulu Korrapati

Abstract:

Over the past many years, numerous sites in the world have been contaminated with heavy metals, the largest class of contaminants. Lead is one of the toxic heavy metals contaminating the environment. Lead is not biodegradable, so it accumulates in the human body and affects all of its systems once ingested. The accumulation of lead in the aquatic environment has adverse effects on public health, so its removal from water by biosorption, which has emerged as a potential method for lead removal, is an efficient approach. This work examined the removal of lead [Pb(II)] ions from aqueous solution and from lead acid battery industry effluent. Lead contamination in water is a widespread problem throughout the world and results mainly from lead acid battery manufacturing effluent. In this work, bacteria isolated from the wastewater of a lead acid battery industry were utilized for the removal of lead. First, the effluent from the lead acid battery industry was characterized by inductively coupled plasma atomic emission spectrometry (ICP-AES). Then bacteria were isolated from the effluent, and their immobilized dead mass was used for the biosorption of lead. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) studies clearly suggested that the lead (Pb) was adsorbed efficiently. The adsorbed percentage of lead (II) from the waste was 97.40%, with the lead (II) concentration measured by atomic absorption spectroscopy (AAS). From the AAS results it can be concluded that the immobilized dead mass of the isolated bacteria was efficient and useful for the biosorption of lead-contaminated wastewater.
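The 97.40% figure is a removal efficiency, computed from the concentrations before and after treatment; for instance, 100 mg/L reduced to 2.6 mg/L gives 97.4% (these example concentrations are illustrative, not the study's measured values):

```python
def removal_efficiency(c_initial, c_final):
    """Percent of metal removed from solution, computed from the
    concentrations measured (e.g., by AAS) before and after biosorption."""
    return 100.0 * (c_initial - c_final) / c_initial
```

The same formula applies to any sorbate/sorbent pair, which is why removal efficiency is the standard headline number in biosorption studies.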

Keywords: biosorption, ICP-AES, lead (Pb), SEM

Procedia PDF Downloads 375
8980 Current Status and Influencing Factors of Transition Status of Newly Graduated Nurses in China: A Multi-center Cross-sectional Study

Authors: Jia Wang, Wanting Zhang, Yutong Xv, Zihan Guo, Weiguang Ma

Abstract:

Background: Before becoming qualified nurses, newly graduated nurses (NGNs) must go through a painful transition period, and even transition shocks. Transition shocks are a public health issue, and to address the transition issue of NGNs, many programs and interventions have been developed and implemented. However, no studies have assessed the transition state of newly graduated nurses from work to life, and from external abilities to internal emotions. Aims: To assess the transition status of newly graduated nurses in China and to identify the factors influencing it. Methods: A multi-center cross-sectional study design was adopted. From May 2022 to June 2023, 1261 newly graduated nurses in hospitals were surveyed online with a demographic questionnaire and the Transition Status Scale for Newly Graduated Nurses. SPSS 26.0 was used for data input and statistical analysis. Descriptive statistics were used to evaluate the demographic characteristics and transition status of NGNs. Independent-samples t-tests, analysis of variance, and multiple regression analysis were used to explore the factors influencing transition status. Results: The overall average score on the Transition Status Scale for Newly Graduated Nurses was 4.00 (SD = 0.61). Among the dimensions of transition status, the highest was competence for nursing work, while the lowest was balance between work and life. The results showed that factors influencing the transition status of NGNs include mentoring by senior nurses, night shift status, internship department, attributes of the working hospital, province of work and residence, educational background, reasons for choosing nursing, type of hospital, and monthly income. Conclusion: At present, the transition status scores of new nurses in China are relatively high, and NGNs largely agree with their own transition status, especially on the dimension of competence for nursing work. However, they transition poorly in terms of work-life balance. Nursing managers should reasonably arrange the working hours of NGNs, promote their work-life balance, improve NGNs' salary and reward mechanisms, assign experienced nursing mentors, optimize hospital staffing levels, provide suitable positions for NGNs with different educational backgrounds, and pay attention to the culture shock of NGNs from other provinces. Optimizing human resource management by intervening in these factors can promote a better transition for new nurses.

Keywords: newly graduated nurse, transition, humanistic care, nursing management, nursing practice education

Procedia PDF Downloads 78
8979 Ab Initio Calculations of Structure and Elastic Properties of BexZn1−xO Alloys

Authors: S. Lakel, F. Elhamra, M. Ibrir, K. Almi

Abstract:

There is growing interest in Zn1-xBexO (ZBO)/ZnO heterostructures and quantum wells, since the band gap energy of Zn1-xBexO solid solutions can be tuned over a very large range (3.37–10.6 eV) as a function of the Be composition. ZBO/ZnO has been utilized in ultraviolet light-emitting diodes and lasers and may find applications as the active element of various other electronic and optoelectronic devices. Band gap engineering by Be substitution enables the facile preparation of barrier layers and quantum wells in device structures. In addition, ZnO and its ternary alloys, as piezoelectric semiconductors, have been used for high-frequency surface acoustic wave devices in wireless communication systems due to their high acoustic velocities and large electromechanical coupling. However, many important parameters, such as the elastic constants, bulk modulus, Young's modulus, and band-gap bowing, remain poorly characterized. First-principles calculations of the structural, electrical, and elastic properties of Zn1-xBexO as a function of the Be concentration x have been performed within density functional theory using norm-conserving pseudopotentials and the local density approximation (LDA) for the exchange and correlation energy. The alloys' lattice constants may deviate from Vegard's law. As the Be concentration increases, the elastic constants, bulk modulus, and Young's modulus of the alloys increase; the band gap also increases with Be concentration, and Zn1-xBexO alloys have a direct band gap. Our calculated results are in good agreement with experimental data and other theoretical calculations.
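The lattice-constant behavior referenced above is measured against Vegard's law, a linear interpolation between the binary end members. A sketch using approximate wurtzite a-axis lattice constants (a ≈ 3.25 Å for ZnO, a ≈ 2.70 Å for BeO; these are nominal literature figures, not values from the paper):

```python
def vegard_lattice_constant(x, a_zno=3.25, a_beo=2.70):
    """Vegard-law estimate of the a lattice constant (in angstroms) of
    Zn(1-x)Be(x)O by linear interpolation between the end members.
    The abstract notes real alloys may deviate from this line."""
    return (1.0 - x) * a_zno + x * a_beo
```

The deviation of the DFT-computed lattice constants from this straight line is exactly the "bowing" the abstract alludes to.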

Keywords: DFT calculation, norm-conserving pseudopotentials, ZnBeO alloys, ZnO

Procedia PDF Downloads 516
8978 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained on standard-condition points and tested on both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) trained to exploit the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution outperforms the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with accuracy and an F1 score greater than 96% with the proposed method.
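The comparison metric cited above, the F1 score, is the harmonic mean of precision and recall on the anomaly class, computed from true positives, false positives, and false negatives:

```python
def f1_score(tp, fp, fn):
    """F1 score on the anomaly class: harmonic mean of precision
    (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike plain accuracy, F1 stays informative when anomalies are rare, which is the usual situation in bridge monitoring data.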

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 121
8977 Euler-Bernoulli’s Approach for Buckling Analysis of Thick Rectangular Plates Using Alternative I Refined Theory

Authors: Owus Mathias Ibearugbulem

Abstract:

The study presents Euler-Bernoulli's approach for the buckling analysis of thick rectangular plates using the alternative I refined theory. To the best of the author's knowledge, no earlier study in the available literature has applied Euler-Bernoulli's approach within the alternative I refined theory to the buckling analysis of thick rectangular plates. In this study, basic kinematics and constitutive relations for thick rectangular plates are employed in the differential equations of equilibrium of stresses in a deformable elemental body to obtain the alternative I governing differential equations of thick rectangular plates and the corresponding compatibility equations. Solving these equations yields a general deflection function for a thick rectangular plate. Using this function and satisfying the boundary conditions of three plates, their particular deflection functions are obtained. The study then determines the non-dimensional critical buckling loads of six plates. Values of the non-dimensional critical buckling load from the present study are compared with those from a three-dimensional buckling analysis of a thick plate. The highest percentage differences recorded for the plates with all edges simply supported (ssss), all edges clamped (cccc), and adjacent edges clamped with the other edges simply supported (ccss) are 3.31%, 5.57%, and 3.38%, respectively.

Keywords: Euler-Bernoulli, buckling, alternative I, kinematics, constitutive relation, governing differential equation, compatibility equation, thick plate

Procedia PDF Downloads 20
8976 Biotech Processes to Recover Valuable Fraction from Buffalo Whey Usable in Probiotic Growth, Cosmeceutical, Nutraceutical and Food Industries

Authors: Alberto Alfano, Sergio D’ambrosio, Darshankumar Parecha, Donatella Cimini, Chiara Schiraldi

Abstract:

The main objective of this study is the setup of an efficient small-scale platform for the conversion of local renewable waste materials, such as whey, into added-value products, thereby reducing the environmental impact and the costs of disposing of processing waste. Buffalo milk whey derived from the cheese-making process, called second cheese whey, is the main by-product of the dairy industry and the most polluting one; it consists of lactose, lactic acid, proteins, and salts, components that make it a potential source of added value. In Italy, and in particular in the Campania region, soft cheese production generates large volumes of liquid waste, especially during late spring and summer. This project takes a circular economy perspective focused on converting a potentially polluting and difficult-to-purify waste into a resource to be exploited, and it embodies the concept of the three “R”s: reduce, recycle, and reuse. Special attention was paid to the production of health-promoting biomolecules and biopolymers, which may be exploited in different segments of the food and pharmaceutical industries. These biomolecules may be recovered through appropriate processes and reused to obtain added-value products. Accordingly, ultrafiltration and nanofiltration processes were performed to fractionate bioactive components starting from buffalo milk whey. In this direction, the present study focused on the implementation of a downstream process that converts waste generated by the food and food-processing industries into added-value products with potential applications. Owing to innovative downstream and biotechnological processes, what would otherwise be a waste product may be considered a resource for obtaining high added-value products such as food supplements (probiotics), cosmeceuticals, biopolymers, and recyclable purified water.
Besides targeting gastrointestinal disorders, probiotics such as Lactobacilli have been reported to improve immunomodulation and protection of the host against infections caused by viral and bacterial pathogens. Interestingly, inactivated microbial (probiotic) cells and their metabolic products, indicated as parabiotics and postbiotics, respectively, also have a crucial role and act as mediators in the modulation of the host’s immune function. To boost the production of biomass (viable and/or heat-inactivated cells) and/or the synthesis of growth-related postbiotics such as EPS, efficient and sustainable fermentation processes are necessary. Based on a “zero-waste” approach, wastes generated by local industries can be recovered and recycled to develop sustainable biotechnological processes that yield probiotics as well as postbiotics and parabiotics, to be tested as bioactive compounds against gastrointestinal disorders. The results have shown that it is possible to recover an ultrafiltration retentate with characteristics suitable for use against skin dehydration, for producing films (i.e., packaging for the food industry), or as a wound repair agent, and a nanofiltration retentate from which lactic acid and carbon sources (e.g., lactose, glucose) can be recovered for microbial cultivation. Finally, the last goal is to obtain purified water that can be reused throughout the process. In fact, water reclamation and reuse provide a unique and viable opportunity to augment traditional water supplies, a key issue nowadays.

Keywords: biotech process, downstream process, probiotic growth, from waste to product, buffalo whey

Procedia PDF Downloads 64
8975 A Case Report of Aberrant Vascular Anatomy of the Deep Inferior Epigastric Artery Flap

Authors: Karissa Graham, Andrew Campbell-Lloyd

Abstract:

The deep inferior epigastric artery perforator (DIEP) flap is used to reconstruct large volumes of tissue. The DIEP flap is based on the deep inferior epigastric artery (DIEA) and vein. Accurate knowledge of the anatomy of these vessels allows for efficient dissection of the flap, minimal damage to surrounding tissue, and a well vascularized flap. A 54-year-old woman was assessed for bilateral delayed autologous reconstruction with DIEP free flaps. The right DIEA was consistent with the described anatomy. The left DIEA had a vessel branching shortly after leaving the external iliac artery and before entering the muscle. This independent branch entered the muscle and had a long intramuscular course to the largest perforator. The main DIEA vessel demonstrated a type II branching pattern but had perforators that were too small to support a viable DIEP flap. There were no communicating arterial branches between the independent vessel and the DIEA; however, there was one venous communication between them. A muscle-sparing transverse rectus abdominis muscle flap was raised using the main periumbilical perforator from the independent vessel. Our case report demonstrates a previously unreported anatomical variant of the DIEA. A few anatomical variants have been described in the literature, including a unilaterally absent DIEA and peritoneal-cutaneous perforators with no connection to the DIEA. Performing a pre-operative CT angiogram helps to identify these rare anatomical variations, which leads to safer, more efficient, and more effective operating.

Keywords: aberrant anatomy, CT angiography, DIEP anatomy, free flap

Procedia PDF Downloads 128
8974 Borate Crosslinked Fracturing Fluids: Laboratory Determination of Rheology

Authors: Lalnuntluanga Hmar, Hardik Vyas

Abstract:

Hydraulic fracturing has become an essential procedure to break apart rock and release the oil or gas trapped tightly within it by pumping fracturing fluids at high pressure down into the well. To open the fracture and to transport the propping agent along it, proper selection of the fracturing fluid is among the most crucial components of fracturing operations, and the rheological properties of the fluid are usually considered the most important. Among various fracturing fluids, borate-crosslinked fluids have proved to be highly effective. Borate, in the form of boric acid or borate ion, is most commonly used to crosslink hydrated polymers and to produce very viscous gels that remain stable at high temperature. Guar and HPG (hydroxypropyl guar) are the polymers most often used in these fluids. Borate gel rheology is known to be a function of polymer concentration, borate ion concentration, pH, and temperature. Crosslinking with borate is a function of pH, which means it can be formed or reversed simply by altering the pH of the fluid system. The fluid system was prepared by mixing the base polymer with water at pH values between 8 and 11; the optimum borate crosslinker efficiency was found at a pH of about 10. The rheology of the laboratory-prepared borate-crosslinked fracturing fluid was determined using an Anton Paar rheometer and a Fann viscometer. The viscosity was measured at temperatures ranging from 200°F to 250°F and at elevated pressures in order to partially simulate downhole conditions. The rheological measurements showed that crosslinking increases the viscosity and elasticity and thus the fluid's capability to transport the propping agent.

Keywords: borate, crosslinker, Guar, Hydroxypropyl Guar (HPG), rheology

Procedia PDF Downloads 196
8973 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task that depends strongly on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time.
The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating cost estimation across several microgrid configurations. BCQ is an offline RL algorithm that is known to be data efficient and can learn better policies than online RL algorithms on the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the per-environment score and on computation time.
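The batch-constrained idea can be illustrated with a toy discrete sketch (our own simplified illustration, not the authors' microgrid EMS; the full BCQ algorithm uses a generative model over continuous actions): when bootstrapping the Q-target, only actions that the behaviour policy took sufficiently often in the next state are eligible, which keeps the learned policy close to the buffer's data distribution.

```python
import numpy as np

def discrete_bcq_update(Q, batch, counts, gamma=0.99, lr=0.1, tau=0.3):
    """One sweep of discrete batch-constrained Q-learning over a fixed buffer.

    An action is eligible for the max in the bootstrap target only if the
    behaviour policy took it in that state with relative frequency at least
    tau times the most frequent action's frequency.
    """
    for s, a, r, s2, done in batch:
        freq = counts[s2] / max(counts[s2].sum(), 1)
        eligible = freq >= tau * freq.max() if freq.max() > 0 else freq >= 0
        target = r if done else r + gamma * Q[s2][eligible].max()
        Q[s, a] += lr * (target - Q[s, a])
    return Q

# toy 2-state, 2-action buffer gathered by some behaviour policy:
# tuples are (state, action, reward, next_state, done)
batch = [(0, 0, 1.0, 1, False), (1, 1, 0.0, 0, True), (0, 0, 1.0, 1, False)]
counts = np.zeros((2, 2))
for s, a, *_ in batch:
    counts[s, a] += 1        # behaviour-policy action counts per state

Q = np.zeros((2, 2))
for _ in range(100):         # repeated sweeps over the same buffer (offline)
    Q = discrete_bcq_update(Q, batch, counts)
```

No environment interaction happens during training, which is exactly the property the authors exploit to reuse buffers across microgrid configurations.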

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 117
8972 Prevalence of Cyp2d6 and Its Implications for Personalized Medicine in Saudi Arabs

Authors: Hamsa T. Tayeb, Mohammad A. Arafah, Dana M. Bakheet, Duaa M. Khalaf, Agnieszka Tarnoska, Nduna Dzimiri

Abstract:

Background: CYP2D6 is a member of the cytochrome P450 mixed-function oxidase system. The enzyme is responsible for the metabolism and elimination of approximately 25% of clinically used drugs, and is especially relevant in breast cancer and psychiatric therapy. Different phenotypes have been described, displaying alleles that lead to a complete loss of enzyme activity or reduced function (poor metabolizers, PM), or to hyperfunctionality (ultrarapid metabolizers, UM), and therefore to drug intoxication or loss of drug effect. The prevalence of these variants may vary among different ethnic groups. Furthermore, the xTAG system has been developed to categorize patients into different groups based on their CYP2D6 substrate metabolization. Aim of the study: To determine the prevalence of the different CYP2D6 variants in our population and to evaluate their clinical relevance for personalized medicine. Methodology: We used the Luminex xMAP genotyping system to genotype 305 Saudi individuals visiting the Blood Bank of our institution and to determine which polymorphisms of the CYP2D6 gene are prevalent in our region. Results: xTAG genotyping showed that 36.72% (112 out of 305 individuals) carried CYP2D6_*2. Of these 112 individuals, 19 (6.23% of the 305) had multiple copies of the *2 allele, resulting in an UM phenotype. About 33.44% carried CYP2D6_*41, which leads to decreased activity of the CYP2D6 enzyme. 19.67% had the wild-type alleles and thus normal enzyme function. Furthermore, 15.74% carried CYP2D6_*4, the most common nonfunctional form of the CYP2D6 enzyme worldwide. 6.56% carried CYP2D6_*17, resulting in decreased enzyme activity. Approximately 5.73% carried CYP2D6_*10, which decreases enzyme activity and results in a PM phenotype.
2.30% carried CYP2D6_*29, leading to decreased metabolic activity of the enzyme, and 2.30% carried CYP2D6_*35, resulting in an UM phenotype. 1.64% had a whole-gene deletion (CYP2D6_*5), resulting in the loss of CYP2D6 enzyme production, and 0.66% carried the CYP2D6_*6 variant. One individual carried CYP2D6_*3(B), which produces an inactive form of the enzyme and results in a PM phenotype. Finally, one individual carried CYP2D6_*9, which decreases enzyme activity. Conclusions: Our study demonstrates that different CYP2D6 variants are highly prevalent in ethnic Saudi Arabs. This finding sets a basis for informed genotyping for these variants in personalized medicine. The study also suggests that xTAG is an appropriate procedure for genotyping CYP2D6 variants in personalized medicine.

Keywords: CYP2D6, hormonal breast cancer, pharmacogenetics, polymorphism, psychiatric treatment, Saudi population

Procedia PDF Downloads 569
8971 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis

Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu

Abstract:

Introduction: Bone health has attracted more attention recently, and an intelligent question-and-answer (QA) chatbot for osteoporosis is helpful for science popularization. With the development of Generative Pretrained Transformer (GPT) technology, we build an osteoporosis corpus dataset and then fine-tune LLaMA, a well-known open-source GPT foundation large language model (LLM), on this self-constructed corpus. Evaluated by clinical orthopedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Three-dimensional quantitative computed tomography (QCT)-measured bone mineral density (BMD) has in recent years come to be considered more accurate than DXA. We develop an automatic phantom-less QCT (PL-QCT) system that is more efficient for BMD measurement since it needs no external phantom for calibration. Combined with the LLM for osteoporosis, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We build an osteoporosis corpus containing about 30,000 Chinese literature items whose titles are related to osteoporosis. The whole process is done automatically, including crawling literature in .pdf format, localizing text/figure/table regions with a layout segmentation algorithm, and recognizing text with an OCR algorithm. We train our model by continued pre-training with Low-Rank Adaptation (LoRA, rank=10) to adapt the LLaMA-7B model to the osteoporosis domain; the basic principle is to mask the next word in the text and make the model predict that word. The loss function is defined as the cross-entropy between the predicted and ground-truth words. The experiment was run on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-assisted region-of-interest (ROI) generation algorithm for localizing a vertebra-parallel cylinder in cancellous bone. Since there is no phantom for BMD calibration, we calculate the ROI BMD from the CT-BMD of the individual's muscle and fat.
Results & Discussion: Clinical orthopaedic experts were invited to design 5 osteoporosis questions in Chinese to evaluate the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding 'Expert Consensus on Osteoporosis', 'QCT for osteoporosis diagnosis', and 'Effect of age on osteoporosis'. Detailed results are shown in the appendix. Future work may include training a larger LLM on the whole of orthopaedics with more high-quality domain data, or a multi-modal GPT that combines and understands X-ray images and medical text for orthopaedic computer-aided diagnosis. However, the GPT model sometimes gives unexpected outputs, such as repetitive text or seemingly normal but wrong answers (called 'hallucinations'). Even when GPT gives correct answers, they cannot be considered valid clinical diagnoses in place of those made by clinical doctors. The PL-QCT BMD system provided by Bone's QCT (Bone's Technology (Shenzhen) Limited) achieves mean absolute errors (MAE) of 0.1448 mg/cm2 (spine) and 0.0002 mg/cm2 (hip) and linear correlation coefficients of R2=0.9970 (spine) and R2=0.9991 (hip) (compared to QCT-Pro (Mindways)) on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned, domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the LLaMA model on most testing questions on osteoporosis. Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporotic patients.
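The training objective described above (mask the next word, predict it, score with cross-entropy) can be written in a few lines. This is a generic causal language-modelling loss on toy numbers, not the authors' LLaMA/LoRA training code:

```python
import numpy as np

def next_token_loss(logits, targets):
    """Mean cross-entropy between predicted next-token distributions and
    the ground-truth next tokens (the causal LM pre-training objective)."""
    # numerically stable log-softmax over the vocabulary axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # pick out the log-probability assigned to each true next token
    return -log_probs[np.arange(len(targets)), targets].mean()

# toy example: vocabulary of 4 tokens, 3 positions in the sequence
logits = np.array([[2.0, 0.1, 0.1, 0.1],
                   [0.1, 3.0, 0.1, 0.1],
                   [0.1, 0.1, 0.1, 2.5]])
targets = np.array([0, 1, 3])   # the true next token at each position
loss = next_token_loss(logits, targets)
```

Fine-tuning with LoRA changes only which parameters receive gradients (low-rank adapter matrices rather than the full weights); the loss itself is unchanged.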

Keywords: GPT, phantom-less QCT, large language model, osteoporosis

Procedia PDF Downloads 64
8970 Clara Cell Secretory Protein 16 Serum Level Decreases in Patients with Non-Smoking-Related Chronic Obstructive Pulmonary Diseases (COPD)

Authors: Lian Wu, Mervyn Merrilees

Abstract:

Chronic Obstructive Pulmonary Disease (COPD) is a worldwide problem characterized by irreversible and progressive airflow obstruction. In New Zealand, it is currently the 4th commonest cause of death, and exacerbations of COPD are a frequent cause of admission to hospital. Serum levels of Clara cell secretory protein-16 (CC-16) are believed to reflect Clara cell toxicity. More recently, CC-16 has been found to be associated with smoking-related COPD. It is produced almost exclusively by non-ciliated Clara cells in the airways, and its primary function is to protect the lungs against oxidative stress and carcinogenesis. After acute exposure to cigarette smoke, serum levels of CC-16 become elevated. CC-16 is a potent natural immunosuppressor and anti-inflammatory agent. In vitro, CC-16 inhibits both monocyte and polymorphonuclear neutrophil chemotaxis and phagocytosis. CC-16 also inhibits fibroblast chemotaxis. However, the role of CC-16 in non-smoking-related COPD is still not clear. In this study, we investigated serum CC-16 levels in non-smoking-related COPD. Methods: We compared non-smoker patients with COPD (FEV1 < 60% of predicted, FEV1/FVC < 0.7, n=100) and individuals with normal lung function (FEV1 ≥ 80% of predicted and FEV1/FVC ≥ 0.7, n=80). No subject had a smoking history. CC-16 was measured by ELISA. Results and conclusion: Serum CC-16 levels are reduced in individuals with non-smoking-related COPD, and there is a weak correlation with disease severity in the non-smoking-related COPD group compared to non-smoker controls.

Keywords: COPD, CC-16, ELISA, non-smoking-related COPD

Procedia PDF Downloads 378
8969 Improving Working Memory in School Children through Chess Training

Authors: Veena Easvaradoss, Ebenezer Joseph, Sumathi Chandrasekaran, Sweta Jain, Aparna Anna Mathai, Senta Christy

Abstract:

Working memory refers to a cognitive processing space where information is received, managed, transformed, and briefly stored. It is an operational process of transforming information for the execution of cognitive tasks in different and new ways. Many classroom activities require children to remember information and mentally manipulate it. While the impact of chess training on intelligence and academic performance has been unequivocally established, its impact on working memory needs to be studied. This study, funded by the Cognitive Science Research Initiative, Department of Science & Technology, Government of India, analyzed the effect of one year of chess training on the working memory of children. A pretest–posttest with control group design was used, with 52 children in the experimental group and 50 children in the control group. The sample was selected from children studying in school (grades 3 to 9) and included both genders. The experimental group underwent weekly chess training for one year, while the control group was involved in extracurricular activities. Working memory was measured by two subtests of WISC-IV INDIA. The Digit Span subtest involves recalling a list of numbers of increasing length presented orally in forward and in reverse order, and the Letter–Number Sequencing subtest involves rearranging jumbled letters and numbers presented orally following a given rule. Both tasks require the child to receive and briefly store information, manipulate it, and present it in a changed format. The children were trained using the Winning Moves curriculum, audio-visual learning methods, and hands-on chess training; they recorded their games on score sheets and analyzed their mistakes, thereby increasing their meta-analytical abilities. They were also trained in opening theory, checkmating techniques, end-game theory, and tactical principles. Pre-test equivalence of means was established.
Analysis revealed that the experimental group had significant gains in working memory compared to the control group. The present study clearly establishes a link between chess training and working memory. The transfer of chess training to the improvement of working memory could be attributed to the fact that while playing chess, children evaluate positions, visualize new positions in their mind, analyze the pros and cons of each move, and choose moves based on the information stored in their mind. If working memory's capacity could be expanded or made to function more efficiently, it could result in the improvement of executive functions as well as the scholastic performance of the child.

Keywords: chess training, cognitive development, executive functions, school children, working memory

Procedia PDF Downloads 256
8968 Diagnosis of Induction Machine Faults by DWT

Authors: Hamidreza Akbari

Abstract:

In this paper, time–frequency analysis of the stator startup current is carried out to detect inclined eccentricity in an induction motor. For this purpose, the discrete wavelet transform is used. Data are obtained from simulations using the winding function approach. The results show the validity of the approach for detecting the fault and discriminating it from other faults.
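The core idea can be sketched in a few lines (a simplified illustration on a synthetic signal, not the authors' simulation; the abstract does not name the mother wavelet, so a Haar filter is assumed here): one level of the discrete wavelet transform splits the startup current into a low-frequency approximation and a high-frequency detail band, where transient fault signatures concentrate.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the discrete wavelet transform with Haar filters:
    returns approximation (low-pass) and detail (high-pass) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                       # pad to even length if needed
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

# synthetic startup current: decaying 50 Hz component plus a brief
# high-frequency burst standing in for a fault signature
t = np.linspace(0.0, 1.0, 1024)
current = np.exp(-3 * t) * np.sin(2 * np.pi * 50 * t)
current[500:520] += 0.5 * np.sin(2 * np.pi * 400 * t[500:520])

approx, detail = haar_dwt(current)
# each detail coefficient i covers samples 2i and 2i+1
burst_energy = np.sum(detail[250:260] ** 2)     # region of the burst
baseline_energy = np.sum(detail[50:60] ** 2)    # burst-free region
```

Comparing detail-band energy across time windows is one simple way such a decomposition localizes a fault transient; real diagnosis would use deeper decompositions and physically motivated frequency bands.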

Keywords: induction machine, fault, DWT, electric

Procedia PDF Downloads 345
8967 Efficient Depolymerization of Polyethylene terephthalate (PET) Using Bimetallic Catalysts

Authors: Akmuhammet Karayev, Hassam Mazhar, Mamdouh Al Harthi

Abstract:

Polyethylene terephthalate (PET) recycling stands as a pivotal solution in combating plastic pollution and fostering a circular economy. This study addresses the catalytic glycolysis of PET, a key step in its recycling process, using synthesized catalysts. Our focus lies in elucidating the catalytic mechanism, optimizing reaction kinetics, and enhancing reactor design for efficient PET conversion. We synthesized anionic clays tailored for PET glycolysis and comprehensively characterized them using XRD, FT-IR, BET, DSC, and TGA techniques, confirming their suitability as catalysts. Through systematic parametric studies, we optimized reaction conditions to achieve complete PET conversion to bis(2-hydroxyethyl) terephthalate (BHET) with over 75% yield within 2 hours at 200°C, employing a minimal catalyst concentration of 0.5%. These results underscore the catalysts' exceptional efficiency and sustainability, positioning them as frontrunners in catalyzing PET recycling processes. Furthermore, we demonstrated the recyclability of the obtained BHET by repolymerizing it back to PET without the need for a catalyst: heating the BHET in a distillation unit facilitated its conversion back to PET, highlighting the closed-loop potential of our recycling approach. Our work embodies a significant leap in catalytic glycolysis kinetics, driven by sustainable catalysts, offering rapid and high-impact PET conversion while minimizing environmental footprint. This breakthrough not only sets new benchmarks for efficiency in PET recycling but also exemplifies the pivotal role of catalysis and reaction engineering in advancing sustainable materials management.

Keywords: polymer recycling, catalysis, circular economy, glycolysis

Procedia PDF Downloads 30
8966 Empirical Orthogonal Functions Analysis of Hydrophysical Characteristics in the Shira Lake in Southern Siberia

Authors: Olga S. Volodko, Lidiya A. Kompaniets, Ludmila V. Gavrilova

Abstract:

The method of empirical orthogonal functions is a method for analyzing data with a complex spatial-temporal structure. It allows us to decompose the data into a finite number of modes determined by empirically finding the eigenfunctions of the data correlation matrix. The modes have different scales and can be associated with various physical processes. The empirical orthogonal function method has been widely used for the analysis of hydrophysical characteristics, for example, the analysis of sea surface temperatures in the western North Atlantic, of ocean surface currents off North Carolina, and the study of tropical wave disturbances. In this study, the method has been applied to the analysis of temperature and velocity measurements in the saline Lake Shira (Southern Siberia, Russia). Shira is a shallow lake with a maximum depth of 25 m. Lake Shira can be considered a closed water body because it has one small river providing inflow and no outflow. The main factor that causes the motion of the water is variable wind forcing. In summer the lake is strongly stratified by temperature and salinity. Long-term measurements of temperature and currents were conducted at several points during the summers of 2014 and 2015. The temperature was measured with an accuracy of 0.1 ºC. The data were analyzed using the real version of the empirical orthogonal function method. The first empirical eigenmode accounts for 70-80% of the energy and can be interpreted as a temperature distribution with a thermocline. A thermocline is a thermal layer where the temperature decreases rapidly from the mixed upper layer of the lake to much colder deep water. The higher-order modes can be interpreted as oscillations induced by internal waves. The current measurements were recorded using 600 kHz and 1200 kHz Acoustic Doppler Current Profilers. These data were analyzed using the complex version of the empirical orthogonal function method.
The first empirical eigenmode accounts for about 40% of the energy and corresponds to the Ekman spiral occurring in the case of a stationary homogeneous fluid. The other modes describe effects associated with the stratification of the fluid. The second and subsequent empirical eigenmodes were associated with dynamical modes. These modes were obtained for a simplified model of an inhomogeneous three-layer fluid at a water site with a flat bottom.
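The decomposition described above can be sketched with a singular value decomposition (a generic EOF computation on synthetic data, not the authors' analysis): the right singular vectors give the spatial modes, the scaled left singular vectors give the principal-component time series, and the squared singular values give each mode's share of the variance.

```python
import numpy as np

def eof_modes(data):
    """Empirical orthogonal functions of a (time, space) data matrix:
    spatial modes, principal-component time series, and the fraction of
    variance each mode explains."""
    anomalies = data - data.mean(axis=0)       # remove the time mean
    # SVD of the anomaly matrix is equivalent to diagonalizing the
    # spatial covariance matrix but is numerically more stable
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s**2 / np.sum(s**2)
    pcs = U * s                                # time coefficients per mode
    return Vt, pcs, variance_fraction

# toy field: one dominant standing oscillation plus weak noise,
# 200 time steps over 30 spatial points
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)[:, None]
x = np.linspace(0.0, 1.0, 30)[None, :]
field = np.sin(t) * np.sin(np.pi * x) + 0.05 * rng.normal(size=(200, 30))

modes, pcs, var_frac = eof_modes(field)
```

For complex-valued data (as in the current-profile analysis above), the same construction applies with the conjugate transpose, which is how the "complex version" of the method generalizes the real one.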

Keywords: Ekman spiral, empirical orthogonal functions, data analysis, stratified fluid, thermocline

Procedia PDF Downloads 133
8965 The Relationship between the Personality Traits and Self-Compassion with Psychological Well-Being in Iranian College Students

Authors: Abdolamir Gatezadeh, Rezvan K. A. Mohamamdi, Arash Jelodari

Abstract:

It is well established that personality traits and self-compassion are associated with psychological well-being. The current research therefore aimed to investigate the underlying mechanisms in a collectivist culture. Method: One hundred and fifty college students were recruited and completed Ryff's Psychological Well-Being Scale, the NEO Personality Inventory, and Neff's Self-Compassion Scale. Results: Correlation analysis showed significant relationships between the personality traits (neuroticism, extraversion, agreeableness, and conscientiousness) and self-compassion (self-kindness, isolation, mindfulness, and the total self-compassion score) on the one hand and psychological well-being on the other. Regression analysis showed that neuroticism, extraversion, and conscientiousness significantly predicted psychological well-being. Discussion and conclusion: The cultural implications and future directions are discussed.

Keywords: college students, personality traits, psychological well-being, self-compassion

Procedia PDF Downloads 207
8964 Unleashing the Power of Cerebrospinal System for a Better Computer Architecture

Authors: Lakshmi N. Reddi, Akanksha Varma Sagi

Abstract:

Biomimetic studies are well developed, deriving inspiration from natural processes in the objective world to develop novel technologies. Recent studies are diverse in nature, making their categorization quite challenging. Based on an exhaustive survey, we developed categorizations based either on the essential elements of nature (air, water, land, fire, and space) or on form/shape, functionality, and process. Studies as diverse as aircraft wings inspired by bird wings, a self-cleaning coating inspired by the lotus petal, wetsuits inspired by beaver fur, and search algorithms inspired by arboreal ant path networks lend themselves to these categorizations. Our categorizations of biomimetic studies allowed us to define a different dimension of biomimetics. This new dimension is not restricted to inspiration from the objective world. It is based on the premise that the biological processes observed in the objective world find their reflections in our human bodies in a variety of ways. For example, the lungs provide the most efficient example of liquid-gas phase exchange, the heart exemplifies a very efficient pumping and circulatory system, and the kidneys epitomize a highly effective cleaning system. The main focus of this paper is to bring out the magnificence of the cerebrospinal system (CSS) insofar as it relates to our current computer architecture. In particular, the paper uses four key measures to analyze the differences between the CSS and human-engineered computational systems: adaptability, sustainability, energy efficiency, and resilience. We found that the cerebrospinal system reveals some important challenges in the development and evolution of our current computer architectures.
In particular, the myriad ways in which the CSS is integrated with other systems and processes (circulatory, respiratory, etc.) offer useful insights on how human-engineered computational systems could be made more sustainable, energy-efficient, resilient, and adaptable. In our paper, we highlight the energy consumption differences between the CSS and our current computational designs. Apart from the obvious differences in the materials used, the systemic nature of how the CSS functions provides clues for extending the life-cycles of our current computational systems. The rapid formation of and changes in the physiology of dendritic spines, and their synaptic plasticity causing memory changes (e.g., long-term potentiation and long-term depression), allowed us to formulate differences in the adaptability and resilience of the CSS. In addition, the CSS is sustained by the integrative functions of various organs, and its robustness comes from its interdependence with the circulatory system. The paper documents and analyzes quantifiable differences between the two in terms of the four measures. Our analyses point out possibilities for the development of computational systems that are more adaptable, sustainable, energy-efficient, and resilient. It concludes with potential approaches for technological advancement through the creation of more interconnected and interdependent systems that replicate the effective operation of the cerebrospinal system.

Keywords: cerebrospinal system, computer architecture, adaptability, sustainability, resilience, energy efficiency

Procedia PDF Downloads 89
8963 Providing Reliability, Availability and Scalability Support for Quick Assist Technology Cryptography on the Cloud

Authors: Songwu Shen, Garrett Drysdale, Veerendranath Mannepalli, Qihua Dai, Yuan Wang, Yuli Chen, David Qian, Utkarsh Kakaiya

Abstract:

Hardware accelerators have been a promising solution to reduce the cost of cloud data centers. This paper investigates the QoS enhancement of the acceleration of an important datacenter workload: the web server (or proxy), which faces high computational consumption originating from secure sockets layer (SSL) or transport layer security (TLS) processing in the cloud environment. Our study reveals that for accelerator maintenance cases (the need to upgrade the driver or firmware, or a hardware reset due to a hardware hang), we can still provide cryptography services by switching to software during the maintenance phase and switching back to the accelerator afterwards. The switching is seamless to server applications such as Nginx that run inside a VM on top of the server. To achieve this high-availability goal, we propose a comprehensive fallback solution based on Intel® QuickAssist Technology (QAT). This approach introduces an architecture involving collaboration between the physical function (PF) and the virtual function (VF), and among the VF, OpenSSL, and the web application Nginx. The evaluation shows that our solution can provide high reliability, availability, and scalability (RAS) for the hardware cryptography service in a 7x24x365 manner in the cloud environment.
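The seamless software fallback can be illustrated with a conceptual sketch (our own illustration; all class names here are hypothetical, and the real solution operates at the QAT driver/OpenSSL layer rather than in application code): requests are routed to the accelerator, and during a maintenance window the same API is transparently served by a software path, so the caller observes no difference.

```python
import hashlib

class HardwareUnavailable(Exception):
    """Raised by the (hypothetical) accelerator backend during maintenance."""

class AcceleratedDigest:
    """Stand-in for a hardware-offloaded primitive; a real deployment would
    submit the request to the accelerator driver here."""
    def __init__(self):
        self.in_maintenance = False
    def digest(self, data):
        if self.in_maintenance:
            raise HardwareUnavailable()
        return hashlib.sha256(data).digest()   # pretend this ran on hardware

class FallbackDigest:
    """Routes to hardware and transparently falls back to software, so the
    application (e.g. a TLS terminator) never sees the maintenance window."""
    def __init__(self, hw):
        self.hw = hw
    def digest(self, data):
        try:
            return self.hw.digest(data)
        except HardwareUnavailable:
            return hashlib.sha256(data).digest()  # software path

hw = AcceleratedDigest()
svc = FallbackDigest(hw)
before = svc.digest(b"payload")
hw.in_maintenance = True               # driver/firmware upgrade in progress
during = svc.digest(b"payload")        # served by the software path
```

The key property, and the one the paper's evaluation measures, is that both paths are functionally identical, so availability is preserved at some cost in throughput while the accelerator is offline.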

Keywords: accelerator, cryptography service, RAS, secure sockets layer/transport layer security, SSL/TLS, virtualization fallback architecture

Procedia PDF Downloads 149