Search results for: like function
615 Characterization of a Lipolytic Enzyme of Pseudomonas nitroreducens Isolated from Mealworm's Gut
Authors: Jung-En Kuan, Whei-Fen Wu
Abstract:
In this study, a symbiotic bacterium from the yellow mealworm's (Tenebrio molitor) mid-gut was isolated based on its ability to grow on minimal-tributyrin medium. After PCR amplification of its 16S rDNA, the resulting nucleotide sequences were analyzed by phylogenetic tree methods, and the isolate was designated Pseudomonas nitroreducens D-01. Next, by searching for lipolytic enzymes in its protein data bank, one potential lipolytic α/β hydrolase was identified, again using PCR amplification and nucleotide sequencing. To express this lipolytic gene from a plasmid, target-gene primers carrying C-terminal his-tag sequences were designed. Using the vector pET21a, a recombinant lipolytic hydrolase D gene with his-tag nucleotides was successfully cloned under the control of the T7 promoter. After transformation of the resulting plasmids into Escherichia coli BL21 (DE3), IPTG was used to induce expression of the recombinant protein. The protein products were purified on a metal-ion affinity column, and the purified proteins were capable of forming a clear zone on tributyrin agar plates. Enzyme activities were then determined by degradation of p-nitrophenyl esters, with the yellow end-product, p-nitrophenol, measured at O.D. 405 nm. This lipolytic enzyme most efficiently targets p-nitrophenyl butyrate and shows its highest activity at 40°C and pH 8 in potassium phosphate buffer. In thermal stability assays, the activity of the enzyme drops dramatically above 50°C. In metal ion assays, MgCl₂ and NH₄Cl enhance enzyme activity, while MnSO₄, NiSO₄, CaCl₂, ZnSO₄, CoCl₂, CuSO₄, FeSO₄, and FeCl₃ reduce it; NaCl has no effect. Most organic solvents, such as hexane, methanol, ethanol, acetone, isopropanol, chloroform, and ethyl acetate, decrease enzyme activity; however, activity increases in the presence of DMSO. All surfactants tested, including Triton X-100, Tween 80, Tween 20, and Brij 35, decrease lipolytic activity. Using Lineweaver-Burk double-reciprocal plots, the enzyme kinetics were determined as Km = 0.488 mM, Vmax = 0.0644 mM/min, and kcat = 3.01×10³ s⁻¹, with a catalytic efficiency kcat/Km of 6.17×10³ mM⁻¹s⁻¹. Based on phylogenetic analyses, this lipolytic protein is classified as a type IV lipase by the homologous conserved region of this lipase family.
Keywords: enzyme, esterase, lipolytic hydrolase, type IV
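As a quick illustration of the kinetic analysis reported above, the following Python sketch fits a Lineweaver-Burk double-reciprocal line to Michaelis-Menten rate data. The substrate concentrations are hypothetical, and the enzyme concentration is back-calculated from the reported kcat for consistency; these are not the authors' raw measurements.

```python
import numpy as np

# Hypothetical substrate concentrations (mM); rates generated from the
# reported Km and Vmax rather than taken from the paper's raw data.
S = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
v = 0.0644 * S / (0.488 + S)  # Michaelis-Menten curve

# Lineweaver-Burk: 1/v = (Km/Vmax)(1/S) + 1/Vmax, a line in (1/S, 1/v)
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax = 1.0 / intercept          # mM/min
Km = slope * Vmax               # mM

E_total = 3.57e-7               # assumed enzyme concentration (mM), hypothetical
kcat = Vmax / 60.0 / E_total    # s^-1 (min -> s conversion)

print(f"Km = {Km:.3f} mM, Vmax = {Vmax:.4f} mM/min")
print(f"kcat = {kcat:.3g} s^-1, kcat/Km = {kcat / Km:.3g} mM^-1 s^-1")
```

With the assumed enzyme concentration, the script reproduces values close to the reported kcat = 3.01×10³ s⁻¹ and kcat/Km = 6.17×10³ mM⁻¹s⁻¹.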
Procedia PDF Downloads 134
614 Cellular Components of the Hemal Node of Egyptian Cattle
Authors: Amira E. Derbalah, Doaa M. Zaghloul
Abstract:
Ten clinically healthy hemal nodes were collected from male bulls aged 2-3 years. Light microscopy revealed a capsule of connective tissue, consisting mainly of collagen fibers, surrounding the hemal node; numerous erythrocytes were found in the wide subcapsular sinus under the capsule. The parenchyma of the hemal node was divided into cortex and medulla. Diffuse lymphocytes and lymphoid follicles with germinal centers were the main components of the cortex, while the medulla contained a wide medullary sinus, diffuse lymphocytes and a few lymphoid nodules. The area occupied by lymph nodules was larger than that occupied by the non-nodular structures of lymphoid cords and blood sinusoids. Electron microscopy revealed the cellular components of the hemal node, including circulating erythrocytes intermingled with lymphocytes, plasma cells, mast cells, reticular cells, macrophages, megakaryocytes and the endothelial cells lining the blood sinuses. The lymphocytes were somewhat triangular in shape, with cytoplasmic processes extending between adjacent erythrocytes. Their nuclei were triangular to oval, lightly stained, with clear nuclear membrane indentation and distinct nucleoli. The reticular cells were elongated, with cytoplasmic processes extending between adjacent lymphocytes; rough endoplasmic reticulum, ribosomes and a few lysosomes were seen in their cytoplasm, and the nucleus was elongated with less condensed chromatin. Plasma cells were oval to irregular in shape, with numerous dilated rough endoplasmic reticulum cisternae containing electron-lucent material occupying the whole cytoplasm, and a few mitochondria. Their nuclei were centrally located and oval, with heterochromatin marginated and often clumped near the nuclear membrane. Occasionally, megakaryocytes and mast cells were seen among the lymphocytes. Megakaryocytes had a multilobulated nucleus and free ribosomes, often appearing as small aggregates in their cytoplasm, while mast cells had their characteristic electron-dense granules in the cytoplasm, together with a few electron-lucent granules. We conclude that the main function of the hemal node of cattle is the proliferation of lymphocytes; no role for plasma cells in erythrophagocytosis could be suggested.
Keywords: cattle, electron microscopy, hemal node, histology, immune system
Procedia PDF Downloads 403
613 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe
Authors: Elsadig Naseraddeen Ahmed Mohamed
Abstract:
In contrast to the uncertainty and complementarity principles, it will be shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function. This function is equivalent to the difference between the numbers of occupation and evacuation epochs up to that time, and also to the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point on the real time line, so they form a binary signal representing a complete mechanical description of physical reality. The times of these exchanges represent the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated using the fact that the universe's events actually extend along the positive and negative real time line in a single direction of extension as the number of exchanges increases. Hence there exists a noninvertible transformation matrix, defined as the matrix product of an invertible rotation matrix and a noninvertible scaling matrix, which changes the direction and magnitude of an exchange-event vector, respectively. These noninvertible transformations will be called actual transformations, in contrast to information transformations, by which we can navigate the universe's events, transformed by actual transformations, backward and forward along the real time line; these information transformations are derived as elements of a group that can be associated with their corresponding actual transformations. The actual and information model of the universe is derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy; after that time, the universe begins expanding in spacetime. This assumption makes superfluous the existence of Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all of its future and past events. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy, as present events of the universe; from these, its past and future events can be predicted approximately with high precision.
Keywords: binary-signal mechanics, actual-information model of the universe, actual transformation, information transformation, uncertainty principle, Laplace's demon
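To make the central construction concrete, here is a minimal Python sketch of the abstract's binary signal: the parity (modulo two) of the number of occupation/evacuation exchanges up to time t. The exchange epochs are hypothetical values chosen purely for illustration.

```python
import numpy as np

def binary_signal(exchange_times, t):
    """Occupation indicator at time t: the count of occupation/evacuation
    exchanges up to t, taken modulo two (a toy reading of the abstract)."""
    exchange_times = np.asarray(exchange_times, dtype=float)
    return int(np.sum(exchange_times <= t) % 2)

# Hypothetical exchange epochs along the time axis
epochs = [0.5, 1.3, 2.8, 4.0]
for t in [0.0, 1.0, 2.0, 3.0, 5.0]:
    print(t, binary_signal(epochs, t))  # alternates 0/1 across the epochs
```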
Procedia PDF Downloads 178
612 Impact of Alternative Fuel Feeding on Fuel Cell Performance and Durability
Authors: S. Rodosik, J. P. Poirot-Crouvezier, Y. Bultel
Abstract:
With the expansion of the hydrogen economy, Proton Exchange Membrane Fuel Cell (PEMFC) systems are often presented as promising energy converters for transport applications. However, reaching the durability of 5,000 h recommended by the U.S. Department of Energy and decreasing system cost are still major hurdles to their development. In order to increase system efficiency and simplify the system without affecting fuel cell lifetime, an architecture called alternative fuel feeding has been developed. It consists of a fuel cell stack divided into two parts, alternately fed, implemented on a 5-kW system for real-scale testing. The operating strategy can be considered close to Dead End Anode (DEA) operation, with specific modifications to avoid water and nitrogen accumulation in the cells. The two half-stacks are connected in series so that each can be fed alternately, and accumulated water and nitrogen can be shifted from one half-stack to the other according to the alternative feeding frequency. Thanks to the homogenization of water vapor along the stack, water management was improved. The operating conditions obtained at system scale are close to recirculation without the need for a pump or an ejector. In a first part, a performance comparison with the DEA strategy was performed. At high temperature and low pressure (80°C, 1.2 bar), the performance of alternative fuel feeding was higher, and the system efficiency increased. In a second part, in order to highlight the benefits of the architecture for fuel cell lifetime, two durability tests lasting up to 1,000 h were conducted. A test on the 5-kW system was compared to a reference test performed on a test bench with a shorter stack, conducted with well-controlled operating parameters and a flow-through hydrogen strategy. The durability test is based on the Fuel Cell Dynamic Load Cycle (FC-DLC) protocol, adapted to the system limitations: without OCV steps and with a maximum current density of 0.4 A/cm². In situ local measurements with a segmented S++® plate, performed throughout the tests, showed a more homogeneous distribution of current density with alternative fuel feeding than with the flow-through strategy. The tests performed in this work enabled an understanding of the advantages and drawbacks of this architecture. Alternative fuel feeding appears to be a promising solution to ensure the humidification function on the anode side with a simplified fuel cell system.
Keywords: automotive conditions, durability, fuel cell system, proton exchange membrane fuel cell, stack architecture
Procedia PDF Downloads 143
611 Bovine Sperm Capacitation Promoters: The Comparison between Serum and Non-Serum Albumin Originating from Fish
Authors: Haris Setiawan, Phongsakorn Chuammitri, Korawan Sringarm, Montira Intanon, Anucha Sathanawongs
Abstract:
Capacitation is a prerequisite for sperm to achieve the competence to penetrate the oocyte; it occurs naturally in vivo throughout the female reproductive tract, involving secretory fluid and epithelial cells. One of the crucial compounds in the oviductal fluid that promotes capacitation is albumin, secreted at high concentrations. However, the difficulty of collection and the inconsistency of oviductal fluid composition throughout the estrous cycle have led to its function being replaced by serum-based albumins such as bovine serum albumin (BSA). BSA has been widely used and shown to have a stabilizing effect that keeps the acrosome intact during the capacitation process, to modulate hyperactivation, and to elevate the number of sperm bound to the zona pellucida. Contrary to these benefits, the use of blood-derived products in culture systems is not sustainable and increases the risk of disease transmission, such as Creutzfeldt-Jakob disease (CJD) and bovine spongiform encephalopathy (BSE). Moreover, it has been asserted that this substance is an aeroallergen that produces allergies and respiratory problems. In an effort to identify an alternative, sustainable and non-toxic albumin source, the present work evaluated sperm responses to a capacitation medium containing albumin derived from the flesh of the snakehead fish (Channa striata). Before examining the ability of this non-serum albumin to promote capacitation in bovine sperm, albumin was detected using bromocresol purple (BCP) at a level of 25% in the snakehead fish extract. Following SDS-PAGE and densitometric analysis, two major bands at 40 kDa and 47 kDa, comprising 57% and 16% of the total protein loaded, were detected as potential albumin-related bands. Significant differences were observed in all kinematic parameters upon incubation in the capacitation medium. Moreover, consistently higher values were obtained for the kinematic parameters related to hyperactivation, such as amplitude of lateral head displacement (ALH), curvilinear velocity (VCL), and linearity (LIN), when sperm were treated with 3 mg/mL of snakehead fish albumin compared with the other treatments. Likewise, substantially more sperm retained an intact acrosome upon incubation with various concentrations of snakehead fish albumin for 90 minutes, indicating that this level of snakehead fish albumin can be used to replace bovine serum albumin. However, further study is required to purify the albumin from snakehead fish extract for more reliable findings.
Keywords: capacitation promoter, snakehead fish, non-serum albumin, bovine sperm
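For readers unfamiliar with the hyperactivation-related kinematic parameters mentioned above, the following Python sketch computes simplified versions of VCL, VSL, LIN and ALH from a two-dimensional sperm head-position track. The track, frame rate, smoothing window and ALH definition are illustrative assumptions; commercial CASA systems use more elaborate definitions.

```python
import numpy as np

def casa_kinematics(track, dt):
    """Toy CASA-style kinematics from a 2-D head-position track (um)
    sampled every dt seconds; simplified relative to commercial CASA."""
    track = np.asarray(track, dtype=float)
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    total_time = dt * (len(track) - 1)
    vcl = steps.sum() / total_time                           # curvilinear velocity, um/s
    vsl = np.linalg.norm(track[-1] - track[0]) / total_time  # straight-line velocity
    lin = 100.0 * vsl / vcl                                  # linearity, %
    # Average path via a simple moving average; ALH as mean lateral deviation
    kernel = np.ones(5) / 5.0
    avg_path = np.column_stack([np.convolve(track[:, i], kernel, mode="same")
                                for i in range(2)])
    alh = 2.0 * np.mean(np.linalg.norm(track - avg_path, axis=1))
    return vcl, vsl, lin, alh

# Hypothetical zig-zag track: 30 frames at 60 Hz
t = np.arange(30)
track = np.column_stack([t * 2.0, 3.0 * np.sin(t / 2.0)])
print(casa_kinematics(track, dt=1 / 60))
```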
Procedia PDF Downloads 115
610 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and exploiting the inner thermal-core structure of tropical cyclones still poses challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal-core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal-core information along the pressure direction is comprehensively expressed through the maximum intensity projection (MIP) method, constructing coarse-grained thermal-core images that represent the tropical cyclone. These images provide a coarse-grained wind speed estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal-core information from multiple view profiles with a distributed network, and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tailed distribution of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and an ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The tropical cyclones selected span 2012 to 2021 and are distributed in the North Atlantic (NA) region; the training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major classes: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal-core data can effectively represent the category and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes using these data.
Keywords: artificial intelligence, deep learning, data mining, remote sensing
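Two building blocks named above, the maximum intensity projection and the stage-one focal loss, are easy to sketch. The following PyTorch snippet is an illustrative toy, not the authors' network: the channel count, grid size, and class head are hypothetical shapes.

```python
import torch
import torch.nn.functional as F

def pressure_mip(bt_cube):
    """Maximum intensity projection of a brightness-temperature cube shaped
    (pressure levels/channels, H, W), collapsing the pressure axis."""
    return bt_cube.max(dim=0).values  # coarse-grained 2-D thermal-core image

def focal_loss(logits, target, gamma=2.0):
    """Focal loss for a 3-class head (pre/minor/major hurricane),
    down-weighting easy examples to handle the long-tailed distribution."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()

# Hypothetical shapes: 22 ATMS channels on a 64x64 grid, 8-sample batch
cube = torch.randn(22, 64, 64)
mip_img = pressure_mip(cube)
logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
print(mip_img.shape, focal_loss(logits, labels).item())
```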
Procedia PDF Downloads 63
609 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) to detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scores, tuning of false positives and negatives, and automated feedback. An initial approach using natural language processing techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) of Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology on two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques, and CNN models with ensemble modelling techniques, did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges, such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
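To illustrate the path-context idea behind Code2Vec-style representations, here is a small Python sketch that extracts (token, path, token) triples from an AST. It operates on Python source purely for convenience (the paper targets Java and C++), and it joins root-to-leaf paths rather than computing the true shortest path between leaves, so it is a simplification of the published scheme.

```python
import ast
from itertools import combinations

def leaf_paths(tree):
    """Collect (leaf token, root-to-leaf node-type path) pairs from an AST."""
    out = []
    def walk(node, path):
        children = list(ast.iter_child_nodes(node))
        path = path + [type(node).__name__]
        if not children:
            token = getattr(node, "id", getattr(node, "arg", type(node).__name__))
            out.append((str(token), path))
        for child in children:
            walk(child, path)
    walk(tree, [])
    return out

def path_contexts(source):
    """Code2Vec-style contexts: (start token, connecting path, end token)."""
    leaves = leaf_paths(ast.parse(source))
    return [(a, "|".join(pa + pb[::-1]), b)
            for (a, pa), (b, pb) in combinations(leaves, 2)]

src = "def f(x):\n    return x + 1\n"
for ctx in path_contexts(src)[:3]:
    print(ctx)
```

Each triple would then be embedded and aggregated by attention into a single code vector, which feeds the downstream vulnerability classifier.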
Procedia PDF Downloads 109
608 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model
Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho
Abstract:
Rainfall is a critical component of climate, governing vegetation growth and production and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northernmost section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and the spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with established Gaussian process, kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons, and show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as, or better than, four established and widely used models on the Narok data set. The model produces rainfall predictions consistent with expectation and in good agreement with blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
Keywords: non-stationary covariance function, Gaussian process, ungulate biomass, MCMC, Maasai Mara ecosystem
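As an illustration of the general structure of such a model (not the authors' exact specification), the following sketch in Python with PyMC fits a hierarchical regression with gauge-level random intercepts and the three covariates named in the abstract. All data are simulated stand-ins, and a row is simply omitted for any missing gauge-month, so missing values contribute nothing to the likelihood.

```python
import numpy as np
import pymc as pm

# Hypothetical design: n monthly observations at 23 gauges with three
# covariates (elevation, distance to Lake Victoria, minimum temperature).
rng = np.random.default_rng(1)
n, g = 400, 23
gauge = rng.integers(0, g, n)          # which gauge recorded each row
X = rng.normal(size=(n, 3))            # standardized covariates (simulated)
y = rng.gamma(2.0, 30.0, size=n)       # fake monthly rainfall, mm

with pm.Model() as rain_model:
    beta = pm.Normal("beta", 0.0, 10.0, shape=3)      # covariate effects
    mu_g = pm.Normal("mu_gauge", 0.0, 10.0, shape=g)  # gauge random intercepts
    sigma = pm.HalfNormal("sigma", 50.0)
    mean = mu_g[gauge] + pm.math.dot(X, beta)
    pm.Normal("rain", mean, sigma, observed=y)        # missing months: no rows
    idata = pm.sample(500, tune=500, chains=2, progressbar=False)
```

The published model adds a non-stationary spatial covariance on top of this regression skeleton; the sketch shows only the hierarchical-regression core.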
Procedia PDF Downloads 297
607 Efficiency and Equity in Italian Secondary School
Authors: Giorgia Zotti
Abstract:
This research comprehensively investigates the multifaceted interplay among school performance, individual backgrounds, and regional disparities within Italian secondary education. Leveraging data from the INVALSI 2021-2022 database, the analysis scrutinizes two fundamental distributions of educational achievement: standardized Invalsi test scores and official grades in Italian and Mathematics, focusing specifically on final-year secondary school students in Italy. The study initially employs Data Envelopment Analysis (DEA) to assess school performance. This involves constructing a production function encompassing inputs (hours spent at school) and outputs (Invalsi scores in Italian and Mathematics, along with official grades in Italian and Mathematics). The DEA approach is applied in both of its versions, traditional and conditional; the latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. Additionally, the analysis delves into regional disparities by leveraging the Theil index, providing insights into disparities within and between regions. Moreover, within the framework of inequality-of-opportunity theory, the study quantifies the inequality of opportunity in students' educational achievements. The methodology applied is the parametric approach in its ex-ante version, considering diverse circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. A Shapley decomposition is then applied to understand how much each circumstance affects the outcomes. The outcomes of this investigation unveil pivotal determinants of school performance, notably highlighting the influence of school type (Liceo) and socioeconomic status. The research reveals regional disparities, elucidating instances where specific schools outperform others in official grades compared with Invalsi scores, shedding light on the intricate nature of regional educational inequalities. Furthermore, it finds greater inequality of opportunity within the distribution of Invalsi test scores than in official grades, underscoring pronounced disparities at the student level. This analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities within Italian secondary education.
Keywords: inequality, education, efficiency, DEA approach
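As a small illustration of the Theil index used above for regional disparities, the following Python sketch computes the index and its between-region/within-region decomposition. The scores and region labels are invented for the example, not INVALSI data.

```python
import numpy as np

def theil_index(x):
    """Theil T index: mean of (x_i/mu) * ln(x_i/mu); 0 means perfect equality."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

def theil_decomposition(scores, regions):
    """Split overall inequality into between-region and within-region parts."""
    scores = np.asarray(scores, dtype=float)
    regions = np.asarray(regions)
    mu, n = scores.mean(), len(scores)
    between, within = 0.0, 0.0
    for reg in np.unique(regions):
        grp = scores[regions == reg]
        share = len(grp) / n          # population share of the region
        rel = grp.mean() / mu         # relative mean achievement
        between += share * rel * np.log(rel)
        within += share * rel * theil_index(grp)
    return between, within            # their sum equals theil_index(scores)

# Hypothetical test scores for students in three regions
scores = np.array([210, 190, 230, 180, 160, 205, 240, 175, 150])
regions = np.array(["N", "N", "N", "C", "C", "C", "S", "S", "S"])
print(theil_decomposition(scores, regions))
```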
Procedia PDF Downloads 76
606 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions
Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen
Abstract:
Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment, because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm3 have been identified from post-combustion CO2 capture (PCCC) plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in Matlab. The model predicts the droplet size, the droplet internal variable profiles and the mass transfer fluxes as functions of position in the absorber. The Matlab model is based on a subclass of the method of weighted residuals for boundary value problems, the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations to describe the droplet internal profiles for all relevant constituents; heat transfer across the interface and inside the droplet is also included. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. As a preliminary example of droplet composition and temperature profiles, a droplet with an initial size of 3 microns, initially containing a 5M MEA solution, is exposed to an atmosphere free of MEA; the composition of the gas phase and the temperature change with respect to time throughout the absorber.
Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation
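To give a flavor of the orthogonal collocation method mentioned above, here is a self-contained Python sketch, not the authors' Matlab code, that uses the standard Chebyshev collocation construction to solve a toy steady diffusion-reaction profile inside a slab-symmetric droplet and checks it against the analytical solution. The Thiele modulus and grid size are illustrative choices.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and differentiation matrix (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # negative-sum trick for the diagonal
    return D, x

# Steady diffusion-reaction inside the droplet: c'' = phi^2 * c on [-1, 1],
# with c(-1) = c(1) = 1; exact solution cosh(phi*x) / cosh(phi)
phi = 3.0
D, x = cheb(24)
A = D @ D - phi**2 * np.eye(len(x))
b = np.zeros(len(x))
A[0, :], A[-1, :] = 0.0, 0.0          # overwrite rows to impose boundary values
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 1.0
c_num = np.linalg.solve(A, b)
print(np.max(np.abs(c_num - np.cosh(phi * x) / np.cosh(phi))))  # ~1e-10
```

The full droplet model couples many such profiles with interface mass and heat transfer, but the discretization step is exactly of this form.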
Procedia PDF Downloads 267
605 Technical and Economic Potential of Partial Electrification of Railway Lines
Authors: Rafael Martins Manzano Silva, Jean-Francois Tremong
Abstract:
Electrification of railway lines makes it possible to increase the speed, power, capacity and energy efficiency of rolling stock. However, this electrification process is complex and costly. An electrification project is not just about the design of the catenary; it also includes the installation of structures around the electrification, such as substations, electrical isolation, signalling, telecommunication and civil engineering structures. France has more than 30,000 km of railways, of which only 53% are electrified. The remaining 47% are served by diesel locomotives and represent only 10% of the traffic (tonne-km). For this reason, a new, less expensive type of electrification is needed to enable the modernization of these railways. One solution could be the use of hybrid trains. This technology opens up new opportunities for less expensive infrastructure development, such as the partial electrification of railway lines. On a partially electrified railway, the power supply of these hybrid trains could come either from the catenary or from an on-board energy storage system (ESS). The on-board ESS would cover the energy needs of the train along the non-electrified zones, while in electrified zones the catenary would feed the train and recharge the on-board ESS. The objective of this paper is to identify the technical and economic potential of partial electrification of railway lines. The study provides different electrification scenarios in which the most expensive places to electrify are instead covered by the on-board ESS. The target is to reduce the cost of new electrification projects, i.e., to reduce the cost of electrification infrastructure without increasing the cost of rolling stock. The scenarios are constructed as a function of the electrification cost of each structure; this cost varies considerably because installing catenary supports in tunnels, bridges and viaducts is much more expensive than in other zones of the railway. These scenarios are used to describe the power supply system and to choose between the catenary and the on-board energy storage depending on the position of the train on the railway. To identify the influence of each partial electrification scenario on the sizing of the on-board ESS, a model of the railway line and of the rolling stock is developed for a real case, a railway line located in the south of France. The energy consumption and the power demanded at each point of the line, for each power supply (catenary or on-board ESS), are provided at the end of the simulation. Finally, the cost of a partial electrification is obtained by adding the civil engineering costs of the zones to be electrified to the cost of the on-board ESS. The study of the technical and economic potential ends with the identification of the most economically interesting electrification scenario.
Keywords: electrification, hybrid, railway, storage
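The following Python sketch illustrates, in heavily simplified form, the kind of scenario evaluation described above: given a line split into electrified and non-electrified segments, it sizes the on-board ESS from the worst cumulative energy deficit and totals the catenary civil cost. All segment lengths, costs, consumption figures, the recharge power and the constant-speed assumption are illustrative, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    length_km: float
    electrified: bool
    energy_kwh_per_km: float   # traction demand on this segment (assumed)
    civil_cost_per_km: float   # catenary cost; tunnels/viaducts cost more

def size_ess_and_cost(segments, recharge_kw=400.0, speed_kmh=100.0):
    """Toy sizing: track the energy deficit along the line; the ESS must
    cover the worst dip, and only electrified segments incur civil cost."""
    deficit, worst, infra = 0.0, 0.0, 0.0
    for seg in segments:
        hours = seg.length_km / speed_kmh
        if seg.electrified:
            infra += seg.civil_cost_per_km * seg.length_km
            deficit = max(deficit - recharge_kw * hours, 0.0)  # recharge ESS
        else:
            deficit += seg.energy_kwh_per_km * seg.length_km
            worst = max(worst, deficit)
    return worst, infra   # required ESS capacity (kWh), catenary civil cost

line = [Segment(20, True, 8, 0.3e6),    # plain line: cheap catenary
        Segment(5, False, 10, 1.5e6),   # tunnel: left unelectrified
        Segment(15, True, 8, 0.3e6)]
print(size_ess_and_cost(line))
```

Comparing scenarios then amounts to toggling which segments are electrified and adding the cost of the resulting ESS capacity to the civil cost.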
Procedia PDF Downloads 432
604 Gene Expression and Staining Agents: Exploring the Factors That Influence the Electrophoretic Properties of Fluorescent Proteins
Authors: Elif Tugce Aksun Tumerkan, Chris Lowe, Hannah Krupa
Abstract:
Fluorescent proteins are self-sufficient in forming chromophores, with a visible wavelength, from a sequence of three amino acids within their own polypeptide structure. A chromophore is a molecule that absorbs a photon of light and exhibits an energy transition equal to the energy of the absorbed photon. Fluorescent proteins (FPs) consist of a chain of 238 amino acid residues composed of 11 beta strands shaped into a cylinder surrounding an alpha helix. With a better understanding of the chromophore system and the increasing advances in protein engineering in recent years, the properties of FPs offer the potential for new applications. They are used as sensors and probes in molecular biology and cell-based research, giving the chance to observe the localization, structural variation and movement of FP-tagged cells. For clarifying the functional uses of fluorescent proteins, the electrophoretic properties of these proteins are among the most important parameters. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) is commonly used for determining electrophoretic properties. While many techniques are used for determining functionality in protein-based research, SDS-PAGE analysis can only provide a molecular-level assessment of the proteolytic fragments. Before SDS-PAGE analysis, fluorescent proteins need to be successfully purified. Because direct purification of target FPs from animals is difficult, gene expression is commonly used, which must be done by transformation with a plasmid. Furthermore, the properties of the gel used in electrophoresis and of the staining agents play a key role. This review explores the different factors that have an impact on the electrophoretic properties of fluorescent proteins. Fluorescent protein separation and purification are essential steps before electrophoresis that should be done very carefully. For protein purification, the gene expression process and its subsequent steps have a significant function. Successful gene expression depends on the properties of the bacteria selected for expression and the plasmid used; each bacterium has its own characteristics to which gene expression is very sensitive, and the procedure used is likewise an important factor for fluorescent protein expression. Other important factors are the gel formula and the staining agents used. The gel formula has an effect on the mobilization of specific proteins, and staining with the correct agents is a key step for the visualization of electrophoretic protein bands; the visibility of proteins can change depending on the staining reagents. Overall, this review emphasizes that gene expression and purification have a stronger effect than the electrophoresis protocol and staining agents.
Keywords: cell biology, gene expression, staining agents, SDS-PAGE
Procedia PDF Downloads 195
603 Synthesis, Physicochemical Characterization and Study of the Antimicrobial Activity of Chlorobutanol
Authors: N. Hadhoum, B. Guerfi, T. M. Sider, Z. Yassa, T. Djerboua, M. Boursouti, M. Mamou, F. Z. Hadjadj Aoul, L. R. Mekacher
Abstract:
Introduction and objectives: Chlorobutanol is a raw material mainly used as an antiseptic and antimicrobial preservative in injectable and ophthalmic preparations. The main objective of our study was the synthesis and evaluation of the antimicrobial activity of chlorobutanol hemihydrate. Material and methods: Chlorobutanol was synthesized by the nucleophilic addition of chloroform to acetone and identified by infrared absorption using a Spectrum One FTIR spectrometer, melting point, scanning electron microscopy and colorimetric reactions. The content of the active substance was determined by assaying the degradation products of chlorobutanol in basic solution. The chlorobutanol obtained was subjected to bacteriological tests in order to study its antimicrobial activity. The antibacterial activity was evaluated against strains such as Escherichia coli (ATCC 25922), Staphylococcus aureus (ATCC 25923) and Pseudomonas aeruginosa (ATCC: American Type Culture Collection). The antifungal activity was evaluated against human pathogenic fungal strains, namely Candida albicans and Aspergillus niger, provided by the parasitology laboratory of the Hospital of Tizi-Ouzou, Algeria. Results and discussion: Chlorobutanol was obtained in an acceptable yield. The characterization tests of the product showed a white, crystalline appearance (confirmed by scanning electron microscopy), solubilities (in water, ethanol and glycerol) and a melting temperature in accordance with the requirements of the European Pharmacopoeia. The colorimetric reactions indicated the presence of a trihalogenated carbon and an alcohol function. The spectral (IR) identification showed the presence of characteristic chlorobutanol peaks and confirmed its structure. The microbiological study revealed an antimicrobial effect on all strains tested (Staphylococcus aureus (MIC = 1250 µg/mL), E. coli (MIC = 1250 µg/mL), Pseudomonas aeruginosa (MIC = 1250 µg/mL), Candida albicans (MIC = 2500 µg/mL), Aspergillus niger (MIC = 2500 µg/mL)), with MIC values close to literature data. Conclusion: Overall, the synthesized chlorobutanol satisfied the requirements of the European Pharmacopoeia and possesses antibacterial and antifungal activity; nevertheless, the purification step must be emphasized in order to eliminate as many impurities as possible.
Keywords: antimicrobial agent, bacterial and fungal strains, chlorobutanol, MIC, minimum inhibitory concentration
Procedia PDF Downloads 169
602 Non-Cytotoxic Natural Sourced Inorganic Hydroxyapatite (HAp) Scaffold Facilitate Bone-like Mechanical Support and Cell Proliferation
Authors: Sudip Mondal, Biswanath Mondal, Sudit S. Mukhopadhyay, Apurba Dey
Abstract:
Bioactive materials improve the lifespan of devices but have mechanical limitations. Mechanical characterization is one of the most important steps in evaluating the lifespan and functionality of a scaffold material. After implantation, primary rejection of a scaffold occurs when the material is not biocompatible with the host system; the second major problem arises from mechanical failure. Both failure modes can be avoided by prior evaluation of the scaffold materials. In this study, chemically treated Labeo rohita scale was used to synthesize hydroxyapatite (HAp) biomaterial. Thermogravimetric and differential thermal analysis (TG-DTA) was carried out to ensure thermal stability. The chemical composition and bond structure of the wet ball-milled, calcined HAp powder were characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), field emission scanning electron microscopy (FE-SEM), transmission electron microscopy (TEM) and energy-dispersive X-ray (EDX) analysis. The fish scale-derived apatite consists of nano-sized particles with a Ca/P ratio of 1.71. Biocompatibility was assessed through cytotoxicity evaluation and the MTT assay in MG63 osteoblast cell lines. In the cell attachment study, the cells attached tightly to the HAp scaffolds developed in the laboratory. The results clearly suggest that the HAp material synthesized in this study has no cytotoxic effect and has a natural binding affinity for mammalian cell lines. The synthesized HAp powder was further used to develop a porous scaffold material with suitable mechanical properties: a compressive strength of ~0.8 GPa, a hardness of ~1.10 GPa and a porosity of ~30-35%, which is acceptable for implantation in the trauma region in an animal model. Histological analysis also supports the bio-affinity of the processed HAp biomaterials in a Wistar rat model, in which the contact reaction and stability at the artificial or natural prosthesis interface were investigated for biomedical function. This study suggests that natural-sourced, fish scale-derived HAp could be used as a suitable alternative biomaterial for tissue engineering applications in the near future.
Keywords: biomaterials, hydroxyapatite, scaffold, mechanical property, tissue engineering
Procedia PDF Downloads 457
601 Preparation and Evaluation of Poly(Ethylene Glycol)-B-Poly(Caprolactone) Diblock Copolymers with Zwitterionic End Group for Thermo-Responsive Properties
Authors: Bo Keun Lee, Doo Yeon Kwon, Ji Hoon Park, Gun Hee Lee, Ji Hye Baek, Heung Jae Chun, Young Joo Koh, Moon Suk Kim
Abstract:
Thermo-responsive materials are viscoelastic materials that undergo a sol-to-gel phase transition at a specific temperature, and many such materials have been developed. MPEG-b-PCL (MPC), a thermo-responsive material, contains hydrophilic and hydrophobic segments and forms an ordered crystalline structure of hydrophobic PCL segments in aqueous solution. This ordered crystalline structure packs tightly or aggregates and finally induces an aggregated gel through intra- and inter-molecular interactions as a function of temperature. We therefore introduced anionic and cationic groups at the end of the PCL chain to alter the hydrophobicity of the PCL segment. Introducing anionic and cationic groups at the PCL chain end altered solubility by changing the crystallinity and hydrophobicity of the PCL block domains. These results indicated that the properties of the end group of the hydrophobic PCL block, and the balance between hydrophobicity and hydrophilicity, affect the thermo-responsive behavior of the copolymers in aqueous solution. We thus concluded that the determinant of the temperature-dependent thermo-responsive behavior of MPC is the ionic end group of the PCL block, and so we introduced zwitterionic end groups to investigate the thermo-responsive behavior of MPC. Methoxypoly(ethylene oxide) and ε-caprolactone (CL) were copolymerized to give varying hydrophobic PCL lengths, together with an MPC featuring a zwitterionic sulfobetaine (MPC-ZW) at the chain end of the PCL segment. The MPC and MPC-ZW copolymers were in the sol state at room temperature when prepared as 20-wt% aqueous solutions. The solubility of MPC decreased as the molecular weight of the PCL block increased: the solubilization time of MPC-2.4k was around 20 min, increasing to 30 min and 1 h for MPC-2.8k and MPC-3.0k, respectively, while MPC-3.6k was not solubilized. In the case of MPC-ZW 3.6k, however, the zwitterion-modified copolymer solubilized in 3-5 min. This result indicates that the zwitterionic end group increased the aqueous solubility of the diblock copolymer even when the length of the hydrophobic PCL segment was increased. In summary, MPC and MPC-ZW diblock copolymers featuring zwitterionic end groups were synthesized successfully. The sol-to-gel phase transition occurred at a specific temperature depending on the length of the hydrophobic PCL segment and on the zwitterion group attached to the MPC chain end, indicating that the zwitterionic end groups reduced the hydrophobicity of the PCL block and changed the solubilization behavior. The MPC-ZW diblock copolymer can be utilized as a potential injectable drug and cell carrier.
Keywords: thermo-responsive material, zwitterionic, hydrophobic, crystallization, phase transition
Procedia PDF Downloads 508
600 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for predicting the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models, in which three levels can be identified: the first level describes how a given growth model produces the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for their full conditional distributions. As plant growth models are nonlinear, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed; this allows the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made in measuring leaf areas is proportional to the leaf area itself; multiplicative normal noise on the observations is therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm that notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data of a single individual to be available at all times, nor for the times at which data are available to be the same for all individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
Keywords: Bayesian, genotypic differentiation, hierarchical models, plant growth models
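To sketch the individual-level Metropolis step described above, the following Python toy replaces the organ-scale GreenLab model with a logistic leaf-area curve while keeping the abstract's multiplicative normal noise. All parameter values and observations are simulated, and the exact Gibbs updates for the population level are only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaf_area(t, a, b):
    """Toy logistic growth curve standing in for the organ-scale GreenLab model."""
    return a / (1.0 + np.exp(-b * (t - 10.0)))

def log_lik(y, t, a, b, tau):
    """Multiplicative normal noise: observation sd proportional to leaf area."""
    mu = leaf_area(t, a, b)
    sd = tau * mu
    return np.sum(-0.5 * ((y - mu) / sd) ** 2 - np.log(sd))

# Hypothetical data: one plant observed on irregular days (gaps are allowed)
t_obs = np.array([3.0, 6.0, 9.0, 14.0, 18.0, 21.0])
y_obs = leaf_area(t_obs, 12.0, 0.5) * np.exp(rng.normal(0, 0.05, t_obs.size))

# Metropolis step for the individual parameters (a, b); in the full hierarchical
# sampler, population means/variances would then be drawn by exact Gibbs
# updates from their conjugate full conditionals.
theta, tau = np.array([8.0, 0.3]), 0.05
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, [0.2, 0.02])
    if prop.min() > 0:
        logr = log_lik(y_obs, t_obs, *prop, tau) - log_lik(y_obs, t_obs, *theta, tau)
        if np.log(rng.uniform()) < logr:
            theta = prop
    samples.append(theta.copy())
print(np.mean(samples[2500:], axis=0))   # posterior mean near (12, 0.5)
```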
Procedia PDF Downloads 304
599 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method
Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat
Abstract:
Electrical discharge machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in a dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness, and the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions; however, RSM is not free from problems when applied to multi-factor, multi-response situations. Design of experiments (DOE) techniques are therefore used to select the optimum machining conditions for machining AISI 4140 by EDM. The purpose of this paper is to determine the optimal factors of the EDM process and to investigate the feasibility of design-of-experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of the optimized settings of the key machining factors (pulse-on time, gap voltage, flushing pressure, input current and duty cycle) on material removal and surface roughness was carried out using a central composite design, with the objective of maximizing the material removal rate (MRR). Central composite design data are used to develop second-order polynomial models with interaction terms, and the insignificant coefficients are eliminated using the Student's t test and the F test for goodness of fit. The CCD is first used to determine the optimal factors of the EDM process for maximizing the MRR; the responses are then treated through an objective function to establish the set of key machining factors that satisfies the optimization problem of the EDM process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the EDM process.
Keywords: electric discharge machining (EDM), modeling, optimization, CCRD
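To illustrate the second-order RSM fit at the heart of this approach, the following Python sketch builds a full quadratic model (intercept, linear, interaction and square terms) from coded CCD-style factor settings and fits it by least squares. The factor levels, response values and coefficients are simulated for the example, not the paper's experimental data.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Full second-order RSM model: intercept, linear, interaction, square terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Hypothetical coded factors: pulse-on time, current, voltage (axial points
# at +/- alpha = 1.68, as in a rotatable CCD); MRR responses are simulated.
rng = np.random.default_rng(7)
X = rng.uniform(-1.68, 1.68, size=(20, 3))
mrr = (5 + 2 * X[:, 0] + 1.5 * X[:, 1]
       - 0.8 * X[:, 0] * X[:, 1] - 1.2 * X[:, 2] ** 2
       + rng.normal(0, 0.2, 20))

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, mrr, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((mrr - pred) ** 2) / np.sum((mrr - mrr.mean()) ** 2)
print(np.round(coef, 2), f"R^2 = {r2:.3f}")
```

In practice, each fitted coefficient would then be screened with a t test and the reduced model optimized over the coded factor space.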
Procedia PDF Downloads 344
598 Subjectivity in Miracle Aesthetic Clinic Ambient Media Advertisement
Authors: Wegig Muwonugroho
Abstract:
Subjectivity in advertising is a 'power' possessed by advertisements to construct trends, concepts, truth, and ideology through the subconscious mind. Advertisements, in performing their function as message conveyors, use visual representation to suggest to people what is ideal. Ambient media is an advertising medium that makes the best use of the environment where the advertisement is located. Miracle Aesthetic Clinic (Miracle) popularizes the visual representation of its ambient media advertisement through the omission of the face-image of the two female mannequins that function as its ambient media models. Usually, the face of a model in an advertisement is an image commodity with selling value; however, the faces of the ambient media models in the Miracle advertising campaign are pressed against the table and wall. This face-concealing aspect creates not only a paradox of subjectivity but also a plurality of meaning. This research applies the critical discourse analysis method to analyze subjectivity and obtain insight into the ambient media's meaning. First, in the stage of textual analysis, the attributes placed on the female mannequins imply that the models represent modern women, identified with the identities of their social milieus. The communication signs constructed are of women who have lost their subjectivity and 'feel embarrassed' to show their faces to the public because of pimples on their faces. Second, the analysis of discourse practice points out that ambient media, as a communication medium, has been comprehensively responded to by the targeted audiences. Ambient media plays the role of an actor because of its eye-catching setting, taking up space in areas where the public wander; indeed, when the public realize that the ambient media models are motionless, unlike humans, a stronger relation appears, marked by several responses from targeted audiences. Third, in the stage of analysis of social practice, soap operas and celebrity gossip shows on television become a dominant discourse influencing the advertisement's meaning. The subjectivity of the Miracle advertisement corners women through the absence of women's participation in public space, the representation of women in isolation, and the portrayal of women as anxious about their social rank when their faces suffer from pimples. The ambient media advertising campaign of Miracle is quite successful in constructing a new trend discourse of facial beauty that is not limited to benchmarks of common beauty virtues: the idea of beauty can also be presented by a visualization of 'when a woman doesn't look good'.
Keywords: ambient media, advertisement, subjectivity, power
Procedia PDF Downloads 324
597 Soil Properties of Alfisols in the Nicoya Peninsula, Guanacaste, Costa Rica
Authors: Elena Listo, Miguel Marchamalo
Abstract:
This research studies the properties of soils located in the watershed of the Jabillo River in the Guanacaste province, Costa Rica. The soils are classified as Alfisols (T. Haplustalfs); in the flatter, grazed parts as Fluventic Haplustalfs or, as a consequence of poor drainage, as F. Epiaqualfs. The objective of this project is to define the status of the soil, to use remote sensing as a tool for analyzing the evolution of land use, and to determine the water balance of the watershed in order to improve the efficiency of the water collecting systems. Soil samples were analyzed from trial pits dug in secondary forest, degraded pasture, a mature teak plantation, and regrowth of Tectona grandis L. f., a species that has developed favorably in the area. Furthermore, to complete the study, infiltration measurements were taken with an artificial rainfall simulator, together with studies of soil compaction with a penetrometer, at points strategically selected from the different land uses. Regarding remote sensing, nearly 40 data samples were collected per plot of land. The sources of radiation were reflected sunlight from the upper and lower leaf surfaces, bare soil, streams, roads and logs, and soil samples. Infiltration reached high levels; the highest values came from the secondary forest and the mature plantation, owing to their high proportion of organic matter, relatively low bulk density, and high hydraulic conductivity. Teak regrowth had a low rate of infiltration, because the soil compaction studies showed partial compaction down to 50 cm. The secondary forest presented a compaction layer from 15 cm to 30 cm deep, and the degraded pasture, as a result of grazing, in the first 15 cm. In this area, the Alfisol soils have a high content of iron oxides, which causes a higher reflectivity close to the infrared region of the electromagnetic spectrum (around 700 nm) as a result of their clay texture; specifically in the teak plantation, where reflectivity reaches values of 90%, this is due to the high clay content relative to the other plots. In conclusion, the protective function of secondary forests is reaffirmed with regard to erosion and high rates of infiltration. In humid climates and permeable soils, the decrease in runoff is smaller, while percolation increases. The remote sensing indicates that, being clay soils, they retain moisture better, which means a low reflectivity despite their fine texture.
Keywords: alfisols, Costa Rica, infiltration, remote sensing
Procedia PDF Downloads 697
596 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers operating in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck an entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as enough processors, their number depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms; it is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We then address secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
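The recovery-threshold idea is easiest to see in the classic (non-private) polynomial code for coded matrix multiplication, which the sketch below implements in Python. It is an illustrative toy, not the paper's PSGPD scheme: matrix sizes, the block count m, and evaluation points are arbitrary assumptions. X is split into m row blocks and Y into m column blocks; each worker multiplies one evaluation of the block polynomials, and any m² returned results suffice to interpolate W.

```python
import numpy as np

def polynomial_code_matmul(X, Y, n_workers, m=2, drop=None):
    """Toy polynomial code: workers evaluate X(a) @ Y(a) at distinct points a;
    any m*m results reconstruct W = X @ Y despite straggling workers."""
    xb = np.split(X, m, axis=0)               # row blocks X_0 ... X_{m-1}
    yb = np.split(Y, m, axis=1)               # column blocks Y_0 ... Y_{m-1}
    k = m * m                                 # recovery threshold
    pts = np.arange(1, n_workers + 1, dtype=float)
    results, used = [], []
    for w, a in enumerate(pts):
        if drop and w in drop:
            continue                          # straggler never returns
        Xe = sum(xb[i] * a**i for i in range(m))          # X(a)
        Ye = sum(yb[j] * a**(m * j) for j in range(m))    # Y(a)
        results.append(Xe @ Ye)               # worker's task: X(a) @ Y(a)
        used.append(a)
        if len(results) == k:
            break
    # W(a) = X(a)Y(a) is a degree-(k-1) matrix polynomial whose coefficient at
    # power i + m*j is exactly block (i, j) of W; interpolate from k points.
    V = np.vander(np.array(used), k, increasing=True)
    coeffs = np.linalg.solve(V, np.stack([r.ravel() for r in results]))
    br, bc = X.shape[0] // m, Y.shape[1] // m
    W = np.zeros((X.shape[0], Y.shape[1]))
    for i in range(m):
        for j in range(m):
            W[i*br:(i+1)*br, j*bc:(j+1)*bc] = coeffs[i + m*j].reshape(br, bc)
    return W

X, Y = np.random.randn(4, 6), np.random.randn(6, 4)
W = polynomial_code_matmul(X, Y, n_workers=6, drop={1, 3})
print(np.allclose(W, X @ Y))   # True: two stragglers tolerated
```

Private schemes such as the one in the paper layer PIR-style query encoding on top of this algebraic structure so that workers also learn nothing about which library matrix is being multiplied.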
Procedia PDF Downloads 125
595 Governance Models of Higher Education Institutions
Authors: Zoran Barac, Maja Martinovic
Abstract:
Higher education institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From a societal point of view, they are central institutions involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty and administration, through the business community and corporate partners, to government agencies and the general public. HEIs are particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders, and they are a special type of institution from an organizational viewpoint: HEIs are often described as 'loosely coupled systems' or 'organized anarchies', which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking each particular context into account. Many governance models of HEIs are primarily based on the balance of power among the involved actors. Besides the actors' power and influence, leadership style and environmental contingency can shape the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models emerge as outcomes of institutional and contingency adaptation. HEIs tend to fit an institutional context comprised of formal and informal institutional rules; by fitting the institutional context, HEIs converge toward each other in terms of their structures, policies, and practices. The contingency framework, on the other hand, implies that there is no governance model suitable for all situations. Consequently, the contingency approach begins by identifying contingency variables that might impact a particular governance model; in order to be effective, the governance model should fit those contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables are the causes of divergence in actors' behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to its institutional context and contingency variables. It encompasses the roles, constellations, and modes of interaction of the involved actors, influenced by institutional and contingency pressures. Actors' adaptation to the institutional context brings the benefits of legitimacy and resources, while their adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.
Keywords: governance, governance models, higher education institutions, institutional context, situational context
Procedia PDF Downloads 337
594 Comparative Study of Static and Dynamic Representations of the Family Structure and Its Clinical Utility
Authors: Marietta Kékes Szabó
Abstract:
The patterns of personality (mal)function and the individual's psychosocial environment jointly influence health status and may lie in the background of psychosomatic disorders. Although patients with diverse symptoms often have no organic problems, the experienced complaints, the fear of serious illness, and the lack of social support frequently lead to increased anxiety and further enigmatic symptoms. The role of the family system and its atmosphere seems to be very important in this process. Several studies have explored the characteristics of dysfunctional family organization: an inflexible family structure, hidden conflicts that the family members do not speak about in their daily interactions, undefined role boundaries, neglect or overprotection of the children by the parents, and coalitions between generations. However, questionnaires used to measure the properties of the family system can examine it only as a unit and cannot capture dyadic interactions, while representing the family structure with a figure-placing test offers a new perspective for better understanding the organization of the (sub)system(s). Furthermore, its dynamic form opens new ways to explore the family members' joint representations, which allows us to learn more about the flexibility of cohesion and hierarchy of the given family system. In this way, communication among the family members can also be examined. The aim of my study was to gather extensive information about the organization of psychosomatic families. In our research we used Gehring's Family System Test (FAST) in both static and dynamic forms to mobilize the family members' mental representations of their family and to obtain data on their individual representations as well as their cooperation. Four families participated in our study, each including a young adult; two families had healthy participants and two included asthmatic patient(s). The family members' behavior observed during the dynamic situation was recorded on video for further data analysis with the Noldus Observer XT 8.0 software. In accordance with previous studies, our results show that the family structure of families with at least one psychosomatic patient is more rigid than that found in the control group, and the various (typical, ideal, and conflict) dynamic representations mainly reflected the most dominant family member's individual concept. The behavior analysis also confirmed the intensified role of the dominant person(s) in family life, thereby influencing family decisions, the place of the other family members, and the atmosphere of the interactions, which could also be captured well by the applied methods. However, further research is needed to learn more about this phenomenon, which may open the door to new therapeutic approaches.
Keywords: psychosomatic families, family structure, family system test (FAST), static and dynamic representations, behavior analysis
Procedia PDF Downloads 392
593 Date Palm Fruits from Oman Attenuates Cognitive and Behavioral Defects and Reduces Inflammation in a Transgenic Mice Model of Alzheimer's Disease
Authors: M. M. Essa, S. Subash, M. Akbar, S. Al-Adawi, A. Al-Asmi, G. J. Guillemein
Abstract:
Transgenic (tg) mice which contain an amyloid precursor protein (APP) gene mutation develop extracellular amyloid beta (Aβ) deposition in the brain and severe memory and behavioral deficits with age. These mice serve as an important animal model for testing the efficacy of novel drug candidates for the treatment and management of symptoms of Alzheimer's disease (AD). Several reports have suggested that oxidative stress is the underlying cause of Aβ neurotoxicity in AD. Date palm fruits contain very high levels of antioxidants and several medicinal properties that may be useful for improving the quality of life of AD patients. In this study, we investigated the effect of dietary supplementation with Omani date palm fruits on memory, anxiety, and learning skills, along with inflammation, in an AD mouse model containing the double Swedish APP mutation (APPsw/Tg2576). Experimental groups of APP-transgenic mice were fed custom-mix diets (pellets) containing 2% or 4% date palm fruits from the age of 4 months. We assessed spatial memory and learning ability, psychomotor coordination, and anxiety-related behavior in Tg and wild-type mice at the ages of 4-5 months and 18-19 months using the Morris water maze test, rotarod test, elevated plus maze test, and open field test. Inflammatory parameters were also analyzed. APPsw/Tg2576 mice fed a standard chow diet without dates showed significant memory deficits, increased anxiety-related behavior, and severe impairment in spatial learning ability, position discrimination learning ability, and motor coordination, along with increased inflammation, compared to wild-type mice on the same diet at the age of 18-19 months. In contrast, APPsw/Tg2576 mice fed a diet containing 2% or 4% dates showed significant improvements in memory, learning, locomotor function, and anxiety, with reduced inflammatory markers, compared to APPsw/Tg2576 mice fed the standard chow diet. Our results suggest that dietary supplementation with dates may slow the progression of cognitive and behavioral impairments in AD. The exact mechanism is still unclear, and further extensive research is needed.
Keywords: Alzheimer's disease, date palm fruits, Oman, cognitive decline, memory loss, anxiety, inflammation
Procedia PDF Downloads 424
592 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers
Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha
Abstract:
Elastomers are polymeric materials with varied backbone architectures, ranging from linear to dendrimeric structures, and a wide variety of monomeric repeat units. These elastomers behave as strongly viscous and only weakly elastic materials when they are not cross-linked; once cross-linked, and depending on the extent of cross-linking, their properties can range from highly flexible to highly stiff. Lightly cross-linked systems are well studied and reported. Understanding the nature of highly cross-linked rubbers in terms of chemical structure and architecture is critical for a variety of applications, and one of the critical parameters is cross-link density. In the current work, we have studied the highly cross-linked state of linear, lightly branched, and star-shaped branched elastomers and determined the cross-link density using different models. Changes in hardness, shift in Tg, changes in modulus, and swelling behavior were measured experimentally as a function of the extent of curing, and these properties were analyzed using various models to determine the cross-link density. We used hardness measurements to examine cure time, and the relationship between hardness and the extent of curing was determined. It is well known that micromechanical transitions like Tg and storage modulus are related to the extent of cross-linking. The Tg of the elastomer in different cross-linked states was determined by DMA, and the cross-link density was estimated from the plateau modulus using Nielsen's model. For lightly cross-linked systems, the cross-link density is usually estimated from the equilibrium swelling ratio in a solvent using the Flory–Rehner model. For highly cross-linked systems, however, the Flory–Rehner model is not valid because of the short chain lengths between cross-links. Therefore, models that treat the polymer as a non-Gaussian chain are used instead, such as (1) the Helmis–Heinrich–Straube (HHS) model, (2) the model of Gloria M. Gusler and Yoram Cohen, and (3) the model of Barbara D. Barr-Howell and Nikolaos A. Peppas. In this work, correction factors to the existing models were determined, and based upon them the structure-property relationship of highly cross-linked elastomers was studied.
Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer
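For reference, the swelling-based estimate mentioned above is commonly written in the affine-network form below; this is the standard textbook statement of the Flory–Rehner relation (together with the rubbery-plateau estimate underlying Nielsen-type analyses), stated here as an aid rather than quoted from the paper:

```latex
% Flory–Rehner estimate of cross-link density from equilibrium swelling
% (affine network model; standard textbook form):
%   \nu_e : effective cross-link density (moles of network chains per unit volume)
%   v_2   : polymer volume fraction at equilibrium swelling
%   V_1   : molar volume of the solvent
%   \chi  : Flory–Huggins polymer–solvent interaction parameter
\[
  \nu_e = -\,\frac{\ln(1 - v_2) + v_2 + \chi v_2^{2}}
                 {V_1 \left( v_2^{1/3} - v_2/2 \right)}
\]
% Complementary estimate from the rubbery plateau modulus G_N
% (kinetic theory of rubber elasticity):
\[
  \nu_e \approx \frac{G_N}{RT}
\]
```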
Procedia PDF Downloads 166
591 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes
Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini
Abstract:
Background: There is growing evidence that a cerebral vascular accident (CVA) can be a consequence of COVID-19 infection, so understanding novel treatment approaches is important for optimizing patient outcomes. Case: This case explores the use of virtual reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed an ischemic stroke involving the right globus pallidus, thalamus, and internal capsule. Conventional rehabilitation was started two weeks later, with VR included. The game-based VR technology used was developed for stroke patients and is based on upper-extremity exercises and functions. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency, and mild dynamic asymmetry of the lower lip was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed around upper-extremity physiotherapy exercises for post-stroke patients to increase active, voluntary movement of the upper-extremity joints and improve function. The conventional program was initiated with active exercises: shoulder sanding for joint ROM, shoulder walking, the shoulder wheel, and combined movements of the shoulder, elbow, and wrist joints; alternating flexion-extension and pronation-supination movements; and pegboard and Purdue Pegboard exercises. Fine-motor training included smart gloves, biofeedback, a finger ladder, and writing. The difficulty of the game increased at each stage of practice as the patient's performance progressed. Outcome: After 6 weeks of treatment, gait and speech were normal and upper-extremity strength had improved to near-normal status. No adverse effects were noted. Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases provides new approaches to improve therapeutic outcomes and prognosis, as well as to increase the satisfaction rate among patients.
Keywords: COVID-19, stroke, virtual reality, rehabilitation
Procedia PDF Downloads 144
590 The Determination of Pb and Zn Phytoremediation Potential and Effect of Interaction between Cadmium and Zinc on Metabolism of Buckwheat (Fagopyrum Esculentum)
Authors: Nurdan Olguncelik Kaplan, Aysen Akay
Abstract:
Nowadays, soil pollution has become a global problem. Externally added pollutants destroy and change the structure of the soil, the problems become more complex, and in this sense their correction becomes harder and more costly. Cadmium is highly mobile in the soil-plant system, so it can easily enter the human and animal food chain, which makes it particularly dangerous. Cadmium absorbed and stored by plants causes many metabolic changes, affecting protein synthesis, nitrogen and carbohydrate metabolism, enzyme (nitrate reductase) activation, and photosynthesis and chlorophyll synthesis. Cadmium has no known biological function in plants and is not an essential element. Plants generally take up cadmium in small amounts, and this element competes with zinc; cadmium also causes root damage. Buckwheat (Fagopyrum esculentum) is an important nutraceutical because of its high content of flavonoids, minerals, and vitamins, and its nutritionally balanced amino-acid composition. Buckwheat has relatively high biomass productivity, is adapted to many areas of the world, and can flourish in sterile fields; therefore, buckwheat plants are widely used in phytoremediation. The aim of this study was to evaluate the phytoremediation capacity of the high-yielding plant buckwheat in soils contaminated with Cd and Zn. The soils were treated with different doses of Cd (0, 12.5, 25, 50, and 100 mg Cd kg−1 soil in the form of 3CdSO4·8H2O) and Zn (0, 10, and 30 mg Zn kg−1 soil in the form of ZnSO4·7H2O) and incubated for about 60 days. Buckwheat seeds were then sown and the plants grown for three months under greenhouse conditions; the test plants were irrigated with pure water after planting. Buckwheat seeds (Gunes and Aktas genotypes) were obtained from the Bahri Dagdas International Agricultural Research Institute. After harvest, the Cd and Zn concentrations of plant biomass and grain, the yield, and the translocation factors (TFs) for Cd and Zn were determined. Cadmium accumulation in biomass and grain significantly increased in a dose-dependent manner. Long-term field trials are required to further investigate the potential of buckwheat to reclaim such soils, but this could be undertaken in conjunction with actual remediation schemes. However, the differences in element accumulation among the genotypes were affected more by the properties of the genotypes than by the soil properties; the Gunes genotype accumulated more lead than the Aktas genotype.
Keywords: buckwheat, cadmium, phytoremediation, zinc
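As a small aid to the indices named above, the sketch below computes a translocation factor under the common definition TF = shoot concentration / root concentration (with the related bioconcentration factor included for context); the definitions and the example values are assumptions for illustration, not data from the study.

```python
# Hypothetical helpers for phytoremediation indices, assuming the common
# definitions TF = shoot / root concentration and BCF = plant / soil
# concentration; the numbers below are illustrative only.

def translocation_factor(shoot_mg_kg: float, root_mg_kg: float) -> float:
    """TF > 1 suggests efficient root-to-shoot transport (phytoextraction)."""
    return shoot_mg_kg / root_mg_kg

def bioconcentration_factor(plant_mg_kg: float, soil_mg_kg: float) -> float:
    """BCF relates metal accumulated in the plant to metal present in soil."""
    return plant_mg_kg / soil_mg_kg

# Illustrative Cd numbers for a single harvested plant
print(translocation_factor(shoot_mg_kg=12.4, root_mg_kg=30.1))   # ~0.41
print(bioconcentration_factor(plant_mg_kg=42.5, soil_mg_kg=50))  # 0.85
```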
Procedia PDF Downloads 418
589 Exploration of Building Information Modelling Software to Develop Modular Coordination Design Tool for Architects
Authors: Muhammad Khairi bin Sulaiman
Abstract:
The utilization of Building Information Modelling (BIM) in the construction industry has given designers in the Architecture, Engineering and Construction (AEC) industry the opportunity to move from the conventional method of manual drafting to a way of working that creates alternative designs quickly and produces more accurate, reliable, and consistent outputs. With BIM software, designers can create digital content that manipulates data through BIM's parametric model, so more design alternatives can be generated quickly and design problems can be explored further, producing better designs faster than conventional design methods. Generally, however, BIM is used as a documentation mechanism, and its capabilities as a design tool have not been fully explored and utilised. Relative to this issue, Modular Coordination (MC) design is encouraged as a sustainable design practice, since MC design reduces material wastage through standard dimensioning, pre-fabrication, and repetitive modular construction and components. However, MC design involves a complex process of rules and dimensions, so a tool is needed to make this process easier. Since the parameters in BIM can easily be manipulated to follow MC rules and dimensioning, the integration of BIM software with MC design is proposed for architects during the design stage. Such a tool should improve the acceptance and effective application of MC design. Consequently, this study will analyse and explore the function and customization of BIM objects and the capability of BIM software to expedite the application of MC design during the design stage for architects. With this application, architects will be able to create building models and locate objects within reference modular grids that adhere to MC rules and dimensions. The parametric modeling capabilities of BIM will also act as a visual tool that further enhances the automation of the 3-dimensional space-planning modeling process. (Method) The study will first analyze and explore the parametric modeling capabilities of rule-based BIM objects and then customize a reference grid within MC rules and dimensioning. This approach should enhance the architect's overall design process and enable architects to automate complex modeling that was nearly impossible before. A prototype based on a residential quarter will be modeled, using a set of reference grids guided by specific MC rules and dimensions to develop a variety of space-planning configurations. In this way, the tool will expedite the design process and encourage the use of MC design in the construction industry.
Keywords: building information modeling, modular coordination, space planning, customization, BIM application, MC space planning
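To illustrate the kind of rule such a tool would enforce, here is a minimal sketch assuming the MC basic module 1M = 100 mm and a 3M (300 mm) horizontal planning module, as in common MC standards; the function names are illustrative and do not come from any particular BIM API.

```python
# A minimal sketch of an MC dimension check, assuming 1M = 100 mm and a
# 3M (300 mm) planning module; names are illustrative, not a BIM API.
BASIC_MODULE_MM = 100
PLANNING_MODULE_MM = 300

def snap_to_module(dimension_mm: float, module_mm: int = BASIC_MODULE_MM) -> int:
    """Round a free design dimension to the nearest modular increment."""
    return round(dimension_mm / module_mm) * module_mm

def is_modular(dimension_mm: float, module_mm: int = BASIC_MODULE_MM) -> bool:
    """Check whether a dimension already sits on the modular grid."""
    return dimension_mm % module_mm == 0

# e.g. a 3,650 mm room width snaps to 3,600 mm on the 3M planning grid
print(snap_to_module(3650, PLANNING_MODULE_MM))   # 3600
print(is_modular(3600))                           # True
```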
Procedia PDF Downloads 85
588 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images
Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi
Abstract:
Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, owing to people's busy lives, the consumption of fast food is increasing; therefore, the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist must know the stage of the tumor. The most common method of determining tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Clearly, determining all three of these parameters requires an imaging method, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays entails a cancer risk and a high absorbed dose for the patient, while the PET/CT method suffers from a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate tumor size and the extent of spread to the lymph nodes using MR images. More than 1,300 MR images were collected from the TCIA portal. In the first (pre-processing) step, histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, who have worked on colon cancer cases for more than 21 years, segmented the images and extracted the tumor regions. The next steps were feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network was used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers, with a softmax activation function, for classification. To validate the proposed method, a 10-fold cross-validation scheme was used in which the data were randomly divided into three parts: training (70% of the data), validation (10% of the data), and the remainder for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the test dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features for staging colon cancer patients are among this study's advantages.
Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis
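A minimal Keras sketch of the architecture described above follows: a VGG-16 convolutional base (13 convolution layers) topped with three fully connected layers ending in a three-class softmax for T0N0 / T3N1 / T3N2. The input size, dense-layer widths, and compile settings are assumptions for illustration, not details reported in the abstract.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# VGG-16 convolutional base: 13 convolution layers for feature extraction.
# Input shape is an assumption (resized MR slices as 3-channel images).
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),   # three FC layers; widths assumed
    layers.Dense(4096, activation="relu"),
    layers.Dense(3, activation="softmax"),   # T0N0 / T3N1 / T3N2
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```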
Procedia PDF Downloads 62
587 An Investigation into the Use of an Atomistic, Hermeneutic, Holistic Approach in Education Relating to the Architectural Design Process
Authors: N. Pritchard
Abstract:
Within architectural education, students arrive fore-armed with their life experience, knowledge gained from subject-based learning, and their brains, more specifically their imaginations. The learning-by-doing that they embark on in studio-based/project-based learning calls for supervision that allows the student to proactively undertake research and experimentation with design solution possibilities. The degree to which this supervision includes direction is subject to debate and differing opinion. It can be argued that if the student is to learn by doing, then design decision-making within the design process needs to be instigated and owned by the student, so that they have the ability to personally reflect on and evaluate those decisions. Within this premise lies the problem that the student's endeavours can become unstructured and unfocused as they work their way into a new and complex activity. A resultant weakness can be that the design activity is compartmentalised rather than holistic or comprehensive, and the student's reflections are consequently impoverished as a positive, informative feedback loop. The construct proffered in this paper is that a supportive 'armature' or 'Heuristic-Framework' can be developed that facilitates a holistic approach and reflective learning. The normal explorations of architectural design comprise analysing the site and context, reviewing building precedents, and assimilating the briefing information. However, the student can still be compromised by 'not knowing what they need to know'. The long-serving triad 'Firmness, Commodity and Delight' provides a broad-brush framework of considerations to explore and integrate into good design. If this were further atomised into subdivisions formed from the disparate aspects of architectural design that need to be considered within the design process, then the student could sieve through the facts more methodically and reflectively, considering their interrelationships, conflicts, and alliances. The words FACTS and SIEVE hold the acronym of the aspects that form the Heuristic-Framework: Function, Aesthetics, Context, Tectonics, Spatial, Servicing, Infrastructure, Environmental, Value, and Ecological issues. The Heuristic could be used as a Hermeneutic Model, with each aspect of design being focused on and considered in abstraction and then considered in relation to the other aspects and to the design proposal as a whole. Importantly, the heuristic could be used as a method for gathering information and enhancing the design brief. The more poetic, mysterious, intuitive, unconscious processes should still be able to occur for the student. The Heuristic-Framework should not be seen as comprehensive, prescriptive, or formulaic, nor as inhibiting the wide exploration of possibilities and solutions within the architectural design process.
Keywords: atomistic, hermeneutic, holistic approach, architectural design studio education
Procedia PDF Downloads 260
586 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing
Authors: Tolulope Aremu
Abstract:
This paper is based on the idea of using deep learning to optimize production yield by tuning a few key process parameters in a manufacturing environment. The study focuses explicitly on maximizing production yield and minimizing operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), implemented using the Python-based frameworks TensorFlow and Keras. The target of the research is precision molding processes in which the temperature ranges between 150°C and 220°C, the pressure between 5 and 15 bar, and the material flow rate between 10 and 50 kg/h, critical parameters that strongly affect yield. A dataset of 1 million production cycles spanning five continuous years was considered, with detailed logs of the exact parameter settings and yield output. The LSTM model captures time-dependent trends in the production data, while the CNN analyzes spatial correlations between parameters. The models are trained in a supervised manner with an MSE loss function, optimized with the Adam optimizer. After a total of 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Compared with traditional RSM and DOE methods, the results indicated a 12% increase in production yield; in addition, the error margin was reduced by 8%, so the deep learning models yield consistently high-quality products. The monetary value was around $2.5 million annually in costs saved on material waste, energy consumption, and equipment wear resulting from the implementation of optimized process parameters. The system was deployed in an industrial production environment with the help of a hybrid cloud setup: Microsoft Azure for data storage, with model training and deployment performed on Google Cloud AI. Real-time process monitoring and automatic parameter tuning depend on this cloud infrastructure. To put it into perspective, deep learning models, especially those employing LSTMs and CNNs, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving
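As a concrete illustration of the LSTM branch described above, here is a minimal TensorFlow/Keras sketch that maps short windows of the three process parameters to a yield prediction. The window length, layer sizes, and dummy data are assumptions; the MSE loss and Adam optimizer follow the abstract.

```python
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEATURES = 24, 3   # assumed window of 24 cycles; temp, pressure, flow rate

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.LSTM(64, return_sequences=True),   # captures time-dependent trends
    layers.LSTM(32),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                          # predicted yield for the window
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # as in the abstract

# Dummy data standing in for scaled logs from the 1M-cycle production dataset
X = np.random.rand(1024, WINDOW, FEATURES).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.1, verbose=0)
```

In a deployment like the one described, the trained model would be queried over candidate parameter settings to recommend configurations with the highest predicted yield.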
Procedia PDF Downloads 35