Search results for: matrix method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20097


19407 Computational Fluid Dynamics Based Analysis of Heat Exchanging Performance of Rotary Thermal Wheels

Authors: H. M. D. Prabhashana Herath, M. D. Anuradha Wickramasinghe, A. M. C. Kalpani Polgolla, R. A. C. Prasad Ranasinghe, M. Anusha Wijewardane

Abstract:

The demand for thermal comfort in buildings in hot and humid climates increases progressively. In general, buildings in hot and humid climates spend more than 60% of the total energy cost on the operation of the air conditioning (AC) system. Hence, it is necessary to install energy-efficient AC systems or to integrate energy recovery systems into new and/or existing AC systems whenever possible, to reduce the energy consumed by the AC system. Integrating a rotary thermal wheel as the energy recovery device of an existing AC system has proven very promising, with attractive payback periods of less than 5 years. A rotary thermal wheel can be located in the Air Handling Unit (AHU) of a central AC system to recover the energy available in the return air stream. During this study, a sensitivity analysis was performed using CFD (Computational Fluid Dynamics) software to determine the optimum design parameters (i.e., rotary speed and parameters of the matrix profile) of a rotary thermal wheel for hot and humid climates. The simulations were performed for a sinusoidal matrix geometry. Variations of the sinusoidal matrix parameters, i.e., span length and height, were also analyzed to understand the heat exchanging performance and the pressure drop induced by the air flow. The results show that the heat exchanging performance increases with increasing wheel rpm; however, the rate of performance increment decreases as the rpm rises. As a result, it is more advisable to operate the wheel at 10-20 rpm. For the geometry, it was found that sinusoidal geometries with shorter spans and greater heights have higher heat exchanging capabilities. Considering the sinusoidal profiles analyzed during the study, the geometry with 4 mm height and 3 mm width shows better performance than the other combinations.

Keywords: air conditioning, computational fluid dynamics, CFD, energy recovery, heat exchangers

Procedia PDF Downloads 118
19406 Poly(L-Lactic Acid) Scaffolds for Bone Tissue Engineering

Authors: Aleksandra Bužarovska, Gordana Bogoeva Gaceva

Abstract:

Biodegradable polymers have received significant scientific attention in tissue engineering (TE) applications, in particular their composites containing inorganic nanoparticles. In the last 15 years, they have been the subject of intensive research by many groups aiming to develop polymer scaffolds with defined biodegradability, porosity, and adequate mechanical stability. The most important characteristic making these materials attractive for TE is their biodegradability, a process that can be time-controlled and long enough to enable generation of new tissue as a replacement for the degrading polymer scaffold. In this work, poly(L-lactic acid) scaffolds, filled with TiO2 nanoparticles functionalized with oleic acid, have been prepared by the thermally induced phase separation (TIPS) method. The functionalization of TiO2 nanoparticles with oleic acid was performed in order to improve the nanoparticles' dispersibility within the polymer matrix and at the same time to inhibit the cytotoxicity of the nanofiller. Oleic acid was chosen, as an amphiphilic molecule belonging to the fatty acid family, because of its non-toxicity and its ability to mediate between the hydrophilic TiO2 nanoparticles and the hydrophobic PLA matrix. The produced scaffolds were characterized with thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), and mechanical compression measurements. The bioactivity for bone tissue engineering applications was tested in supersaturated simulated body fluid. The degradation process was followed by Fourier transform infrared spectroscopy (FTIR). The results showed anisotropic morphology with elongated open pores (100 µm), high porosity (around 92%), and a perfectly dispersed nanofiller. Compression moduli of up to 10 MPa were identified, independent of the nanofiller content. Functionalized TiO2 nanoparticles induced formation of hydroxyapatite clusters as much as unfunctionalized TiO2.
The prepared scaffolds showed properties ideal for scaffold vascularization, cell attachment, growth and proliferation.

Keywords: biodegradation, bone tissue engineering, mineralization, PLA scaffolds

Procedia PDF Downloads 257
19405 Fuzzy Multi-Criteria Decision-Making Framework for Risk Management in Construction Supply Chain

Authors: Abdullah Ali Salamai

Abstract:

Risk management in the construction supply chain (CSC) is vital for controlling construction project risks. The CSC faces various risks affecting product quality and the project timeline, such as operational, social, financial, technical, design, and safety risks. These risks should be mitigated during project construction. This paper therefore considered a set of technologies for overcoming risks in the CSC, namely artificial intelligence (AI), blockchain, data analytics, and the IoT, in order to select the best one. A multi-criteria decision-making (MCDM) methodology is used to deal with the various risks. The Multi-Attribute Utility Theory (MAUT) method is used to rank the technologies. The weights of the risks are obtained by the average method using the decision matrix. The MCDM methodology is integrated with fuzzy sets to handle uncertain data. Experts used triangular fuzzy numbers to express their opinions instead of exact numbers, which allows the model to cope with inconsistent and vague data. The MCDM methodology was applied to 18 risks and 5 technologies. The results show that social risks have the highest weight. AI is the best technology for overcoming risks in the CSC: AI can be integrated with the CSC from raw data to final products delivered to the user.
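The pipeline described above can be illustrated with a minimal sketch: average the experts' triangular fuzzy ratings per risk, defuzzify them into crisp weights, and rank technologies by a simple additive (MAUT-style) utility. The risk names, expert ratings, and technology scores below are hypothetical illustrations, not the paper's data.

```python
# Sketch of fuzzy weighting + MAUT ranking; all numbers are made up.

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Two hypothetical experts rate each risk's importance as a TFN.
expert_ratings = {
    "social":    [(7, 9, 10), (6, 8, 9)],
    "financial": [(4, 6, 8),  (5, 6, 7)],
    "technical": [(2, 4, 6),  (3, 4, 5)],
}

# Average the TFNs component-wise, then defuzzify to a crisp weight.
crisp = {}
for risk, tfns in expert_ratings.items():
    avg = tuple(sum(c) / len(tfns) for c in zip(*tfns))
    crisp[risk] = defuzzify(avg)

total = sum(crisp.values())
weights = {r: w / total for r, w in crisp.items()}  # normalise to sum to 1

# Hypothetical utility scores of each technology against each risk (0..1).
tech_scores = {
    "AI":         {"social": 0.9, "financial": 0.8, "technical": 0.85},
    "blockchain": {"social": 0.6, "financial": 0.9, "technical": 0.7},
    "IoT":        {"social": 0.5, "financial": 0.6, "technical": 0.9},
}

utility = {t: sum(weights[r] * s[r] for r in weights) for t, s in tech_scores.items()}
ranking = sorted(utility, key=utility.get, reverse=True)
print(ranking)
```

With these illustrative numbers, the social risk carries the largest weight and the additive utility orders the technologies accordingly.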

Keywords: risk management, construction supply chain, fuzzy sets, multi-criteria decision making, supply chain management, artificial intelligence, blockchain

Procedia PDF Downloads 12
19404 Fe Modified Tin Oxide Thin Film Based Matrix for Reagentless Uric Acid Biosensing

Authors: Kashima Arora, Monika Tomar, Vinay Gupta

Abstract:

Biosensors have found potential applications ranging from environmental testing and biowarfare agent detection to clinical testing, health care, and cell analysis. This is driven in part by the desire to decrease the cost of health care and to obtain precise information more quickly about the health status of the patient through the development of various biosensors, which have become increasingly prevalent in clinical testing and point-of-care testing for a wide range of biological elements. Uric acid is an important byproduct in the human body, and a number of pathological disorders are related to its high concentration. In the past few years, rapid growth in the development of new materials and improvements in sensing techniques have led to the evolution of advanced biosensors. In this context, metal oxide thin film based matrices, due to their biocompatible nature, strong adsorption ability, high isoelectric point (IEP), and abundance in nature, have become the materials of choice for recent technological advances in biotechnology. In the past few years, wide band-gap metal oxide semiconductors including ZnO, SnO₂, and CeO₂ have gained much attention as matrices for immobilization of various biomolecules. Tin oxide (SnO₂), a wide band gap semiconductor (Eg = 3.87 eV), despite having multifunctional properties for a broad range of applications including transparent electronics, gas sensors, acoustic devices, UV photodetectors, etc., has not been explored much for biosensing purposes. To realize a high performance miniaturized biomolecular electronic device, rf sputtering is considered to be the most promising technique for the reproducible growth of good quality thin films, controlled surface morphology, and the desired film crystallization with improved electron transfer properties.
Recently, iron oxide and its composites have been widely used as matrices for biosensing applications, exploiting the electron communication feature of Fe, for the detection of various analytes such as urea, hemoglobin, glucose, phenol, L-lactate, H₂O₂, etc. However, to the authors' knowledge, no work has been reported on modifying the electronic properties of SnO₂ by implanting it with a suitable metal (Fe) to induce a redox couple in it and utilizing it for reagentless detection of uric acid. In the present study, an Fe-implanted SnO₂ based matrix has been utilized for a reagentless uric acid biosensor. Implantation of Fe into the SnO₂ matrix is confirmed by energy-dispersive X-ray spectroscopy (EDX) analysis. Electrochemical techniques have been used to study the response characteristics of the Fe-modified SnO₂ matrix before and after uricase immobilization. The developed uric acid biosensor exhibits a high sensitivity of about 0.21 mA/mM and a linear variation in current response over the concentration range from 0.05 to 1.0 mM of uric acid, besides a high shelf life (~20 weeks). The Michaelis-Menten kinetic parameter (Km) is found to be relatively very low (0.23 mM), which indicates high affinity of the fabricated bioelectrode towards uric acid (the analyte). Also, the presence of other interferents found in human serum has a negligible effect on the performance of the biosensor. Hence, the obtained results highlight the importance of the implanted Fe:SnO₂ thin film as an attractive matrix for the realization of reagentless biosensors for uric acid.
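The reported enzyme kinetics can be illustrated with the standard Michaelis-Menten relation I = I_max·C / (Km + C). In this sketch, Km = 0.23 mM is the value reported in the abstract, while I_MAX is an assumed saturation current chosen only for illustration.

```python
# Michaelis-Menten response curve for the bioelectrode.
# Km is taken from the abstract; I_MAX is a hypothetical value.

KM = 0.23       # mM, apparent Michaelis-Menten constant (from the abstract)
I_MAX = 0.26    # mA, assumed saturation current (illustrative only)

def response_current(conc_mM):
    """Steady-state current for a given uric acid concentration (mM)."""
    return I_MAX * conc_mM / (KM + conc_mM)

# Response rises steeply at low concentration...
low = [response_current(c) for c in (0.05, 0.1, 0.2)]
# ...and approaches (but never exceeds) I_MAX at high concentration.
high = response_current(10.0)
print(low, high)
```

A low Km means the curve saturates at low analyte concentration, which is why it is read as high affinity of the electrode for uric acid.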

Keywords: Fe implanted tin oxide, reagentless uric acid biosensor, rf sputtering, thin film

Procedia PDF Downloads 171
19403 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, it will be shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function. This function is equivalent to the difference between the numbers of occupation and evacuation epochs up to that time, and also to the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point on the real time line, so they form a binary signal that represents a complete mechanical description of physical reality. The times of these exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated using the fact that the universe's events actually extend along the positive and negative real time line in a single direction of extension as the number of exchanges increases. There therefore exists a noninvertible transformation matrix, defined as the matrix product of an invertible rotation matrix and a noninvertible scaling matrix, which changes the direction and the magnitude of the exchange event vector, respectively. These noninvertible transformations will be called actual transformations, in contrast to information transformations, by which we can navigate the universe's events (transformed by actual transformations) backward and forward along the real time line. These information transformations will be derived as elements of a group that can be associated with their corresponding actual transformations.
The actual and information model of the universe will be derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy; after that time, the universe begins expanding in spacetime. This assumption makes superfluous the existence of Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all future and past events of the universe. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy as present events of the universe; from these we can predict its past and future events approximately, with high precision.

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 168
19402 Calycosin Ameliorates Osteoarthritis by Regulating the Imbalance Between Chondrocyte Synthesis and Catabolism

Authors: Hong Su, Qiuju Yan, Wei Du, En Hu, Zhaoyu Yang, Wei Zhang, Yusheng Li, Tao Tang, Wang Yang, Shushan Zhao

Abstract:

Osteoarthritis (OA) is a severe chronic inflammatory disease. Calycosin, the main active component of Astragalus mongholicus Bunge, a classic traditional ethnic herb, exhibits anti-inflammatory action, but its exact targets and mechanism in OA have yet to be determined. In this study, we established an anterior cruciate ligament transection (ACLT) mouse model. Mice were randomized to sham, OA, and calycosin groups. The cartilage synthesis markers type II collagen (Col-2) and SRY-Box Transcription Factor 9 (Sox-9) increased significantly after calycosin gavage, while expression of the cartilage matrix degradation markers cyclooxygenase-2 (COX-2), phospho-epidermal growth factor receptor (p-EGFR), and matrix metalloproteinase-9 (MMP9) decreased. With the help of network pharmacology and molecular docking, these results were confirmed in chondrocyte ATDC5 cells. Our results indicate that calycosin treatment significantly improved cartilage damage, probably by reversing the imbalance between chondrocyte synthesis and catabolism.

Keywords: calycosin, osteoarthritis, network pharmacology, molecular docking, inflammatory, cyclooxygenase 2

Procedia PDF Downloads 89
19401 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with the help of different methods has a long history in transport sciences, but until recently, there were insufficient data for evaluating road traffic flow patterns on the scale of an entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown that is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone, and it has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely fixed coordinate pairs.
From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
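The free-flow vs. congested comparison can be sketched directly from a Distance Matrix response: the JSON layout (rows/elements/duration/duration_in_traffic) follows the public API's documented format, but the numeric values below are made-up samples, and a real query would require an HTTP request with a `departure_time` parameter and an API key.

```python
# Deriving a congestion indicator from a Distance Matrix-style response.
# Field names follow the public API; the numbers are sample values only.

sample_response = {
    "rows": [{
        "elements": [{
            "status": "OK",
            "duration": {"value": 600},             # free-flow travel time, s
            "duration_in_traffic": {"value": 900},  # estimated time in traffic, s
        }]
    }]
}

def congestion_delay(response):
    """Return (delay_seconds, delay_ratio) for the first route element."""
    elem = response["rows"][0]["elements"][0]
    if elem["status"] != "OK":
        raise ValueError("no route data: " + elem["status"])
    free = elem["duration"]["value"]
    congested = elem["duration_in_traffic"]["value"]
    return congested - free, congested / free

delay, ratio = congestion_delay(sample_response)
print(delay, ratio)
```

Repeating such queries over many coordinate pairs and times of day yields the per-road delay ratios from which daily congestion patterns and hot spots can be mapped; a ratio well above 1.0 flags a hot spot.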

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 184
19400 A Hybrid Method for Determination of Effective Poles Using Clustering Dominant Pole Algorithm

Authors: Anuj Abraham, N. Pappa, Daniel Honc, Rahul Sharma

Abstract:

In this paper, an analysis of some model order reduction techniques is presented. A new hybrid algorithm for model order reduction of linear time-invariant systems is compared with the conventional techniques, namely Balanced Truncation, Hankel Norm reduction, and the Dominant Pole Algorithm (DPA). The proposed hybrid algorithm, known as the Clustering Dominant Pole Algorithm (CDPA), is able to compute the full set of dominant poles and their cluster centers efficiently. The dominant poles of a transfer function are specific eigenvalues of the state space matrix of the corresponding dynamical system. The effectiveness of this novel technique is shown through simulation results.
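The eigenvalue relationship stated above can be shown in a minimal sketch (this is not the authors' CDPA): the system's poles are the eigenvalues of the state matrix A, and poles closest to the imaginary axis decay slowest, which makes them the usual dominance candidates. The state matrix below is a made-up stable example, and the "cluster center" is just the mean of the selected group.

```python
# Poles as eigenvalues of a state matrix, ranked by a simple dominance
# heuristic. Illustrative only; not the paper's CDPA.

import numpy as np

A = np.array([
    [-1.0,  0.0,   0.0],
    [ 0.0, -5.0,   1.0],
    [ 0.0,  0.0, -50.0],
])

poles = np.linalg.eigvals(A)

# Rank poles by |Re(lambda)|: small magnitude -> slow mode -> dominant.
order = np.argsort(np.abs(poles.real))
dominant = poles[order][:2]          # keep the two most dominant poles
cluster_center = dominant.mean()     # crude "cluster center" of that group
print(sorted(dominant.real), cluster_center)
```

A practical dominance measure also weighs each pole's residue; the heuristic here keeps the sketch short while preserving the eigenvalue-to-pole connection the abstract describes.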

Keywords: balanced truncation, clustering, dominant pole, Hankel norm, model reduction

Procedia PDF Downloads 590
19399 Mode II Fracture Toughness of Hybrid Fiber Reinforced Concrete

Authors: H. S. S. Abou El-Mal, A. S. Sherbini, H. E. M. Sallam

Abstract:

Mode II fracture toughness (KIIc) of fiber reinforced concrete has been widely investigated under various patterns of testing geometries. The effects of fiber type, concrete matrix properties, and testing mechanisms have been extensively studied. The area of hybrid fiber addition shows a lack of reported research data. In this paper, an experimental investigation of hybrid fiber embedded in a high strength concrete matrix is reported. Three different types of fibers, namely steel (S), glass (G), and polypropylene (PP) fibers, were mixed together in four hybridization patterns, (S/G), (S/PP), (G/PP), (S/G/PP), with a constant cumulative volume fraction (Vf) of 1.5%. The concrete matrix properties were kept the same for all hybrid fiber reinforced concrete patterns. In an attempt to estimate a fairly accepted value of fracture toughness KIIc, four testing geometries and loading types are employed in this investigation. Four point shear, Brazilian notched disc, double notched cube, and double edge notched specimens are investigated in an effort to avoid the limitations and sensitivity of each test regarding geometry, size effect, constraint condition, and the crack length to specimen width ratio a/w. The addition of all hybridization patterns of fiber reduced the compressive strength and increased mode II fracture toughness in pure mode II tests. Mode II fracture toughness of concrete KIIc decreased with increasing a/w ratio for all concretes and test geometries. Mode II fracture toughness KIIc is found to be sensitive to the hybridization pattern of the fiber. The (S/PP) hybridization pattern showed higher values than all other patterns, while the (S/G/PP) pattern showed insignificant enhancement of mode II fracture toughness (KIIc). The four point shear (4PS) test setup yields the most reliable values of mode II fracture toughness KIIc of concrete. Mode II fracture toughness KIIc of concrete cannot be assumed to be a true material property.

Keywords: fiber reinforced concrete, hybrid fiber, mode II fracture toughness, testing geometry

Procedia PDF Downloads 319
19398 Exploring the Connectedness of Ad Hoc Mesh Networks in Rural Areas

Authors: Ibrahim Obeidat

Abstract:

Reaching a fully-connected network of mobile nodes in rural areas has received great attention among network researchers. This attention arose from the complexity and high costs of setting up the needed infrastructure for these networks, in addition to the low transmission range these nodes have. Terranet technology, as an example, employs an ad-hoc mesh network in which each node has a transmission range not exceeding one kilometer. This means that any two nodes can communicate with each other if they are at most one kilometer apart; otherwise, a third party plays the role of the "relay". In Terranet, and as an idea to reduce network setup cost, every node in the network is considered a router responsible for forwarding data between other nodes, which results in a decentralized collaborative environment. Most research on Terranet addresses how to encourage mobile nodes to be more cooperative by keeping their devices in the "ON" state as long as possible while accepting the role of relay (router). This research addresses the question of what percentage of nodes in an ad-hoc mesh network within rural areas should play the role of relay at every time slot, in relation to the actual area coverage of the nodes, in order for the network to reach full connectivity. To the best of our knowledge, no prior research has discussed this issue. The research is carried out with an implementation that builds an adjacency matrix as an indicator of the connectivity between network members. This matrix is continually updated until each value in it gives the number of hops to be followed to reach from one node to another.
After repeating the algorithm for different area sizes, different coverage percentages for each size, and different relay percentages several times, the extracted results show that for area coverage of less than 5% we need 40% of the nodes to be relays, while 10% is enough for areas with node coverage greater than 5%.
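The adjacency-matrix procedure described above can be sketched as follows: build an adjacency matrix from node positions (nodes within 1 km can communicate directly), compute minimum hop counts between all pairs, and declare the network fully connected when every pair has a finite hop count. The node positions are made-up illustrative values.

```python
# Connectivity check via adjacency matrix and per-pair hop counts.
# Positions are hypothetical; range follows the 1 km figure in the text.

import math
from collections import deque

RANGE_KM = 1.0
positions = [(0.0, 0.0), (0.8, 0.0), (1.6, 0.0), (1.6, 0.9)]  # km coordinates
n = len(positions)

# Adjacency matrix: 1 if two distinct nodes are within transmission range.
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j and math.dist(positions[i], positions[j]) <= RANGE_KM:
            adj[i][j] = 1

def hop_counts(adj, src):
    """BFS over the adjacency matrix; returns minimum hops from src."""
    hops = [None] * len(adj)
    hops[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v, linked in enumerate(adj[u]):
            if linked and hops[v] is None:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

matrix = [hop_counts(adj, s) for s in range(n)]
fully_connected = all(h is not None for row in matrix for h in row)
print(matrix, fully_connected)
```

In the example, the four nodes form a chain (each link under 1 km), so every pair is reachable via at most three relayed hops; a `None` entry in the matrix would indicate a partitioned network.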

Keywords: ad-hoc mesh networks, network connectivity, mobile ad-hoc networks, Terranet, adjacency matrix, simulator, wireless sensor networks, peer to peer networks, vehicular Ad hoc networks, relay

Procedia PDF Downloads 266
19397 Novel Recommender Systems Using Hybrid CF and Social Network Information

Authors: Kyoung-Jae Kim

Abstract:

Collaborative Filtering (CF) is a popular technique for personalization in the E-commerce domain to reduce information overload. In general, CF recommends an item list based on the preferences of other similar users, derived from the user-item matrix, and predicts the focal user's preference for particular items from them. Many real-world recommender systems use CF techniques because of their excellent accuracy and robustness. However, CF has some limitations, including sparsity problems and the high dimensionality of the user-item matrix. In addition, traditional CF does not consider the emotional interaction between users. In this study, we propose recommender systems using social network information and singular value decomposition (SVD) to alleviate some of these limitations. The purpose of this study is to reduce the dimensionality of the data set using SVD and to improve the performance of CF by using emotional information from the focal user's social network data. We test the usability of the hybrid CF, SVD, and social network information model using real-world data. The experimental results show that the proposed model outperforms conventional CF models.
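The SVD step of such a model can be sketched on a toy user-item matrix: factor the ratings, keep the top-k singular values, and read the reconstructed (smoothed) score for an unseen user-item pair. The ratings below are invented, and the social-network weighting of the proposed model is not reproduced here.

```python
# Truncated-SVD smoothing of a toy user-item rating matrix.
# Data are made up; this shows only the dimensionality-reduction step.

import numpy as np

# Rows = users, columns = items; 0 marks "not rated".
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 1.0],
    [1.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

U, s, Vt = np.linalg.svd(R, full_matrices=False)

k = 2                                   # reduced dimensionality
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Smoothed score for user 0 on item 2 (unrated above): because user 0's
# row resembles user 1's, the reconstruction borrows user 1's low rating.
prediction = R_hat[0, 2]
print(round(float(prediction), 2))
```

Keeping only the top singular values filters rating noise and fills unrated cells from the dominant user/item factors, which is the basic mechanism CF-with-SVD models rely on.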

Keywords: recommender systems, collaborative filtering, social network information, singular value decomposition

Procedia PDF Downloads 277
19396 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer

Authors: A. J. Cobley, L. Krishnan

Abstract:

The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are being widely utilized. The fibre reinforcing material is mainly responsible for the strength and stiffness of the composites, whilst the main role of the epoxy polymer matrix is to enhance the load distribution applied to the fibres as well as to protect the fibres from the effects of harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured will have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite will usually begin with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture, which, if it becomes entrapped, will lead to voids in the subsequent cured polymer. Therefore, degassing is normally utilised after mixing, and this is often achieved by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is another process stage, and if a method of mixing could be found that, at the same time, degassed the resin mixture, this would lead to shorter production times, more effective degassing, and fewer voids in the final polymer. In this study the effects of four different methods for mixing and degassing the pre-polymer with hardener and accelerator were investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe.
The cured cast resin samples were examined under scanning electron microscope (SEM), optical microscope, and Image J analysis software to study morphological changes, void content and void distribution. Three point bending test and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest percentage voids of all the mixing methods in the study. In addition, the percentage voids found when employing a 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than when magnetic stirrer mixing followed by vacuum degassing was utilized. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.

Keywords: degassing, low frequency ultrasound, polymer composites, voids

Procedia PDF Downloads 292
19395 Ascidian Styela rustica Proteins’ Structural Domains Predicted to Participate in the Tunic Formation

Authors: M. I. Tyletc, O. I. Podgornya, T. G. Shaposhnikova, S. V. Shabelnikov, A. G. Mittenberg, M. A. Daugavet

Abstract:

Ascidiacea is the most numerous class of the subphylum Tunicata. The distinctive anatomical feature of these chordates is a tunic consisting of cellulose fibrils, protein molecules, and single cells. The mechanisms of tunic formation are not known in detail; tunic formation could be used as a model system for studying the interaction of cells with the extracellular matrix. Our model species is the ascidian Styela rustica, which is prevalent in benthic communities of the White Sea. As previously shown, tunic formation involves morula blood cells, which contain the major 48 kDa protein p48. The participation of p48 in tunic formation was proved using antibodies against the protein. The nature of the protein and its function remain unknown. The current research aims to determine the amino acid sequence of p48 and to clarify its role in tunic formation. The peptides that make up the p48 amino acid sequence were determined by mass spectrometry. A search for the peptides in protein sequence databases identified sequences homologous to p48 in Styela clava, Styela plicata, and Styela canopus. Based on sequence alignment, their level of similarity was determined to be 81-87%. The corresponding sequence of the ascidian Styela canopus was used for further analysis. The Styela rustica p48 sequence begins with a signal peptide, which could indicate that the protein is secretory. This is consistent with experimentally obtained data: the contents of morula cells are secreted into the tunic matrix. The isoelectric point of p48 is 9.77, which is consistent with the experimental results of acid electrophoresis of morula cell proteins. However, the molecular weight of the corresponding Styela canopus sequence is 103 kDa, so the p48 of Styela rustica is a shorter homolog. The search for conserved functional domains revealed the presence of two Ca-binding EGF-like domains, as well as thrombospondin (TSP1) and tyrosinase domains.
The p48 peptides determined by mass spectrometry fall into the region of the sequence corresponding to the last two domains and have amino acid substitutions compared to the Styela canopus homolog. The tyrosinase domain (pfam00264) is known to be part of the phenoloxidase enzyme, which participates in melanization processes and the immune response. The thrombospondin domain (smart00209) interacts with a wide range of proteins and is involved in several biological processes, including coagulation, cell adhesion, modulation of intercellular and cell-matrix interactions, angiogenesis, wound healing, and tissue remodeling. It can be assumed that the tyrosinase domain in p48 plays the role of the phenoloxidase enzyme, while TSP1 provides a link between the extracellular matrix and cell surface receptors and may also be responsible for repair of the tunic. The results obtained are consistent with the experimental data on p48. The domain organization of the protein suggests that p48 is an enzyme involved in tunic tanning and is an important regulator of the organization of the extracellular matrix.

Keywords: ascidian, p48, thrombospondin, tyrosinase, tunic, tanning

Procedia PDF Downloads 100
19394 Student Loan Debt among Students with Disabilities

Authors: Kaycee Bills

Abstract:

This study examined whether students with disabilities have higher student loan debt payments than other student populations. The hypothesis was that students with disabilities would have significantly higher student loan debt payments than other students due to the length of time they spend in school. Quantitative methods were employed using the Baccalaureate and Beyond Study Wave 2015/017 dataset. The data analysis methods included linear regression and a correlation matrix. Due to the exploratory nature of the study, the significance level for the overall model and for each variable was set at .05. The correlation matrix demonstrated that students with certain types of disabilities are more likely to fall into higher student loan payment brackets than students without disabilities. These results also varied among the different types of disabilities. The overall linear regression model was statistically significant (p = .04). Despite the overall model being statistically significant, the majority of the coefficients for the different types of disabilities were not statistically significant. However, several other variables had statistically significant results, such as veterans, people of minority races, and people who attended private schools. Implications for how this impacts the economy, capitalism, and the financial wellbeing of various students are discussed.
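The two analysis steps named above (a correlation matrix plus an ordinary least squares regression of payment bracket on predictors) can be sketched on a hypothetical mini-dataset; the values below are invented for illustration, not drawn from the restricted survey data.

```python
# Correlation matrix + OLS fit on made-up data mirroring the study design.

import numpy as np

# Columns: disability (0/1), veteran (0/1), monthly loan payment bracket.
data = np.array([
    [1, 0, 4],
    [1, 1, 5],
    [0, 0, 2],
    [0, 1, 3],
    [1, 0, 4],
    [0, 0, 1],
], dtype=float)

corr = np.corrcoef(data, rowvar=False)   # 3x3 correlation matrix

# OLS: payment ~ intercept + disability + veteran.
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])
y = data[:, 2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(corr.shape, coef)
```

In this toy example the disability coefficient is positive, i.e., being in the disability group is associated with a higher payment bracket, matching the direction the abstract reports; a real analysis would of course also compute standard errors and p-values.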

Keywords: disability, student loan debt, higher education, social work

Procedia PDF Downloads 157
19393 Mueller Matrix Polarimetry for Analysis of Scattering Biological Fluid Media

Authors: S. Cherif, A. Medjahed, M. Bouafia, A. Manallah

Abstract:

A light wave is characterized by four properties: its amplitude, its frequency, its phase, and the polarization direction of its electric field vector. This work is concerned with the last of these. Polarization was introduced to describe the vectorial behavior of light; it describes how the electric field evolves at a point in space. Our work consists in studying scattering media. Different types of biological fluids were selected to study how each evolves as the scattering power of the medium increases, and at the same time to compare them with one another. When light crosses these media, its initial state of polarization is modified and/or degraded. This phenomenon is related to the properties of the medium; the idea is to compare the characteristics of white light entering and leaving the studied medium. The advantage of this model is that it is experimentally accessible through intensity measurements with CCD sensors and allows operation in 2D. This information is used to discriminate some physical properties of the studied areas. We chose four types of milk to study how each evolves as the scattering power of the medium increases.
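As a minimal numerical illustration of the Mueller formalism (not taken from the paper), a 4-component Stokes vector is propagated through a 4x4 Mueller matrix; the isotropic-depolarizer matrix and its strength below are assumed values:

```python
import numpy as np

def dop(S):
    """Degree of polarization of a Stokes vector S = (I, Q, U, V)."""
    return np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]

S_in = np.array([1.0, 1.0, 0.0, 0.0])   # fully polarized (horizontal) input
d = 0.4                                 # assumed depolarization strength
M = np.diag([1.0, d, d, d])             # ideal isotropic depolarizer
S_out = M @ S_in                        # light after crossing the medium
print(dop(S_in), dop(S_out))            # 1.0 -> 0.4
```

Comparing the degree of polarization of the entering and outgoing light is one simple scalar summary of how strongly a scattering medium degrades the polarization state.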

Keywords: light polarization, Mueller matrix, Mueller images, diffusing medium, milk

Procedia PDF Downloads 322
19392 Constant Order Predictor Corrector Method for the Solution of Modeled Problems of First Order IVPs of ODEs

Authors: A. A. James, A. O. Adesanya, M. R. Odekunle, D. G. Yakubu

Abstract:

This paper examines the development of a one-step, five-hybrid-point method for the solution of first order initial value problems. We adopted the method of collocation and interpolation of a power series approximate solution to generate a continuous linear multistep method. The continuous linear multistep method was evaluated at selected grid points to give the discrete linear multistep method. The method was implemented using a constant-order predictor of order seven over an overlapping interval. The basic properties of the derived corrector were investigated and found to be zero stable, consistent, and convergent. The region of absolute stability was also investigated. The method was tested on some numerical experiments and found to compete favorably with existing methods.
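The order-seven hybrid scheme itself is not reproduced here; a minimal predictor-corrector pair (explicit Euler predictor, trapezoidal corrector, i.e. Heun's method) applied to y' = -y, y(0) = 1 illustrates the general predict-then-correct structure:

```python
import math

def f(t, y):          # test problem y' = -y, exact solution exp(-t)
    return -y

t, y, h = 0.0, 1.0, 0.01
for _ in range(100):                              # integrate to t = 1
    y_pred = y + h * f(t, y)                      # predictor step
    y = y + h / 2 * (f(t, y) + f(t + h, y_pred))  # corrector step
    t += h

print(abs(y - math.exp(-1.0)))   # global error, O(h^2)
```

Higher-order predictor-corrector pairs such as the one in the paper follow the same pattern with more past points and higher-order quadrature in the corrector.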

Keywords: interpolation, approximate solution, collocation, differential system, half step, converges, block method, efficiency

Procedia PDF Downloads 323
19391 Motion Planning and Simulation Design of a Redundant Robot for Sheet Metal Bending Processes

Authors: Chih-Jer Lin, Jian-Hong Hou

Abstract:

Industry 4.0 is a vision of integrated industry implemented through artificial intelligence, software, and Internet technologies. Its main goal is to cope with competitive pressures in the marketplace. In today's manufacturing factories, production has shifted from mass production (high quantity with low product variety) to medium-quantity, high-variety production. To offer flexibility, better quality control, and improved productivity, robot manipulators are used to combine material processing, material handling, and part positioning into an integrated manufacturing system. To implement an automated system for sheet metal bending operations, motion planning of a 7-degrees-of-freedom (DOF) robot is studied in this paper. A virtual reality (VR) environment of a bending cell, which consists of the robot and a bending machine, is established using the virtual robot experimentation platform (V-REP) simulator. For sheet metal bending operations, the robot needs only six DOFs for pick-and-place or tracking tasks. Since this 7-DOF robot has more DOFs than required to execute the specified task, it is called a redundant robot, and its kinematic redundancy can be exploited to handle task-priority problems. For redundant robots, the pseudo-inverse of the Jacobian is the most popular motion planning method, but pseudo-inverse methods tend to produce chaotic motion with unpredictable arm configurations as the Jacobian matrix loses rank. To overcome this problem, we formulate the motion planning problem as an optimization problem, and a genetic algorithm (GA) based method is proposed to solve it. Simulation results validate the feasibility of the proposed method for motion planning of the redundant robot in automated sheet-metal bending operations.
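A toy version of the GA idea can be sketched on a planar 3-link arm, which is redundant for a 2-DOF positioning task. The link lengths, target point, and all GA hyperparameters below are invented, and the fitness is simply the end-effector distance to the target rather than the paper's task-priority objective:

```python
import math
import random

random.seed(1)
L = [1.0, 1.0, 1.0]                  # link lengths (assumed)
target = (1.5, 1.2)                  # reachable target point (assumed)

def forward(q):
    """Planar forward kinematics: joint angles -> end-effector (x, y)."""
    x = y = a = 0.0
    for li, qi in zip(L, q):
        a += qi
        x += li * math.cos(a)
        y += li * math.sin(a)
    return x, y

def fitness(q):                      # lower is better
    x, y = forward(q)
    return math.hypot(x - target[0], y - target[1])

# GA: elitist selection, one-point crossover, Gaussian mutation.
pop = [[random.uniform(-math.pi, math.pi) for _ in range(3)]
       for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:20]
    children = []
    while len(children) < 40:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, 3)
        child = a[:cut] + b[cut:]            # crossover
        if random.random() < 0.3:            # mutation
            i = random.randrange(3)
            child[i] += random.gauss(0, 0.1)
        children.append(child)
    pop = elite + children

best = min(pop, key=fitness)
print(fitness(best))                 # near zero: target reached
```

Because the arm is redundant, many joint configurations solve the same task; additional penalty terms (joint limits, obstacle clearance) can simply be added to the fitness, which is the main attraction of GA-based planning over pseudo-inverse schemes.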

Keywords: redundant robot, motion planning, genetic algorithm, obstacle avoidance

Procedia PDF Downloads 134
19390 An Improved Adaptive Dot-Shaped Beamforming Algorithm for Frequency Diverse Arrays

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years whose antenna pattern has a unique angle- and distance-dependent characteristic. However, the beam is required to have strong concentration, high resolution, and a low sidelobe level to form point-to-point interference in the target region. To eliminate the angle-distance coupling of the traditional FDA and to concentrate the beam energy further, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving on the array structure and frequency offset of the traditional FDA. Simulation results show that the array can form a dot-shaped beam with more concentrated energy and improved resolution and sidelobe level. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-snapshot data. When the number of snapshots is limited, the algorithm underestimates the covariance matrix; the resulting estimation error distorts the beam, so that the output pattern cannot form a dot-shaped beam, and main lobe deviation and high sidelobe levels also appear. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace, and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their spread and improving the performance of beamforming. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level, and improves the robustness of beamforming.
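A sketch of the correction step under assumed details (noise-only synthetic snapshots, a 2-source scene, and a correction exponent of 0.5): the small eigenvalues of a snapshot-limited covariance estimate are pulled toward their mean before the matrix is rebuilt:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 10                          # sensors, limited snapshots
X = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))
R = X @ X.conj().T / K                # snapshot-limited covariance estimate

w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
noise = w[:6]                         # assume 2 sources -> 6 noise eigenvalues
alpha = 0.5                           # assumed correction exponent
corrected = noise.mean() * (noise / noise.mean()) ** alpha

# Rebuild the covariance with the corrected noise subspace.
R_new = (V[:, :6] @ np.diag(corrected) @ V[:, :6].conj().T
         + V[:, 6:] @ np.diag(w[6:]) @ V[:, 6:].conj().T)
print(noise.std(), corrected.std())   # spread of noise eigenvalues shrinks
```

Reducing the spread of the noise-subspace eigenvalues is what stabilizes the inverse of the covariance used by the LCMV weights at low snapshot counts.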

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 119
19389 Silver-Doped Magnetite Titanium Oxide Nanoparticles for Photocatalytic Degradation of Organic Pollutants

Authors: Hanna Abbo, Siyasanga Noganta, Salam Titinchi

Abstract:

The global lack of clean water for human sanitation and other purposes has become an emerging dilemma for human beings. The presence of organic pollutants in wastewater produced by textile, leather, and chemical industries is an alarming matter for a safe environment and human health. For decades, conventional methods have been applied for the purification of water, but with growing industrialization these methods fall short. Advanced oxidation processes, with their reliable application to the degradation of many contaminants, have been reported as a potential way to reduce or alleviate this problem. Recently, the incorporation of metal nanoparticles such as magnetite nanoparticles as a photocatalyst for the Fenton reaction has been proposed to improve the degradation efficiency of contaminants. Core/shell nanoparticles are extensively studied because of their wide applications in the biomedical, drug delivery, and electronics fields and in water treatment. The current study is centred on the synthesis of a silver-doped Fe3O4/SiO2/TiO2 photocatalyst. Magnetically separable Fe3O4@SiO2@TiO2 composites with a core-shell structure were synthesized by depositing uniform anatase TiO2 nanoparticles on Fe3O4@SiO2 using titanium butoxide (TBOT) as the titanium source. Silver was then doped onto the SiO2 layer by a hydrothermal method. The integration of magnetic nanoparticles avoids the post-separation difficulties associated with the powder form of the TiO2 catalyst and increases the surface area and adsorption properties. The morphology, structure, composition, and magnetism of the resulting composites were characterized, and their photocatalytic activities were evaluated. The results demonstrate that TiO2 nanoparticles were uniformly deposited on the Fe3O4@SiO2 surface, and the silver nanoparticles were likewise uniformly distributed on the surface of the TiO2 nanoparticles.
The aim of this work is to study the suitability of photocatalysis for the treatment of aqueous streams containing organic pollutants such as methylene blue, which was selected as a model compound representing pollutants found in wastewater. The effects of initial pollutant concentration, photocatalyst dose, and wastewater matrix on the photocatalytic degradation of the model pollutant were studied using the as-synthesized catalysts and compared with commercial titanium dioxide (Aeroxide P25). Photocatalysis was found to be a potential purification method for the studied pollutant, also in an industrial wastewater matrix, with removal percentages of over 81% within 15 minutes. Methylene blue was removed most efficiently, and its removal consumed the least energy in terms of the specific applied energy. The magnetic Ag/SiO2/TiO2 composites show high photocatalytic performance and can be recycled three times by magnetic separation without major loss of activity, meaning they can serve as an efficient and conveniently reusable photocatalyst.
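Assuming pseudo-first-order kinetics (a common model for photocatalytic dye degradation, not stated in the abstract), the reported 81% removal within 15 minutes corresponds to an apparent rate constant:

```python
import math

# C/C0 = exp(-k t): back out the apparent rate constant from the
# reported 81% removal of methylene blue within 15 minutes.
removal = 0.81
t = 15.0                                  # minutes
k = -math.log(1.0 - removal) / t          # apparent rate constant, 1/min
half_life = math.log(2.0) / k             # minutes to 50% removal
print(round(k, 3), round(half_life, 1))
```

Fitting k across several initial concentrations and catalyst doses is the usual way to quantify the parameter effects the abstract describes.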

Keywords: magnetite nanoparticles, titanium, photocatalyst, organic pollutant, water treatment

Procedia PDF Downloads 253
19388 Effect of Epoxy-ZrP Nanocomposite Top Coating on Inorganic Barrier Layer

Authors: Haesook Kim, Ha Na Ra, Mansu Kim, Hyun Gi Kim, Sung Soo Kim

Abstract:

Epoxy-ZrP (α-zirconium phosphate) nanocomposites were coated on inorganic barrier layers deposited by sputtering and atomic layer deposition (ALD) to improve the barrier properties and protect the layer. ZrP nanoplatelets were synthesized using a reflux method and exfoliated in the polymer matrix. The barrier properties of the coating layer were characterized by measuring the water vapor transmission rate (WVTR). The WVTR decreased dramatically after the epoxy-ZrP nanocomposite coating was applied, while the optical properties were maintained. The effect of the epoxy-ZrP coating on the inorganic layer was also investigated after bending and reliability tests. The optimal structure, composed of inorganic and epoxy-ZrP nanocomposite layers, was used in organic light emitting diode (OLED) encapsulation.

Keywords: α-zirconium phosphate, barrier properties, epoxy nanocomposites, OLED encapsulation

Procedia PDF Downloads 349
19387 Parametric Influence and Optimization of Wire-EDM on Oil Hardened Non-Shrinking Steel

Authors: Nixon Kuruvila, H. V. Ravindra

Abstract:

Wire-cut Electro Discharge Machining (WEDM) is a special form of the conventional EDM process in which the electrode is a continuously moving conductive wire. The present study aims at determining the parametric influence and optimum process parameters of Wire-EDM using Taguchi's technique and a genetic algorithm. The variation of the performance parameters with machining parameters was mathematically modeled by regression analysis. The objective functions are Dimensional Accuracy (DA) and Material Removal Rate (MRR). Experiments were designed as per Taguchi's L16 Orthogonal Array (OA), in which pulse-on duration, pulse-off duration, current, bed speed, and flushing rate were considered the important input parameters. The matrix experiments were conducted on Oil Hardened Non-Shrinking Steel (OHNS) with a thickness of 40 mm. The results of the study reveal that, among the machining parameters, it is preferable to use a lower pulse-off duration to achieve good overall performance. Regarding MRR, OHNS should be eroded with a medium pulse-off duration and a higher flush rate. Finally, a validation exercise was performed with the optimum levels of the process parameters. The results confirm the efficiency of the approach employed for optimization of process parameters in this study.
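A toy sketch of the Taguchi main-effects step: averaging the response at each level of a factor across an orthogonal array indicates the preferred setting. A tiny two-factor L4 array with invented responses is used here, not the paper's five-factor L16 data:

```python
import numpy as np

L4 = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])    # factor level settings
mrr = np.array([2.1, 2.4, 3.0, 3.3])               # invented MRR responses

# Mean response at each level of each factor (the "main effects").
level_means = {
    f: [mrr[L4[:, f] == lvl].mean() for lvl in (0, 1)]
    for f in range(L4.shape[1])
}
best = {f: int(np.argmax(m)) for f, m in level_means.items()}
print(level_means, best)   # pick the level maximizing MRR per factor
```

With a larger-the-better response such as MRR, the level with the higher mean (or higher S/N ratio in a full Taguchi analysis) is selected per factor.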

Keywords: dimensional accuracy (DA), regression analysis (RA), Taguchi method (TM), volumetric material removal rate (VMRR)

Procedia PDF Downloads 397
19386 The Effects of Alkalization on the Mechanical Properties of PLA Biocomposite Reinforced with Ijuk Fibers

Authors: Mochamad Chalid, Imam Prabowo

Abstract:

Pollution due to non-degradable materials such as plastics has led to studies on the development of environmentally friendly materials. Being biodegradable and obtained from natural sources, polylactic acid (PLA) and ijuk fiber are attractive candidates for combination into a composite, a material also expected to reduce the impact of environmental pollution. Surface modification of the ijuk fiber through alkalization with 0.25 M NaOH solution for 30 minutes was aimed at enhancing its compatibility with PLA, in order to improve composite properties such as the mechanical properties. Alkalization removes some surface components of the ijuk fibers, such as lignin, wax, and hemicellulose, so that pores appear clearly on the surface and the density and diameter of the fibers decrease. This change in fiber properties increases the mechanical properties of the ijuk-fiber-reinforced PLA composite by strengthening the mechanical interlocking with the PLA matrix. In addition, to enhance the distribution of the fibers in the PLA matrix, stirring during DCM solvent evaporation from the mixture of ijuk fibers and dissolved PLA can reduce the amount of trapped voids and fiber pull-out phenomena, which would otherwise decrease the mechanical properties of the composite.

Keywords: polylactic acid, Arenga pinnata, alkalization, compatibility, adhesion, morphology, mechanical properties, volume fraction, distribution

Procedia PDF Downloads 361
19385 Development of 3D Particle Method for Calculating Large Deformation of Soils

Authors: Sung-Sik Park, Han Chang, Kyung-Hun Chae, Sae-Byeok Lee

Abstract:

In this study, a grid-free three-dimensional (3D) particle method was developed for analyzing large deformation of soils, instead of using the ordinary finite element method (FEM) or finite difference method (FDM). In the 3D particle method, the governing equations were discretized by various particle interaction models corresponding to differential operators such as gradient, divergence, and Laplacian. The Mohr-Coulomb failure criterion was incorporated into the 3D particle method to determine soil failure. The yielding and hardening behavior of soil before failure was also considered by varying the viscosity of the soil. First, an unconfined compression test was carried out, and the large deformation following soil yielding or failure was simulated by the developed 3D particle method. The results were also compared with those of the commercial FEM software PLAXIS 3D. The developed 3D particle method was able to simulate the 3D large deformation of soils due to soil yielding and to calculate the variation of normal and shear stresses following clay deformation.
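The Mohr-Coulomb check applied per material point can be sketched as follows; the cohesion and friction angle below are illustrative values, not the paper's soil parameters:

```python
import math

def mohr_coulomb_fails(sigma_n, tau, c=10.0, phi_deg=30.0):
    """Failure when shear stress exceeds the envelope c + sigma_n * tan(phi).

    sigma_n: normal stress (compression positive), tau: shear stress,
    c: cohesion, phi_deg: friction angle in degrees (illustrative values).
    """
    return abs(tau) > c + sigma_n * math.tan(math.radians(phi_deg))

print(mohr_coulomb_fails(50.0, 20.0))   # below envelope -> False
print(mohr_coulomb_fails(50.0, 45.0))   # above envelope -> True
```

In a particle simulation this test is evaluated at every particle each step; particles on the envelope switch to the softened/failed constitutive response.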

Keywords: particle method, large deformation, soil column, confined compressive stress

Procedia PDF Downloads 564
19384 Effect of Modulation Factors on Tomotherapy Plans and Their Quality Analysis

Authors: Asawari Alok Pawaskar

Abstract:

This study investigated the discrepancies observed during quality assurance (QA) of helical tomotherapy plans performed with the IBA Matrix. A selection of tomotherapy plans that initially failed the Matrix QA process was chosen for this investigation. These plans failed the fluence analysis as assessed using gamma criteria (3%, 3 mm). Each of these plans was modified (keeping the planning constraints the same), and the beamlets were rebatched and reoptimized. By increasing and decreasing the modulation factor, the fluence in a circumferential plane, measured with a diode array, was assessed. A subset of these plans was investigated using varied pitch values. The factors examined for each plan were point doses, fluences, leaf opening times, planned leaf sinograms, and uniformity indices. To ensure that the treatment constraints remained the same, the dose-volume histograms (DVHs) of all the modulated plans were compared to the original plan. It was observed that a large increase in the modulation factor did not significantly improve DVH uniformity but reduced the gamma analysis pass rate. It also increased the treatment delivery time by slowing down the gantry rotation, which in turn increases the ratio of maximum to mean non-zero leaf open time. Increasing and decreasing the pitch value did not substantially change treatment time, but the delivery accuracy was adversely affected. This may be due to many other factors, such as the complexity of the treatment plan and site. Patient sites included in this study were head and neck, breast, and abdomen. The impact of leaf timing inaccuracies on plans was greater at higher modulation factors. Point-dose measurements were less susceptible to changes in pitch and modulation factors. Choosing the initial modulation factor used by the optimizer such that the TPS-generated 'actual' modulation factor fell within the range of 1.4 to 2.5 resulted in an improved deliverable plan.
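A 1-D sketch of the gamma criterion (3%, 3 mm) behind the fluence pass/fail decisions; clinical gamma analysis is 2-D or 3-D, and the dose profiles below are synthetic:

```python
import numpy as np

def gamma_1d(x, d_ref, d_meas, dd=0.03, dta=3.0):
    """1-D gamma index: dd = dose criterion (fraction of max), dta in mm."""
    d_max = d_ref.max()
    g = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        # Distance in the combined dose/space metric to every measured point.
        G = np.sqrt(((x - xi) / dta) ** 2
                    + ((d_meas - di) / (dd * d_max)) ** 2)
        g[i] = G.min()
    return g

x = np.linspace(0, 100, 201)                   # positions in mm
ref = np.exp(-((x - 50) / 15) ** 2)            # reference dose profile
meas = np.exp(-((x - 50.5) / 15) ** 2)         # measurement shifted 0.5 mm
g = gamma_1d(x, ref, meas)
print((g <= 1).mean())                         # pass rate
```

A 0.5 mm shift is well inside the 3 mm distance-to-agreement, so every point passes; the pass rates the study reports come from the same test applied to the planned and diode-array-measured fluences.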

Keywords: dose volume histogram, modulation factor, IBA matrix, tomotherapy

Procedia PDF Downloads 166
19383 Simulation Study on Effects of Surfactant Properties on Surfactant Enhanced Oil Recovery from Fractured Reservoirs

Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsaeter

Abstract:

One objective of this work is to analyze the effects of surfactant properties (viscosity, concentration, and adsorption) on surfactant enhanced oil recovery at laboratory scale. The other objective is to obtain the functional relationships between surfactant properties and the ultimate oil recovery and oil recovery rate. A core is cut into two parts through the middle to imitate a matrix with a horizontal fracture. An injector and a producer are placed at the left and right sides of the fracture, respectively. The middle slice of the core is used as the model in this paper; its size is 4 cm x 0.1 cm x 4.1 cm, and the aperture of the fracture in the middle is 0.1 cm. The original properties of the matrix, brine, and oil in the base case are from the Ekofisk Field, and the surfactant properties are from the literature. Eclipse is used as the simulator. The results are as follows: 1) The viscosity of the surfactant solution has a positive linear relationship with surfactant oil recovery time, and the relationship between viscosity and oil production rate is an inverse function. The viscosity of the surfactant solution has no obvious effect on ultimate oil recovery. Since most surfactants have little effect on the viscosity of brine, the viscosity of the surfactant solution is not a key parameter in surfactant screening for flooding in fractured reservoirs. 2) Increasing the surfactant concentration results in a decrease of the oil recovery rate and an increase of the ultimate oil recovery; however, no simple function describes these relationships. An economic study should be conducted given the prices of surfactant and oil. 3) In the study of surfactant adsorption, it is assumed that the matrix wettability is changed to water-wet when surfactant adsorption reaches its maximum in all cases, and the ratio of surfactant adsorption to surfactant concentration (Cads/Csurf) is used to estimate the functional relationship. The results show that the relationship between ultimate oil recovery and Cads/Csurf is a logarithmic function, while the oil production rate has a positive linear relationship with exp(Cads/Csurf). This work can serve as a reference for surfactant screening in surfactant enhanced oil recovery from fractured reservoirs, and the functional relationships between surfactant properties and the oil recovery rate and ultimate oil recovery help to improve upscaling methods.
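The reported logarithmic relationship can be extracted by a least-squares fit of recovery against ln(Cads/Csurf); the data points below are made up for illustration, not the paper's simulation output:

```python
import numpy as np

ratio = np.array([0.1, 0.2, 0.4, 0.8, 1.6])          # Cads/Csurf (invented)
recovery = np.array([0.22, 0.28, 0.35, 0.41, 0.48])  # ultimate recovery (invented)

# Fit recovery ~ a * ln(ratio) + b.
a, b = np.polyfit(np.log(ratio), recovery, 1)
pred = a * np.log(ratio) + b
print(round(a, 3), round(b, 3), np.abs(pred - recovery).max())
```

The same approach with x = exp(Cads/Csurf) and a degree-1 fit would recover the linear relationship reported for the oil production rate.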

Keywords: fractured reservoirs, surfactant adsorption, surfactant concentration, surfactant EOR, surfactant viscosity

Procedia PDF Downloads 162
19382 The Implementation of the Secton Method for Finding the Root of an Interpolation Function

Authors: Nur Rokhman

Abstract:

A mathematical function expresses the relationship between the variables composing it. Interpolation can be viewed as the process of finding a mathematical function that passes through some specified points. There are many interpolation methods, namely the Lagrange method, the Newton method, the spline method, etc. Under some conditions, such as a large number of interpolation points, the interpolation function cannot be written explicitly; such a function consists of computational steps. Solving an equation involving the interpolation function is then a nonlinear root-finding problem. Newton's method will not work on the interpolation function, since the derivative of the interpolation function cannot be written explicitly. This paper shows the use of the Secton method to determine the numerical solution of an equation involving the interpolation function. The experiment shows that the Secton method works better than Newton's method in finding the root of a Lagrange interpolation function.
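A sketch of the approach under illustrative assumptions (nodes sampled from x² - 2, so the interpolant's root is √2): a secant-type iteration needs only function evaluations, so it applies directly to a Lagrange interpolant whose derivative is not available in closed form:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolant through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Illustrative nodes sampled from f(x) = x^2 - 2, whose root is sqrt(2).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x - 2.0 for x in xs]

def secant(f, x0, x1, tol=1e-10, maxit=50):
    """Derivative-free secant iteration for f(x) = 0."""
    for _ in range(maxit):
        fx0, fx1 = f(x0), f(x1)
        x0, x1 = x1, x1 - fx1 * (x1 - x0) / (fx1 - fx0)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: lagrange(xs, ys, x), 1.0, 2.0)
print(root)   # ~1.41421356 (sqrt(2))
```

Newton's method would need the interpolant's derivative at each iterate; the secant scheme replaces it with a finite difference of two function values, which is exactly why it suits interpolants defined only as computational steps.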

Keywords: Secton method, interpolation, nonlinear function, numerical solution

Procedia PDF Downloads 367
19381 Electromagnetic Modeling of a MESFET Transistor Using the Method of Moments Combined with the Generalised Equivalent Circuit Method

Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili

Abstract:

The demands of communications and radar systems are giving rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the combination of free-space power with beam scanning without a phase shifter. Modeling an active integrated antenna requires coupling the electromagnetic model with the transport model, a coupling that becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. This work focuses on modeling the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments is among the most common and powerful numerical techniques for resolving electromagnetic problems; within this class of techniques, MOM is the dominant one for solving Maxwell's and transport integral equations for an active integrated antenna. Here, the equivalent circuit is introduced to develop an integral-method formulation based on transposing the field problem into a generalised equivalent circuit that is simpler to treat. The Generalised Equivalent Circuit method (MGEC) was suggested in order to represent integral-equation circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structure, describing the discontinuity and its environment. The aim of our method is to investigate antenna parameters such as the input impedance, the current density distribution, and the electric field distribution. In this work, we propose a global EM model of the MESFET GaAs transistor using an integral method. We begin by describing the modeling structure, which allows us to define an equivalent EM scheme translating the electromagnetic equations considered. Secondly, projecting these equations on common-type test functions leads to a linear matrix equation whose unknowns are the amplitudes of the current density. Solving this equation provides the input impedance, the distribution of the current density, and the electric field distribution. From the electromagnetic calculations, we were able to present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study mapping out the variation of the current as evaluated by the MOM-GEC. The essential improvement of our method is the reduction of computing time and memory requirements while providing a sufficiently global model of the MESFET transistor.
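The projection step described above always ends in a linear matrix equation Z a = b. As a generic, hedged illustration of that moment-method pattern (the classic electrostatic thin-wire problem with pulse basis functions, point matching, and an approximate self-term, not the MESFET formulation):

```python
import numpy as np

eps0 = 8.854e-12                      # permittivity of free space, F/m
L_w, a_w, N = 1.0, 1e-3, 40           # wire length (m), radius (m), segments
h = L_w / N
x = (np.arange(N) + 0.5) * h          # segment centers (match points)

# Fill the impedance-like matrix Z: potential at match point m due to
# a unit pulse of charge on segment n (approximate log self-term on the
# diagonal, point-charge approximation off the diagonal).
Z = np.empty((N, N))
for m in range(N):
    for n in range(N):
        if m == n:
            Z[m, n] = 2 * np.log(h / a_w) / (4 * np.pi * eps0)
        else:
            Z[m, n] = h / (4 * np.pi * eps0 * abs(x[m] - x[n]))

V = np.ones(N)                        # excitation: 1 V on every match point
q = np.linalg.solve(Z, V)             # unknown amplitudes: charge density, C/m
Q = q.sum() * h                       # total charge ~ capacitance x 1 V
print(Q)                              # of order 1e-11 F for this geometry
```

Whatever the basis (charge pulses here, current density test functions in the MOM-GEC formulation), the workflow is identical: fill Z from the Green's function, fill b from the excitation, solve, then post-process the amplitudes into impedance or field quantities.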

Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method

Procedia PDF Downloads 191
19380 Characterization of A390 Aluminum Alloy Produced at Different Slow Shot Speeds Using Assisted Vacuum High-Pressure Die Casting

Authors: Wenbo Yu, Zihao Yuan, Zhipeng Guo, Shoumei Xiong

Abstract:

Plate-shaped specimens of hypereutectic A390 aluminum alloy were produced at different slow shot speeds in the vacuum-assisted high-pressure die casting (VHPDC) process. According to the results, the vacuum pressure inside the die cavity increased linearly with increasing slow shot speed at the beginning of mold filling. Meanwhile, it was found that the tensile properties of the vacuum die castings were deteriorated by the porosity content. In addition, the average primary Si size varies between 14 µm and 23 µm and follows a binary functional relationship with the slow shot speed. Thanks to the vacuum effect, the castings could be given a T6 heat treatment. After heat treatment, the microstructural morphologies revealed that needle-shaped and thin-flaked eutectic Si particles became rounded, while Al2Cu dissolved into the α-Al matrix. In the in-situ tensile test of the as-received sample, microcracks initiated at the primary Si particles and propagated through the Al matrix in a transgranular fracture mode. In contrast, for the heat-treated sample, cracks initiated at the Al2Cu particles and propagated along Al grain boundaries in an intergranular fracture mode. In the in-situ three-point bending test, microcracks first formed in the primary Si particles in both samples. Subsequently, in the as-received sample the cracks between primary Si particles linked along Al grain boundaries, whereas in the treated sample they linked through the Al matrix. Furthermore, the fractography revealed that the fracture mechanism evolved from brittle transgranular fracture to a fracture mode with many dimples after heat treatment.

Keywords: A390 aluminum, vacuum assisted high pressure die casting, heat treatment, mechanical properties

Procedia PDF Downloads 236
19379 Staphylococcus argenteus: An Emerging Subclinical Bovine Mastitis Pathogen in Thailand

Authors: Natapol Pumipuntu

Abstract:

Staphylococcus argenteus is an emerging species of the S. aureus complex. It is generally misidentified as S. aureus by standard techniques. S. argenteus is possibly emerging in both humans and animals, with increasing worldwide distribution. The objective of this study was to differentiate and identify S. argenteus among isolates collected from milk samples of subclinical bovine mastitis cases in Maha Sarakham province, Northeastern Thailand. Twenty-one isolates of S. aureus, confirmed by conventional methods and an immuno-agglutination method, were analyzed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) and multilocus sequence typing (MLST). The results from MALDI-TOF MS and MLST showed that 6 of the 42 isolates were confirmed as S. argenteus and 36 were S. aureus. This study indicates that identification using MALDI-TOF MS and MLST can accurately differentiate the emerging species S. argenteus from the frequently misdiagnosed S. aureus complex. In addition, identification of S. argenteus appears to be very limited despite the fact that it may be an important causative pathogen in bovine mastitis as well as a pathogenic bacterium in food and milk. Therefore, it is very necessary for both bovine medicine and veterinary public health to recognize this bacterial pathogen as an emerging staphylococcal disease agent; further study of S. argenteus infection is needed.

Keywords: Staphylococcus argenteus, subclinical bovine mastitis, Staphylococcus aureus complex, mass spectrometry, MLST

Procedia PDF Downloads 139
19378 Ductility Spectrum Method for the Design and Verification of Structures

Authors: B. Chikh, L. Moussa, H. Bechtoula, Y. Mehani, A. Zerzour

Abstract:

This study presents a new method for the evaluation and design of structures, developed and illustrated by comparison with the capacity spectrum method (CSM, ATC-40). The method uses inelastic spectra and gives peak responses consistent with those obtained from nonlinear time history analysis. The seismic demand assessment method is called the Ductility Spectrum Method (DSM) in this paper. It is used to estimate the seismic deformation of Single-Degree-Of-Freedom (SDOF) systems based on the Ductility Demand Response Spectrum (DDRS) developed by the author.
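As a minimal illustration of the computation behind any response or ductility spectrum (the paper's inelastic DDRS is not reproduced), the peak displacement of a linear-elastic SDOF oscillator can be obtained with the Newmark average-acceleration scheme; the step "ground motion" below is synthetic:

```python
import numpy as np

def peak_sdof(period, ag, dt, zeta=0.05):
    """Peak relative displacement of a linear SDOF under ground accel ag."""
    w = 2.0 * np.pi / period
    m, c, k = 1.0, 2.0 * zeta * w, w * w
    u = v = 0.0
    acc = (-ag[0] - c * v - k * u) / m       # initial acceleration
    peak = 0.0
    for a_g in ag[1:]:
        p = -a_g                             # effective force per unit mass
        # Newmark average-acceleration (trapezoidal) step.
        k_eff = k + 4.0 * m / dt**2 + 2.0 * c / dt
        p_eff = (p + m * (4.0 * u / dt**2 + 4.0 * v / dt + acc)
                 + c * (2.0 * u / dt + v))
        u_new = p_eff / k_eff
        v_new = 2.0 * (u_new - u) / dt - v
        acc = 4.0 * (u_new - u) / dt**2 - 4.0 * v / dt - acc
        u, v = u_new, v_new
        peak = max(peak, abs(u))
    return peak

dt = 0.01
ag = np.ones(500)                    # synthetic 5 s step record, 1 m/s^2
peak = peak_sdof(1.0, ag, dt)        # T = 1 s, 5% damping
print(peak)                          # ~1.85 x static deflection 1/w^2
```

Repeating this over a grid of periods yields an elastic spectrum; the DSM replaces the linear restoring force with an elastoplastic one and tabulates constant-ductility ordinates instead.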

Keywords: seismic demand, capacity, inelastic spectra, design and structure

Procedia PDF Downloads 387